url (string, 62-66) | repository_url (string, 1 class) | labels_url (string, 76-80) | comments_url (string, 71-75) | events_url (string, 69-73) | html_url (string, 50-56) | id (int64, 377M-2.15B) | node_id (string, 18-32) | number (int64, 1-29.2k) | title (string, 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k, nullable) | reactions (dict) | timeline_url (string, 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/8725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8725/comments | https://api.github.com/repos/huggingface/transformers/issues/8725/events | https://github.com/huggingface/transformers/issues/8725 | 748,654,701 | MDU6SXNzdWU3NDg2NTQ3MDE= | 8,725 | Longformer inference speed is slower than bert of the same length | {
"login": "chenlin038",
"id": 38657070,
"node_id": "MDQ6VXNlcjM4NjU3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/38657070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenlin038",
"html_url": "https://github.com/chenlin038",
"followers_url": "https://api.github.com/users/chenlin038/followers",
"following_url": "https://api.github.com/users/chenlin038/following{/other_user}",
"gists_url": "https://api.github.com/users/chenlin038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenlin038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenlin038/subscriptions",
"organizations_url": "https://api.github.com/users/chenlin038/orgs",
"repos_url": "https://api.github.com/users/chenlin038/repos",
"events_url": "https://api.github.com/users/chenlin038/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenlin038/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hey @chenlin038,\r\n\r\nCan you copy paste the configs of `bert-base-1536` and `long-bert-1536` below? This way I can see exactly which configs you are using. Otherwise, I also can only refer to the answers in https://github.com/allenai/longformer/issues/106 .",
"> Hey @chenlin038,\r\n> \r\n> Can you copy paste the configs of `bert-base-1536` and `long-bert-1536` below? This way I can see exactly which configs you are using. Otherwise, I also can only refer to the answers in [allenai/longformer#106](https://github.com/allenai/longformer/issues/106) .\r\n\r\nSorry for the late reply!This is the config of ### bert-base-1536:\r\n{\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"directionality\": \"bidi\",\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 1536,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"pooler_fc_size\": 768,\r\n \"pooler_num_attention_heads\": 12,\r\n \"pooler_num_fc_layers\": 3,\r\n \"pooler_size_per_head\": 128,\r\n \"pooler_type\": \"first_token_transform\",\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 21128\r\n}\r\n\r\nThe following is the config of ### long-bert-1536:\r\n{\r\n \"architectures\": [\r\n \"BertLongForSequenceClassification\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attention_window\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"directionality\": \"bidi\",\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 1536,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"pooler_fc_size\": 768,\r\n \"pooler_num_attention_heads\": 12,\r\n \"pooler_num_fc_layers\": 3,\r\n \"pooler_size_per_head\": 128,\r\n \"pooler_type\": \"first_token_transform\",\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 21128\r\n}\r\n",
"Ok, yeah I'm not very surprised that `bert-base-1536` is faster in your case. Longformer should mainly be used to prevent out-of-memory problems for long sequences. To do so it uses a complex attention mechanism than BERT which makes it a bit slower (especially for shorter sequences). So, in your case I would expect the Longformer model to use less memory than the BERT model, but not necessarily to be faster. If `bert-base-1536` fits in memory, then I think it's a good idea to use the model",
"> Ok, yeah I'm not very surprised that `bert-base-1536` is faster in your case. Longformer should mainly be used to prevent out-of-memory problems for long sequences. To do so it uses a complex attention mechanism than BERT which makes it a bit slower (especially for shorter sequences). So, in your case I would expect the Longformer model to use less memory than the BERT model, but not necessarily to be faster. If `bert-base-1536` fits in memory, then I think it's a good idea to use the model\r\n\r\nDoes the longformer use less memory only have an effect on the GPU, or even specific GPU types, such as Nvidia Ampere? If I use CPU as a inference device, can I save more memory? Is there currently any optimization for the storage or calculation of sparse matrices by the CPU?",
"You should definitely see an improvement in CPU memory usage when using longformer! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi, It seems I have a similar issue. However, the time difference between Longformer and Roberta is nearly a factor 10. Does it seem normal? After checking it is in the longformer.encoder step that this is quite slow. Here is the config I use (I have similar factor whatever the config):\r\nLongFormer_TOY_MODEL_HPARAMS = {\r\n \"vocab_size\": len(LongFormer_VOCAB),\r\n \"hidden_size\": 64,\r\n \"num_hidden_layers\": 3,\r\n \"num_attention_heads\": 8,\r\n \"intermediate_size\": 32,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"max_position_embeddings\": 512\r\n + 2, # tokenizer's model_max_length + 2 (<s> / </s> tokens of sequence)\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_eps\": 1e-12,\r\n \"attention_window\": 512\r\n}\r\n\r\n\r\n\r\nThanks!"
] | 1,606 | 1,637 | 1,619 | NONE | null | I used bert-base as the base model and retrained my long-bert with a sequence length of 1536. Then I compared its inference speed with the original bert-base-1536. After a lot of testing, I found that long-bert-1536 and bert-base-1536 are basically the same in inference speed. I saw a similar problem [https://github.com/allenai/longformer/issues/106], but the length of my test data is always greater than 1000 tokens. I think window attention should be faster than self-attention because it requires less computation, so why does this happen? Here are some settings:
attention windows (each layer is the same): 512
Global attention: only used for cls token
Inference device: cpu
task: text classification
By the way, does the size of the attention window affect the speed of inference? I tested different window sizes, but the speed is basically the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8724/comments | https://api.github.com/repos/huggingface/transformers/issues/8724/events | https://github.com/huggingface/transformers/issues/8724 | 748,622,478 | MDU6SXNzdWU3NDg2MjI0Nzg= | 8,724 | @ehsan-soe I fixed the problem by truncating incomplete batches. So if there are 2001 examples and my batch size = 2, then I truncate the last example and train on the first 2000. This has fixed it for me both with and without distributed. My load_and_cache function now looks like this | {
"login": "ChaooMa",
"id": 11719780,
"node_id": "MDQ6VXNlcjExNzE5Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/11719780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChaooMa",
"html_url": "https://github.com/ChaooMa",
"followers_url": "https://api.github.com/users/ChaooMa/followers",
"following_url": "https://api.github.com/users/ChaooMa/following{/other_user}",
"gists_url": "https://api.github.com/users/ChaooMa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChaooMa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChaooMa/subscriptions",
"organizations_url": "https://api.github.com/users/ChaooMa/orgs",
"repos_url": "https://api.github.com/users/ChaooMa/repos",
"events_url": "https://api.github.com/users/ChaooMa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChaooMa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | @ehsan-soe I fixed the problem by truncating incomplete batches. So if there are 2001 examples and my batch size = 2, then I truncate the last example and train on the first 2000. This has fixed it for me both with and without distributed. My load_and_cache function now looks like this
```
def load_and_cache_examples(args, tokenizer, evaluate=False, fpath=None):
if fpath:
dataset = TextDataset(tokenizer, args, fpath)
else:
dataset = TextDataset(tokenizer, args, args.eval_data_path if evaluate else args.train_data_path)
# Ignore incomplete batches
# If you don't do this, you'll get an error at the end of training
n = len(dataset) % args.per_gpu_train_batch_size
if n != 0:
dataset.examples = dataset.examples[:-n]
return dataset
```
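As a side note (not part of the quoted comment): assuming a standard PyTorch `DataLoader`, a similar effect can be obtained by dropping the last incomplete batch at loading time instead of slicing the dataset, for example:
```python
from torch.utils.data import DataLoader

# Illustrative only: `dataset` and `args` are the same objects as in the snippet above.
# drop_last=True discards the final incomplete batch rather than trimming dataset.examples.
train_dataloader = DataLoader(
    dataset,
    batch_size=args.per_gpu_train_batch_size,
    drop_last=True,
)
```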
_Originally posted by @isabelcachola in https://github.com/huggingface/transformers/issues/1220#issuecomment-557237248_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8723/comments | https://api.github.com/repos/huggingface/transformers/issues/8723/events | https://github.com/huggingface/transformers/issues/8723 | 748,554,020 | MDU6SXNzdWU3NDg1NTQwMjA= | 8,723 | Model conversion from PyTorch to TF2 doesn't work properly for XLM-Roberta | {
"login": "QixinLi",
"id": 25460447,
"node_id": "MDQ6VXNlcjI1NDYwNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25460447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QixinLi",
"html_url": "https://github.com/QixinLi",
"followers_url": "https://api.github.com/users/QixinLi/followers",
"following_url": "https://api.github.com/users/QixinLi/following{/other_user}",
"gists_url": "https://api.github.com/users/QixinLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QixinLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QixinLi/subscriptions",
"organizations_url": "https://api.github.com/users/QixinLi/orgs",
"repos_url": "https://api.github.com/users/QixinLi/repos",
"events_url": "https://api.github.com/users/QixinLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/QixinLi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you provide the commands you used to launch the script, and where you obtained the file from? Thanks.",
"> Hello! Could you provide the commands you used to launch the script, and where you obtained the file from? Thanks.\r\n\r\nI found that I made a mistake when saving the pretrained model, after fix this bug, the script converts the pytorch model correctly. \r\nThanks for your time!",
"@QixinLi , how did you converti this one? I am having similar type of problem while converting xlmroberta to tf. \r\n\r\nmy code: https://colab.research.google.com/drive/17mOz39gXNHjeGN9tT4oJMBLvSWkgKlDN?usp=sharing\r\n\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-6-37732b1a66b9> in <module>()\r\n 2 '/content/drive/MyDrive/Colab_Notebooks/models/pytorch_xlmr/pytorch_model.bin',\r\n 3 '/content/drive/MyDrive/Colab_Notebooks/models/pytorch_xlmr/config.json',\r\n----> 4 '/content/drive/MyDrive/Colab_Notebooks/models/pytorch_xlmr')\r\n\r\n2 frames\r\n/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)\r\n 179 continue\r\n 180 \r\n--> 181 raise AttributeError(f\"{name} not found in PyTorch model\")\r\n 182 \r\n 183 array = pt_state_dict[name].numpy()\r\n\r\nAttributeError: lm_head.bias not found in PyTorch model\r\n```\r\n"
] | 1,606 | 1,626 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: MacOS
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
@LysandreJik
## Errors
```
Loading PyTorch weights from pytorch_model.bin
PyTorch checkpoint contains 470,547,238 parameters
Loaded 278,295,186 parameters in the TF 2.0 model.
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFXLMRobertaForMaskedLM: ['lm_head.decoder.bias', 'roberta.embeddings.position_ids', 'lm_head.decoder.weight']
- This IS expected if you are initializing TFXLMRobertaForMaskedLM from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFXLMRobertaForMaskedLM from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFXLMRobertaForMaskedLM were initialized from the PyTorch model.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFXLMRobertaForMaskedLM for predictions without further training.
All model checkpoint weights were used when initializing XLMRobertaForMaskedLM.
All the weights of XLMRobertaForMaskedLM were initialized from the model checkpoint at None.
If your task is similar to the task the model of the checkpoint was trained on, you can already use XLMRobertaForMaskedLM for predictions without further training.
Traceback (most recent call last):
File "convert_pytorch_checkpoint_to_tf2.py", line 432, in <module>
use_cached_models=args.use_cached_models)
File "convert_pytorch_checkpoint_to_tf2.py", line 297, in convert_pt_checkpoint_to_tf
assert diff <= 2e-2, "Error, model absolute difference is >2e-2: {}".format(diff)
AssertionError: Error, model absolute difference is >2e-2: 1.0000114440917969
Max absolute difference between models outputs 1.0000114440917969
```
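For reference, here is a minimal sketch of loading the same checkpoint through the high-level API instead of the conversion script (the local paths below are placeholders for wherever `pytorch_model.bin` and `config.json` live):
```python
from transformers import TFXLMRobertaForMaskedLM

# Hypothetical local directory containing config.json and pytorch_model.bin.
pt_dir = "./my-xlmr-checkpoint"

# from_pt=True loads the PyTorch weights directly into the TF2 class;
# save_pretrained then writes them back out as a TF checkpoint.
tf_model = TFXLMRobertaForMaskedLM.from_pretrained(pt_dir, from_pt=True)
tf_model.save_pretrained("./my-xlmr-checkpoint-tf")
```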
Some weights were not initialized correctly from the PyTorch model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8723/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8722/comments | https://api.github.com/repos/huggingface/transformers/issues/8722/events | https://github.com/huggingface/transformers/issues/8722 | 748,515,884 | MDU6SXNzdWU3NDg1MTU4ODQ= | 8,722 | a bug in generation_beam_search.py | {
"login": "ZhaoQianfeng",
"id": 53401404,
"node_id": "MDQ6VXNlcjUzNDAxNDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/53401404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaoQianfeng",
"html_url": "https://github.com/ZhaoQianfeng",
"followers_url": "https://api.github.com/users/ZhaoQianfeng/followers",
"following_url": "https://api.github.com/users/ZhaoQianfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaoQianfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaoQianfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaoQianfeng/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaoQianfeng/orgs",
"repos_url": "https://api.github.com/users/ZhaoQianfeng/repos",
"events_url": "https://api.github.com/users/ZhaoQianfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaoQianfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @ZhaoQianfeng I think I know what you mean! So in short you are saying that `beam_hyp.add(...)` should be behaving differently depending on whether it finished with or without EOS token right? \r\n\r\nStill not sure whether this is a real problem though... -> Could you maybe open a PR that quickly shows the changes you would want beam search to have and we can take a look at some code? This would be super helpful! Thanks a lot for diving into the code :-) ",
"Hello @patrickvonplaten. Glad you know what i mean. Those hypotheses which finished without EOS are calculated with longer length than they should be, so they have higher scores than they should be, this may cause them to compete unfairly with those with EOS and make us incorrectly throw away those \"with EOS\" hypotheses . \r\nMaybe it is not a severe problem, above situation maybe not common. Anyway, I modify the code here #8890 but haven't test it.I will be happy if it helps.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | I wanted to implement my own beam_search function and read
> src/transformers/generation_beam_search.py
source code for help. While reading the source code, I found something that looks unreasonable; maybe it could be seen as a bug.
Here is the problem:
In the process() function of generation_beam_search.py, line 223:
```python
if (eos_token_id is not None) and (next_token.item() == eos_token_id):
# if beam_token does not belong to top num_beams tokens, it should not be added
is_beam_token_worse_than_top_num_beams = beam_token_rank >= self.num_beams
if is_beam_token_worse_than_top_num_beams:
continue
beam_hyp.add(
input_ids[batch_beam_idx].clone(),
next_score.item(),
)
```
When we generate a complete sentence (i.e. an eos_token appears among the top beam_size tokens), we send `input_ids[batch_beam_id]` (the sequence of tokens so far) and the score to beam_hyp.add(). The eos_token is not included in `input_ids[batch_beam_id]`, but that is fine, because the bos_token is included in `input_ids[batch_beam_id]`, so the length still matches when we calculate the score of the whole sentence at line 334:
```python
def add(self, hyp: torch.LongTensor, sum_logprobs: float):
"""
Add a new hypothesis to the list.
"""
score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)
...
```
We got the correct hyp.shape[-1], which can be seen as the number of elements that were summed to get sum_logprobs.
For example, say `input_ids[batch_beam_id] = ['<bos>', ' she', 'is', 'a', 'cute', 'girl']` and assume the score of this beam is -1.2. We know that -1.2 is the log-probability sum of `she`, `is`, `a`, `cute`, `girl`. In the next step we generate an eos_token whose score is -0.3, so we update the score to -1.5; -1.5 is the log-probability sum of `she`, `is`, `a`, `cute`, `girl`, `<eos>`. We send `input_ids[batch_beam_id] = ['<bos>', ' she', 'is', 'a', 'cute', 'girl']` and -1.5 as parameters to beam_hyp.add(...), and we calculate the whole-sentence score by `score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)`,
because **`len(['<bos>', ' she', 'is', 'a', 'cute', 'girl'])` == `len([' she', 'is', 'a', 'cute', 'girl','<eos>'])`**, hyp.shape[-1] is correct.
**The real problem is in the finalize() function:**
line 227:
```python
# need to add best num_beams hypotheses to generated hyps
for beam_id in range(self.num_beams):
batch_beam_idx = batch_idx * self.num_beams + beam_id
final_score = final_beam_scores[batch_beam_idx].item()
final_tokens = input_ids[batch_beam_idx]
beam_hyp.add(final_tokens, final_score)
```
We use the finalize() function to manually add to beam_hyp those hypotheses that did not generate an eos_token before reaching max_length.
For instance, `final_tokens = input_ids[batch_beam_idx] = ['<bos>', ' she', 'is', 'a', 'cute', 'and', 'smart']`, and we need to manually add it to beam_hyp because it has reached max_length.
Look at this line from the snippet above:
```python
beam_hyp.add(final_tokens, final_score)
```
Now len(final_tokens) == len(['\<bos\>', ' she', 'is', 'a', 'cute', 'and', 'smart']) == 7, so hyp.shape[-1] equals 7. But sum_logprobs is the log-probability sum of `she`, `is`, `a`, `cute`, `and`, `smart`, only 6 elements! **It's a different case from process(), because we have not added the eos_token probability, so hyp.shape[-1] should be reduced by 1 in this case!**
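To make the mismatch concrete, here is a small numeric sketch (the numbers are made up and length_penalty is assumed to be 1.0):
```python
length_penalty = 1.0

# Hypothesis finished via an eos_token in process():
# the tokens passed in include <bos> but not <eos>, while the score includes <eos>,
# so 6 summed log-probabilities are divided by a length of 6.
score_with_eos = -1.5 / (6 ** length_penalty)      # -0.25

# Hypothesis force-finished in finalize() without an eos_token:
# the tokens passed in include <bos>, the score covers only the 6 generated tokens,
# yet the length used for normalization is 7, so the score comes out slightly inflated.
score_without_eos = -1.5 / (7 ** length_penalty)   # about -0.214

print(score_with_eos, score_without_eos)
```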
My English is poor; I hope you can understand my meaning. Looking forward to some feedback.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8721/comments | https://api.github.com/repos/huggingface/transformers/issues/8721/events | https://github.com/huggingface/transformers/issues/8721 | 748,368,794 | MDU6SXNzdWU3NDgzNjg3OTQ= | 8,721 | run_clm.py training script failing with CUDA out of memory error, using gpt2 and arguments from docs. | {
"login": "erik-dunteman",
"id": 44653944,
"node_id": "MDQ6VXNlcjQ0NjUzOTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/44653944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erik-dunteman",
"html_url": "https://github.com/erik-dunteman",
"followers_url": "https://api.github.com/users/erik-dunteman/followers",
"following_url": "https://api.github.com/users/erik-dunteman/following{/other_user}",
"gists_url": "https://api.github.com/users/erik-dunteman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erik-dunteman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erik-dunteman/subscriptions",
"organizations_url": "https://api.github.com/users/erik-dunteman/orgs",
"repos_url": "https://api.github.com/users/erik-dunteman/repos",
"events_url": "https://api.github.com/users/erik-dunteman/events{/privacy}",
"received_events_url": "https://api.github.com/users/erik-dunteman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The comment you are mentioning was about the old `run_language_modeling` script, and probably with some more options for a K80 that what you are running the script with (we should probably remove it or update with a proper command that gives those results). This doesn't look like a memory leak problem, you just don't have enough GPU memory to run the this large model with its full sequence length (of 1,024). You could try:\r\n- a smaller batch size with `--per_device_batch_size 4` or even 2 (or use gradient accumulation)\r\n- a smaller sequence length with `--block_size 512` or even 256\r\n- a smaller model with `--model_name_or_path gpt2-medium` or even distilgpt2.\r\n",
"The smaller `--per_device_train_batch_size 2` batch size seems to be working for me. Just started the training process. Thank you very much for the extremely quick response, and for being an OSS maintainer @sgugger! \r\n\r\nI'll likely drop one more update in this thread to confirm that it worked all the way through.",
"Can confirm - your advice works for me. \r\n\r\nIn fact, I managed to retrain even the XL on T100 GPUs on the new p4d.24xl instances. Definitely high mem requirements, but doable with `--model_name_or_path gpt2-xl --per_device_train_batch_size 1 --block_size 512`\r\n\r\nThanks, team! Y'all have a https://buymeacoffee.com account I can send some brews to? I appreciate your work.",
"Hi, \r\n\r\n@sgugger I'm getting the same out of memory error on G Colab, as @erik-dunteman mentioned and I am using the smallest model of distilgpt2, I followed the advice here and added the additional argument to my command:\r\n\r\n```python\r\n!python /content/transformers/examples/language-modeling/run_clm.py \\\r\n --model_name_or_path distilgpt2 \\\r\n --train_file /content/train.txt \\\r\n --per_device_batch_size 2 \\\r\n --do_train \\\r\n --output_dir model_output\r\n```\r\n\r\nbut am now getting the error:\r\n\r\n`ValueError: Some specified arguments are not used by the HfArgumentParser: ['--per_device_batch_size', '2']`\r\n\r\nAlso, I could not find many parameters that were previously supported by ``run_language_modeling.py`` such as ``--line_by_line``. Were these removed in ``run_clm``? Is there a place where all possible arguments are listed?\r\n\r\nThanks",
"The correct argument name is `--per_device_train_batch_size` or `--per_device_eval_batch_size`.\r\n\r\nThee is no `--line_by_line` argument to the `run_clm` script as this option does not make sense for causal language models such as GPT-2, which are pretrained by concatenating all available texts separated by a special token, not by using individual sentences with padding (like masked language models).\r\n\r\nTo list all available arguments, just use -h or --help as an option for the script. ",
"> The correct argument name is `--per_device_train_batch_size` or `--per_device_eval_batch_size`.\r\n> \r\n> Thee is no `--line_by_line` argument to the `run_clm` script as this option does not make sense for causal language models such as GPT-2, which are pretrained by concatenating all available texts separated by a special token, not by using individual sentences with padding (like masked language models).\r\n> \r\n> To list all available arguments, just use -h or --help as an option for the script.\r\n\r\nThanks, I figured out the ``--per_device_train_batch_size `` parameter and got ``run_clm`` to work.\r\n\r\nI really need ``--line-by-line`` for my dataset as the training dataset is just individual sentences, where the next sentence has no connection with the previous.\r\n\r\nIs there any way to get ``line by line `` to work with ``run_clm``?\r\n\r\nThanks",
"As I said, this makes no sense for those types of models so this won't be in our official examples. You can adapt the part that does this in `run_mlm` for your own need.",
"I get the same out of memory error because it tries to run this on my 1050 ti instead of my k80. I exported CUDA_VISIBLE_DEVICES=1,2 which is my k80, but this script always runs on my tiny 1050ti. Is there a switch to set which gpu to use?",
"@LysandreJik @sgugger can we load data in RAM in batches i.e lazy loading of data in RAM from disks and delete it after training on specific data?\r\n"
] | 1,606 | 1,667 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes, via official run_clm.py script
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## Information
Model I am using: GPT2
The problem arises when using:
* [x] the official example scripts: language-modeling/run_clm.py
* [ ] my own modified scripts: (give details below)
I'm running [the provided example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling):
```
python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
and getting this error:
```
RuntimeError: CUDA out of memory.
```
on the first pass through Trainer.training_step()
Full traceback:
```
2020-11-22 22:02:22.921355: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
11/22/2020 22:02:24 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
11/22/2020 22:02:24 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/tmp/test-clm', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Nov22_22-02-24_f7d2e15228b7', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/tmp/test-clm', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
Reusing dataset wikitext (/root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91)
[INFO|configuration_utils.py:413] 2020-11-22 22:02:24,711 >> loading configuration file https://huggingface.co/gpt2/resolve/main/config.json from cache at /root/.cache/torch/transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
[INFO|configuration_utils.py:449] 2020-11-22 22:02:24,711 >> Model config GPT2Config {
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"vocab_size": 50257
}
[INFO|configuration_utils.py:413] 2020-11-22 22:02:24,791 >> loading configuration file https://huggingface.co/gpt2/resolve/main/config.json from cache at /root/.cache/torch/transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
[INFO|configuration_utils.py:449] 2020-11-22 22:02:24,791 >> Model config GPT2Config {
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"vocab_size": 50257
}
[INFO|tokenization_utils_base.py:1650] 2020-11-22 22:02:25,081 >> loading file https://huggingface.co/gpt2/resolve/main/vocab.json from cache at /root/.cache/torch/transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
[INFO|tokenization_utils_base.py:1650] 2020-11-22 22:02:25,081 >> loading file https://huggingface.co/gpt2/resolve/main/merges.txt from cache at /root/.cache/torch/transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1650] 2020-11-22 22:02:25,082 >> loading file https://huggingface.co/gpt2/resolve/main/tokenizer.json from cache at /root/.cache/torch/transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
[INFO|modeling_utils.py:940] 2020-11-22 22:02:25,230 >> loading weights file https://huggingface.co/gpt2/resolve/main/pytorch_model.bin from cache at /root/.cache/torch/transformers/752929ace039baa8ef70fe21cdf9ab9445773d20e733cf693d667982e210837e.323c769945a351daa25546176f8208b3004b6f563438a7603e7932bae9025925
[INFO|modeling_utils.py:1056] 2020-11-22 22:02:30,168 >> All model checkpoint weights were used when initializing GPT2LMHeadModel.
[INFO|modeling_utils.py:1065] 2020-11-22 22:02:30,168 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training.
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-e3061a317d13eb90.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-a948c1d62c014b03.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-ea170b0cdcba7aa4.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-38ad73a52a8ec98e.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-dd6364e0f6a6c9eb.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-c40818aaf33935e0.arrow
[INFO|trainer.py:388] 2020-11-22 22:02:35,382 >> The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
[INFO|trainer.py:388] 2020-11-22 22:02:35,382 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
[INFO|trainer.py:693] 2020-11-22 22:02:35,385 >> ***** Running training *****
[INFO|trainer.py:694] 2020-11-22 22:02:35,385 >> Num examples = 2318
[INFO|trainer.py:695] 2020-11-22 22:02:35,385 >> Num Epochs = 3
[INFO|trainer.py:696] 2020-11-22 22:02:35,385 >> Instantaneous batch size per device = 8
[INFO|trainer.py:697] 2020-11-22 22:02:35,386 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:698] 2020-11-22 22:02:35,386 >> Gradient Accumulation steps = 1
[INFO|trainer.py:699] 2020-11-22 22:02:35,386 >> Total optimization steps = 870
0% 0/870 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_clm.py", line 351, in <module>
main()
File "run_clm.py", line 321, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1112, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1136, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 787, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 659, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 295, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 239, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 181, in _attn
w = self.attn_dropout(w)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 983, in dropout
else _VF.dropout(input, p, training))
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 14.73 GiB total capacity; 13.50 GiB already allocated; 137.81 MiB free; 13.55 GiB reserved in total by PyTorch)
0% 0/870 [00:00<?, ?it/s]
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
On the transformers wikitext dataset. I also attempted on my own corpus.txt file. Same issue with both.
## To reproduce
Steps to reproduce the behavior:
I have a minimal reproduction on [this Colab notebook](https://colab.research.google.com/drive/1-jYjb-eqUJsJRjkeHeL9UryV8TZJa9XQ?usp=sharing)
## What I've checked out so far:
I traced the problem to the Trainer.training_step() method. It seems [PR 6999](https://github.com/huggingface/transformers/pull/6999) was an attempt to fix a similar problem. However, with my issue, the CUDA OOM error happens before the loss.detach() on the first pass of training_step()
This is similar to [issue 7169](https://github.com/huggingface/transformers/issues/7169), except I'm not doing distributed training.
I've tested this issue both in Google Colab (1x GPU) and on an AWS EC2 g4dn.12xlarge instance (4x GPU), pursuing the obvious possibility that the Colab GPU is simply too small. Both max out with a "CUDA out of memory" error.
I also tried using the TPU launcher script, which hit an error, but that's a separate issue.
I also tried using the legacy [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/contrib/legacy/run_language_modeling.py) script with the same arguments on Colab (a friend had done so a few months ago and had success on Colab). I got this error there:
```AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'```
but that's a separate issue.
## Expected behavior
The docs say that running the script should produce a trained model at the --output_dir location, and that:
```This takes about half an hour to train on a single K80 GPU and about one minute for the evaluation to run. It reaches a score of ~20 perplexity once fine-tuned on the dataset.```
How do we fix this to make run_clm.py work? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8720/comments | https://api.github.com/repos/huggingface/transformers/issues/8720/events | https://github.com/huggingface/transformers/issues/8720 | 748,368,255 | MDU6SXNzdWU3NDgzNjgyNTU= | 8,720 | Broken links in example for torch.load() after. converting tensorflow checkpoint to pytorch save model | {
"login": "apurvaasf",
"id": 24805644,
"node_id": "MDQ6VXNlcjI0ODA1NjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24805644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apurvaasf",
"html_url": "https://github.com/apurvaasf",
"followers_url": "https://api.github.com/users/apurvaasf/followers",
"following_url": "https://api.github.com/users/apurvaasf/following{/other_user}",
"gists_url": "https://api.github.com/users/apurvaasf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apurvaasf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apurvaasf/subscriptions",
"organizations_url": "https://api.github.com/users/apurvaasf/orgs",
"repos_url": "https://api.github.com/users/apurvaasf/repos",
"events_url": "https://api.github.com/users/apurvaasf/events{/privacy}",
"received_events_url": "https://api.github.com/users/apurvaasf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"If I understand correctly, you're trying to load a pytorch model from a `pytorch_model.bin`? If so, have you taken a look at the [quickstart](https://huggingface.co/transformers/v2.4.0/quickstart.html#main-concepts)? The `from_pretrained` method is probably what you're looking for.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,606 | 1,611 | 1,611 | NONE | null | The links for run_bert_extract_features.py, run_bert_classifier.py, and run_bert_squad.py are all broken [here](https://huggingface.co/transformers/v2.4.0/converting_tensorflow_models.html). Could someone point me to a notebook where I can find examples for loading from a PyTorch save file pytorch_model.bin?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8719/comments | https://api.github.com/repos/huggingface/transformers/issues/8719/events | https://github.com/huggingface/transformers/issues/8719 | 748,367,945 | MDU6SXNzdWU3NDgzNjc5NDU= | 8,719 | Unable to Tie Encoder Decoder Parameters When Using EncoderDecoderModel Constructor | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oops realized it was because this weight tying doesn't work across different architectures."
] | 1,606 | 1,606 | 1,606 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Encoder (RoBERTa) Decoder (GPT2) model
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
from transformers import (
AutoConfig,
AutoModel,
AutoModelForCausalLM,
EncoderDecoderModel,
EncoderDecoderConfig,
GPT2Config,
)
encoder_config = AutoConfig.from_pretrained('microsoft/codebert-base')
encoder = AutoModel.from_pretrained('microsoft/codebert-base')
decoder_config = GPT2Config(
n_layer = 6,
n_head = encoder_config.num_attention_heads,
add_cross_attention= True,
)
decoder = AutoModelForCausalLM.from_config(decoder_config)
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
encoder_decoder_config.tie_encoder_decoder = True
shared_codebert2gpt = EncoderDecoderModel(encoder = encoder, decoder = decoder, config = encoder_decoder_config)
```
The tasks I am working on is: N/A
## To reproduce
Steps to reproduce the behavior:
Running the above code produces the following message:
```
The following encoder weights were not tied to the decoder ['transformer/pooler', 'transformer/embeddings', 'transformer/encoder']
```
Checking the number of parameters gives `shared_codebert2gpt: 220,741,632` parameters, which is the same count as when I do not attempt to tie the encoder and decoder parameters :(.
## Expected behavior
The above snippet should produce a model with roughly `172,503,552` parameters.
My big question is, am I doing this correctly? I can correctly tie the model parameters if I use the `EncoderDecoderModel.from_encoder_decoder_pretrained` constructor and pass `tie_encoder_decoder=True`. However, for my task, I don't want to use a pretrained decoder and so am unable to use this constructor.
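For comparison, here is a minimal sketch of the path that does tie the weights, assuming both halves share the same architecture (the checkpoint name is just an example):
```python
from transformers import EncoderDecoderModel

# Example only: weight tying is expected to work when encoder and decoder share the
# same architecture (e.g. RoBERTa-to-RoBERTa), not for a RoBERTa encoder with a GPT2 decoder.
tied_model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base", "roberta-base", tie_encoder_decoder=True
)
print(f"parameters: {sum(p.numel() for p in tied_model.parameters()):,}")
```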
Any help with this would be greatly appreciated!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8718/comments | https://api.github.com/repos/huggingface/transformers/issues/8718/events | https://github.com/huggingface/transformers/issues/8718 | 748,358,821 | MDU6SXNzdWU3NDgzNTg4MjE= | 8,718 | Issues with finetune_trainer.py on multiple gpus | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi there. There is little anyone can do to help without knowing the actual command you are running.\r\n",
"Hi\nI am running\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py\nonce\nwith multiple gpu machines once with 1 machine,\nI have adapted it for my usecase, this is hard for me to provide the exact\ncommand, as I need to share the whole codebase.\nMy question is though more general I observe performance differences if you\nrun this code on 1 dataset with multiple gpu/gpu, any thoughts on this?\nthanks\nRabeeh\n\nOn Mon, Nov 23, 2020 at 1:16 AM Sylvain Gugger <[email protected]>\nwrote:\n\n> Hi there. There is little anyone can do to help without knowing the actual\n> command you are running.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8718#issuecomment-731871920>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCHAF5CQRJP2LDI2XQDSRGSUPANCNFSM4T6X5WFQ>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version:3.5.1
- Platform: google cloud
- Python version: 3.7
- PyTorch version (GPU?): yes 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: -
### Who can help
Trainer: @sgugger
Text Generation: @patrickvonplaten @TevenLeScao
T5: @patrickvonplaten
examples/seq2seq: @patil-suraj
## Information
Hi
I am trying to train finetune_trainer.py on multiple GPUs, and I ran into two issues:
1) Runtime error: "Input, output and indices must be on the current device". Looking into the code, in training_args.py the device that is set when n_gpu > 0 (around line 401) should, in my opinion, be changed from `cuda:0` to `cuda`; a minimal sketch of what I mean is shown right after this list.
2) The accuracy on multiple GPUs does not match the single-GPU accuracy and is much lower. Any idea why this happens?
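Here is a rough sketch of the change I have in mind for 1) (attribute names are approximate, not the exact code in training_args.py):
```python
import torch

# current behaviour: the device is hard-coded to the first GPU
device = torch.device("cuda:0")

# proposed behaviour: use the current CUDA device instead of pinning everything to cuda:0
device = torch.device("cuda")
n_gpu = torch.cuda.device_count()
```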
Thank you.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8718/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8717/comments | https://api.github.com/repos/huggingface/transformers/issues/8717/events | https://github.com/huggingface/transformers/pull/8717 | 748,358,747 | MDExOlB1bGxSZXF1ZXN0NTI1MzYyNzY5 | 8,717 | Add T5 Encoder for Feature Extraction | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I like it! ",
"Great, I am glad that you did like it.\r\n\r\nThanks @patrickvonplaten and @jplu for your feedback.\r\n\r\n@patrickvonplaten :\r\nI have adjusted all your code review, and also add it to the T5 documentation.\r\nThe only missing part is the tests, it will be great if you could add it.\r\n\r\n@jplu :\r\nI have removed the unnecessary parameters from the TF model.\r\n\r\nIs there anything else needed from my side to merge the pull request ?",
"> Great, I am glad that you did like it.\r\n> \r\n> Thanks @patrickvonplaten and @jplu for your feedback.\r\n> \r\n> @patrickvonplaten :\r\n> I have adjusted all your code review, and also add it to the T5 documentation.\r\n> The only missing part is the tests, it will be great if you could add it.\r\n> \r\n> @jplu :\r\n> I have removed the unnecessary parameters from the TF model.\r\n> \r\n> Is there anything else needed from my side to merge the pull request ?\r\n\r\nI think that's great! I'll fix the tests and merge :-) ",
"## Update:\r\n\r\nPR is ready for review IMO. Would be great if @LysandreJik @jplu and @sgugger you can take a look :-) ",
"> ## Update:\r\n> PR is ready for review IMO. Would be great if @LysandreJik @jplu and @sgugger you can take a look :-)\r\n\r\nThanks a lot @patrickvonplaten ^_^",
"> Very clean implementation, thanks a lot @agemagician!\r\n\r\nYou are welcome.\r\nI am glad that I could help making the library better, even with a small contribution.\r\nI have to say without @patrickvonplaten help, I could not make it ^_^"
] | 1,606 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
While using T5 for feature extraction, I found that the T5 encoder provides better features than the T5 decoder. Hence, it makes sense to expose an encoder-only model, which should roughly halve memory use and inference time when feature extraction is needed rather than conditional generation.
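A minimal sketch of how the encoder-only model could be used for feature extraction (class and checkpoint names as proposed in this PR):
```python
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5EncoderModel.from_pretrained("t5-small")

input_ids = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state  # (batch, seq_len, d_model) encoder features
```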
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
T5: @patrickvonplaten
tensorflow: @jplu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8717/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8717",
"html_url": "https://github.com/huggingface/transformers/pull/8717",
"diff_url": "https://github.com/huggingface/transformers/pull/8717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8717.patch",
"merged_at": 1606721680000
} |
https://api.github.com/repos/huggingface/transformers/issues/8716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8716/comments | https://api.github.com/repos/huggingface/transformers/issues/8716/events | https://github.com/huggingface/transformers/pull/8716 | 748,345,316 | MDExOlB1bGxSZXF1ZXN0NTI1MzUzMDkz | 8,716 | [trainer] make generate work with multigpu | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Normally the model attribute of the Trainer is always a reference to the real model (without the module from DataParallel and the likes), so using self.model here should prevent this error.\r\n\r\nIt did - thank you!\r\n\r\nThis is a very ambiguous situation for a user who wants to use HF trainer in their code. When to use `model` the argument and when `self.model`.\r\n\r\nWhat happens here is `model = torch.nn.DataParallel(self.model)` in the previous frame (`src/transformers/trainer.py:prediction_loop`), so `model` no longer has its normal methods accessible.\r\n\r\nHere are some possible solutions to resolve this ambiguity:\r\n\r\n1. monkeypatch `torch.nn.DataParallel` to expand its API to support all the methods of the original model transparently by installing a catch all `__getattr__` and remap all the failed method look ups to delegate to `self.module`.\r\n\r\n2. not to call the function argument `model` anymore, since it isn't under multi gpu, but is something else. \r\n\r\n3. remove the `model` argument completely + document to always use `self.model` - currently in `seq2seq_trainer.py `once we switch to `self.model`, `prediction_step()` no longer needs `model` as an argument (but is it always the case?)\r\n\r\n",
"We can certainly improve the documentation and the debugging experience. I think I prefer the solution 2 since 1. is too magic (so will probably make things harder to debug) and 3 is not compatible with the regular `Trainer` (that needs the unwrapped model though I'd need to check to be sure).\r\n\r\nDoing `model` -> `wrapped_model` should be enough to clarify things? Wdyt\r\n",
"> [...] 3 is not compatible with the regular `Trainer` (that needs the unwrapped model though I'd need to check to be sure).\r\n\r\nDid you mean to say \"needs the wrapped model\"?\r\n\r\nUnless I'm misreading what you wrote 3rd solution is the right one, since the Trainer doesn't do anything with the wrapped model. I don't know though whether this is so everywhere.\r\n\r\nThe 4th solution is passing `self.model `as the `model` arg, and making the wrapped model available via `self.wrapped_model` if the user needs it.\r\n\r\n> Doing `model` -> `wrapped_model` should be enough to clarify things? Wdyt\r\n\r\nExcept it won't be wrapped per se most of the time - very confusing to the user. Currently it should be called `may_be_wrapped_model_use_self_model_instead` variable ;)",
"I meant the wrapped model, sorry.",
" I'm getting this issue too using a T5 Model on multiple gpus\r\n\r\n`AttributeError: 'DataParallel' object has no attribute 'generate'`\r\n\r\nIs this supposed to be resolved? I've never seen this before. I've tried with 4.10.0 as well as current master branch",
"@JamesDeAntonis \r\n\r\nIs it possible you somehow have a really old `transformers` in your `sys.path`?\r\n\r\nIf not, as always we need a way to reproduce the problem as the first step. And ideally in a new issue so that it can be tracked.\r\n\r\nBut you can also see the fix in this PR and try to trace it to where the `generate` call is made. Clearly it's not calling it on the correct object.\r\n\r\nThank you."
] | 1,606 | 1,632 | 1,606 | CONTRIBUTOR | null | This PR:
* fixes **torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'generate'** under DataParallel
* enables test_finetune_bert2bert under multigpu - the test now works with any number of GPUs.
Chances are the same problem would appear with any other `model.foo` call, as this is [not the first time this is happening](https://github.com/huggingface/transformers/issues/7146), i.e. the base model class most likely needs to be made aware of `DataParallel` and transparently resolve the underlying `model` at the calling point.
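A minimal sketch of that idea (not the exact diff in this PR):
```python
import torch

def unwrap_model(model: torch.nn.Module) -> torch.nn.Module:
    # DataParallel / DistributedDataParallel wrappers expose the real model as .module
    return model.module if hasattr(model, "module") else model

# e.g. inside Seq2SeqTrainer.prediction_step:
# generated_tokens = unwrap_model(model).generate(inputs["input_ids"], attention_mask=inputs["attention_mask"])
```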
@sgugger, @LysandreJik, @patrickvonplaten
Fixes: https://github.com/huggingface/transformers/issues/8713 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8716",
"html_url": "https://github.com/huggingface/transformers/pull/8716",
"diff_url": "https://github.com/huggingface/transformers/pull/8716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8716.patch",
"merged_at": 1606157848000
} |
https://api.github.com/repos/huggingface/transformers/issues/8715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8715/comments | https://api.github.com/repos/huggingface/transformers/issues/8715/events | https://github.com/huggingface/transformers/issues/8715 | 748,324,565 | MDU6SXNzdWU3NDgzMjQ1NjU= | 8,715 | placing the run dir only in the output_dir | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | Hi
In training_args.py there is code that creates a run directory relative to the current working directory, so wherever the user launches the script a new `runs/` directory is created there. Could it be created only inside the output_dir instead? Thanks. This is the function in question:
```
def default_logdir() -> str:
    """
    Same default as PyTorch
    """
    import os
    import socket
    from datetime import datetime

    current_time = datetime.now().strftime("%b%d_%H-%M-%S")
    return os.path.join("runs", current_time + "_" + socket.gethostname())
```
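Something like the following is what I have in mind (just a sketch; `output_dir` would need to be passed in or read from the training arguments):
```python
import os
import socket
from datetime import datetime

def default_logdir(output_dir: str) -> str:
    # keep the TensorBoard run directory under output_dir instead of the current working directory
    current_time = datetime.now().strftime("%b%d_%H-%M-%S")
    return os.path.join(output_dir, "runs", current_time + "_" + socket.gethostname())
```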
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8714/comments | https://api.github.com/repos/huggingface/transformers/issues/8714/events | https://github.com/huggingface/transformers/pull/8714 | 748,290,270 | MDExOlB1bGxSZXF1ZXN0NTI1MzE3OTIw | 8,714 | Add TFGPT2ForSequenceClassification based on DialogRPT | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for review @jplu . I'll update my code with review comments and new input processing.\r\n",
"> Thank you very much for this very nice addition!!\r\n> \r\n> I left few comments on it. Also can you run the following piece of code and tell me if it works properly:\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import GPT2Tokenizer, TFGPT2ForSequenceClassification\r\n> \r\n> model = tf.function(TFGPT2ForSequenceClassification.from_pretrained(\"microsoft/dialogrpt\"))\r\n> tokenizer = GPT2Tokenizer.from_pretrained(\"microsoft/dialogrpt\")\r\n> inputs = tokenizer(\"Hello\", return_tensors=\"tf\")\r\n> model(inputs)\r\n> ```\r\n> \r\n> @LysandreJik I would recommend as well to wait a bit that the new input processing to be merged.\r\n\r\n<img width=\"1176\" alt=\"output\" src=\"https://user-images.githubusercontent.com/6419011/100481765-2c2da900-311b-11eb-8fdd-15762f7d43df.png\">\r\n",
"Hello @jplu and @LysandreJik ,\r\nI have refactored code as per review comments and added new input processing as well.\r\n\r\nKindly review.",
"> Much better!! Thanks for the updates.\r\n> \r\n> There is still one comment to be addressed and the tests to fix.\r\n\r\n@jplu tests are also fixed now.",
"@spatil6 we have merged today a PR that updates the way the booleans are processed. You can see an example in the TF BERT file for example, can you rebase and proceed to the same changes please. It would be awesome if you could do it!",
"> @spatil6 we have merged today a PR that updates the way the booleans are processed. You can see an example in the TF BERT file for example, can you rebase and proceed to the same changes please. It would be awesome if you could do it!\r\n\r\nSure, will do that."
] | 1,606 | 1,608 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR implements TFGPT2ForSequenceClassification in order to support DialogRPT.
Strongly based on modifications made in #7501
Fixes #7622
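A minimal usage sketch of the new class (the checkpoint name is only illustrative, and `from_pt=True` assumes the published DialogRPT weights are PyTorch):
```python
from transformers import GPT2Tokenizer, TFGPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", from_pt=True)

inputs = tokenizer("I love this movie!", return_tensors="tf")
outputs = model(inputs)
score = outputs.logits  # one scalar per sequence for DialogRPT-style ranking
```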
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik Please review this PR, let me know if there is anything that should be changed =) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8714/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8714",
"html_url": "https://github.com/huggingface/transformers/pull/8714",
"diff_url": "https://github.com/huggingface/transformers/pull/8714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8714.patch",
"merged_at": 1607356718000
} |
https://api.github.com/repos/huggingface/transformers/issues/8713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8713/comments | https://api.github.com/repos/huggingface/transformers/issues/8713/events | https://github.com/huggingface/transformers/issues/8713 | 748,289,205 | MDU6SXNzdWU3NDgyODkyMDU= | 8,713 | eval of seq2seq/finetune_trainer does not work on multiple gpus | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"seems to be related to #8613 ",
"Fix https://github.com/huggingface/transformers/pull/8716\r\n"
] | 1,606 | 1,606 | 1,606 | NONE | null | Hi
I am using
transformers = 3.5.1
python = 3.7
8 gpu machine
I am getting the error below when trying to run "finetune_trainer.py" with the do_eval option on multiple GPUs. Thanks for your help.
@patil-suraj @patrickvonplaten
```
11/22/2020 17:14:20 - INFO - __main__ - *** Evaluate ***
11/22/2020 17:14:20 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 4}
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 233, in <module>
main()
File "finetune_t5_trainer.py", line 188, in main
result = trainer.evaluate(eval_datasets, compute_metrics_fn)
File "/home/rabeeh/internship/seq2seq/t5_trainer.py", line 175, in evaluate
prediction_loss_only=True if self.compute_metrics is None else None, # self.compute_metrics[eval_task]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/trainer.py", line 1417, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
File "/home/rabeeh/internship/seq2seq/t5_trainer.py", line 249, in prediction_step
generated_tokens = model.generate(
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'generate'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8712/comments | https://api.github.com/repos/huggingface/transformers/issues/8712/events | https://github.com/huggingface/transformers/issues/8712 | 748,284,296 | MDU6SXNzdWU3NDgyODQyOTY= | 8,712 | distributed_eval does not run | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"with t5-base seems to run fine, so perhaps the specific model in the README is not available. Thank you. ",
"Hello! Indeed, it seems this model does not exist. Do you want to open a PR with a model that works? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | Hi
I am trying to run distributed_eval with the latest version of the Hugging Face code installed from the website (the "internship" environment);
please find the command below and the errors. Thank you for your help.
Info on versions/machine:
python = 3.7
8 gpus
transformers 4.0.0rc1 pypi_0 pypi
```
rabeeh@gpu8:~/transformers/examples/seq2seq$ python -m torch.distributed.launch --nproc_per_node=8 run_distributed_eval.py --model_name sshleifer/distilbart-large-xsum-12-3 --save_dir xsum_generations --data_dir xsum --fp16
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
2020-11-22 16:51:27.894686: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894685: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894686: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894688: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894688: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894685: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.894688: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-11-22 16:51:27.896156: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
local_files_only=local_files_only,
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 987, in cached_path
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 1108, in get_from_cache
r.raise_for_status()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/sshleifer/distilbart-large-xsum-12-3/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_distributed_eval.py", line 248, in <module>
run_generate()
File "run_distributed_eval.py", line 180, in run_generate
**generate_kwargs,
File "run_distributed_eval.py", line 57, in eval_data_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1141, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sshleifer/distilbart-large-xsum-12-3'. Make sure that:
- 'sshleifer/distilbart-large-xsum-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/distilbart-large-xsum-12-3' is the correct path to a directory containing a config.json file
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/internship/bin/python', '-u', 'run_distributed_eval.py', '--local_rank=7', '--model_name', 'sshleifer/distilbart-large-xsum-12-3', '--save_dir', 'xsum_generations', '--data_dir', 'xsum', '--fp16']' returned non-zero exit status 1.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8711/comments | https://api.github.com/repos/huggingface/transformers/issues/8711/events | https://github.com/huggingface/transformers/issues/8711 | 748,269,244 | MDU6SXNzdWU3NDgyNjkyNDQ= | 8,711 | Model predictions wrong | {
"login": "brunopistone",
"id": 10196125,
"node_id": "MDQ6VXNlcjEwMTk2MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/10196125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brunopistone",
"html_url": "https://github.com/brunopistone",
"followers_url": "https://api.github.com/users/brunopistone/followers",
"following_url": "https://api.github.com/users/brunopistone/following{/other_user}",
"gists_url": "https://api.github.com/users/brunopistone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brunopistone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brunopistone/subscriptions",
"organizations_url": "https://api.github.com/users/brunopistone/orgs",
"repos_url": "https://api.github.com/users/brunopistone/repos",
"events_url": "https://api.github.com/users/brunopistone/events{/privacy}",
"received_events_url": "https://api.github.com/users/brunopistone/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you have an accuracy near 10% on a two-label sequence classification task - does that mean it gets 90% of the results wrong? If so, you might just have switched the labels.",
"Hi, no the problem is not related to what you said. I tried also to perform one hot encoding on the labels and change the loss function to \"categorical_crossentropy\" but the results are the same.\r\nI tried to use the official pre trained english model (**https://github.com/google-research/bert**) with another module and I don't have this problem (the keras model is the same).",
"Hello!\r\n\r\nCan you try with `TFBertForSequenceClassification`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...): Bert -> bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Hi @LysandreJik , @sgugger , @jplu , I was running my own script on a custom dataset using "bert-base-uncased". It's a simple classification task with two classes. Below are some examples:
```
"is_offensive", "text"
"1", "Your service is a shit."
"0", "Really great examples. Thank you for your help @exemple01"
```
This is the definition of the model:
```
import tensorflow as tf
from transformers import AutoConfig, BertTokenizer, TFAutoModel

config = AutoConfig.from_pretrained("bert-base-uncased")
config.output_hidden_states = False
model_bert = TFAutoModel.from_pretrained("bert-base-uncased", config=config)
model_bert = model_bert.bert  # underlying TFBertMainLayer

input_ids_in = tf.keras.layers.Input(shape=(333,), name='input_token', dtype='int32')
input_masks_in = tf.keras.layers.Input(shape=(333,), name='masked_token', dtype='int32')

# the main layer returns (sequence_output, pooled_output)
embeddings, main_layer = model_bert(input_ids_in, attention_mask=input_masks_in)
X = tf.keras.layers.Dropout(0.2)(main_layer)
X = tf.keras.layers.Dense(2, activation='softmax')(X)

loss_function = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model = tf.keras.Model(
    inputs=[input_ids_in, input_masks_in],
    outputs=[X]
)

# freeze the BERT layer, so only the classification head is trained
for layer in model.layers[:3]:
    layer.trainable = False

model.compile(optimizer=tf.optimizers.Adam(lr=0.00001), loss=loss_function, metrics=['sparse_categorical_accuracy'])

history = model.fit(
    X_train,
    y_train,
    validation_split=0.2,
    epochs=10,
    batch_size=100
)
```
I've trained the model for 5 epochs, these are results after the last epoch:
```
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_token (InputLayer) [(None, 333)] 0
__________________________________________________________________________________________________
masked_token (InputLayer) [(None, 333)] 0
__________________________________________________________________________________________________
bert (TFBertMainLayer) ((None, 333, 768), ( 109482240 input_token[0][0]
masked_token[0][0]
__________________________________________________________________________________________________
dropout_75 (Dropout) (None, 768) 0 bert[0][1]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 2) 1538 dropout_75[0][0]
==================================================================================================
Total params: 109,483,778
Trainable params: 1,538
Non-trainable params: 109,482,240
__________________________________________________________________________________________________
1475/1475 [==============================] - ETA: 0s - loss: 0.5041 - accuracy: 0.8028
Accuracy: 0.8027665019035339
Loss: 0.5041469931602478
Val Accuracy: 0.8009492754936218
```
Then I save the model in this way:
```
try:
    modelName = os.path.join(model_path, model_name)
    model_json = model.to_json()
    with open(modelName + ".json", "w") as json_file:
        json_file.write(model_json)
        json_file.close()
    model.save_weights(modelName + ".h5")
    logger.info("Saved {} to disk".format(modelName))
except Exception as e:
    stacktrace = traceback.format_exc()
    logger.error("{}".format(stacktrace))
    raise e
```
When I try to make predictions, even on sentences seen during training, the model completely misses the goal. I think something is wrong with the training results: I cannot get ~81% accuracy during training and validation and then an accuracy near 10% when I validate the model on a completely new dataset.
I decided to build my own model and compared your framework with another one, which gives good results (near 85%).
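For completeness, a minimal sketch of how the same task could be done with `TFBertForSequenceClassification` directly (the `texts`/`labels` variables and the hyperparameters are placeholders, not taken from my actual script):
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# `texts` and `labels` stand for the two CSV columns shown above
encodings = tokenizer(texts, truncation=True, padding=True, max_length=333, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(16)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    # the classification head returns logits, so from_logits=True matches here
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=3)
```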
Can you help me to understand the mistakes?
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8710/comments | https://api.github.com/repos/huggingface/transformers/issues/8710/events | https://github.com/huggingface/transformers/issues/8710 | 748,211,017 | MDU6SXNzdWU3NDgyMTEwMTc= | 8,710 | [BUG] Wrong Scores for many SQUAD models | {
"login": "elronbandel",
"id": 23455264,
"node_id": "MDQ6VXNlcjIzNDU1MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/23455264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elronbandel",
"html_url": "https://github.com/elronbandel",
"followers_url": "https://api.github.com/users/elronbandel/followers",
"following_url": "https://api.github.com/users/elronbandel/following{/other_user}",
"gists_url": "https://api.github.com/users/elronbandel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elronbandel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elronbandel/subscriptions",
"organizations_url": "https://api.github.com/users/elronbandel/orgs",
"repos_url": "https://api.github.com/users/elronbandel/repos",
"events_url": "https://api.github.com/users/elronbandel/events{/privacy}",
"received_events_url": "https://api.github.com/users/elronbandel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,606 | 1,619 | 1,619 | NONE | null | @julien-c
@VictorSanh
## Information
All models trained with run_squad.py have an abstaining threshold of 0.0 and possibly wrong evaluation scores.
Those models have these wrong scores in their model cards, which many people rely on.
For example: ahotrod/electra_large_discriminator_squad2_512
The results with current evaluation script:
```
"exact": 87.09677419354838,
"f1": 89.98343832723452,
"total": 11873,
"HasAns_exact": 84.66599190283401,
"HasAns_f1": 90.44759839056285,
"HasAns_total": 5928,
"NoAns_exact": 89.52060555088309,
"NoAns_f1": 89.52060555088309,
"NoAns_total": 5945,
"best_exact": 87.09677419354838,
"best_exact_thresh": 0.0,
"best_f1": 89.98343832723432,
"best_f1_thresh": 0.0
```
The problem and its fix can be found in: [[Bug Fix] Fix run_squad.py evaluation code doesn't use probabilities](https://github.com/huggingface/transformers/pull/7319) #7319
The problem arises when using:
* [ ] run_squad.py
The tasks I am working on is:
* [ ] SQuAD
## To reproduce
Steps to reproduce the behaviour:
1. run `run_squad.py`
## Expected behaviour:
The resulting 'best_f1_thresh' won't be 0.0.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8710/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8709/comments | https://api.github.com/repos/huggingface/transformers/issues/8709/events | https://github.com/huggingface/transformers/issues/8709 | 748,180,401 | MDU6SXNzdWU3NDgxODA0MDE= | 8,709 | Can't load weights for | {
"login": "saburbutt",
"id": 33926182,
"node_id": "MDQ6VXNlcjMzOTI2MTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/33926182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saburbutt",
"html_url": "https://github.com/saburbutt",
"followers_url": "https://api.github.com/users/saburbutt/followers",
"following_url": "https://api.github.com/users/saburbutt/following{/other_user}",
"gists_url": "https://api.github.com/users/saburbutt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saburbutt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saburbutt/subscriptions",
"organizations_url": "https://api.github.com/users/saburbutt/orgs",
"repos_url": "https://api.github.com/users/saburbutt/repos",
"events_url": "https://api.github.com/users/saburbutt/events{/privacy}",
"received_events_url": "https://api.github.com/users/saburbutt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,606 | 1,606 | 1,606 | NONE | null | # 🌟 New model addition
**Getting the following error after training a question answering model using ALBERT.**
404 Client Error: Not Found for url: https://huggingface.co/saburbutt/albert_xxlarge_tweetqa_v2/resolve/main/tf_model.h5
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
950 try:
--> 951 state_dict = torch.load(resolved_archive_file, map_location="cpu")
952 except Exception:
13 frames
RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
OSError: Unable to load weights from pytorch checkpoint file for 'saburbutt/albert_xxlarge_tweetqa_v2' at '/root/.cache/torch/transformers/280e3f03092e3b52d227bc27519ff98aff017abcc160fc5138df7ce1bddcff1e.b5346cd8c01b1d2591b342ede0146ce26b68ad0a84ff87e5dc8f9d5a03a79910'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/saburbutt/albert_xxlarge_tweetqa_v2/resolve/main/tf_model.h5
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
683 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
684 )
--> 685 raise EnvironmentError(msg)
686 if resolved_archive_file == archive_file:
687 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'saburbutt/albert_xxlarge_tweetqa_v2'. Make sure that:
- 'saburbutt/albert_xxlarge_tweetqa_v2' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'saburbutt/albert_xxlarge_tweetqa_v2' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
It was working for all the previous models I have tried.
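For context, the `PytorchStreamReader failed reading zip archive` part of the trace usually indicates a truncated or corrupted `pytorch_model.bin` rather than a code problem. A minimal diagnostic sketch (the local path is a placeholder) would be to load the checkpoint from the directory it was saved to before uploading; if that works, re-uploading the weights file is the likely fix:
```python
from transformers import AlbertForQuestionAnswering, AlbertTokenizer

# Placeholder path: the directory that save_pretrained() wrote locally,
# containing config.json and pytorch_model.bin
local_dir = "/path/to/albert_xxlarge_tweetqa_v2"

# If this succeeds, the local checkpoint is intact and the uploaded copy is
# probably incomplete; if it fails with the same zip-archive error, the file
# was already corrupted when it was saved.
model = AlbertForQuestionAnswering.from_pretrained(local_dir)
tokenizer = AlbertTokenizer.from_pretrained(local_dir)
```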
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8709/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8708/comments | https://api.github.com/repos/huggingface/transformers/issues/8708/events | https://github.com/huggingface/transformers/pull/8708 | 748,133,828 | MDExOlB1bGxSZXF1ZXN0NTI1MjExNjk4 | 8,708 | Fix many typos | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,606 | 1,606 | 1,606 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8708/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8708",
"html_url": "https://github.com/huggingface/transformers/pull/8708",
"diff_url": "https://github.com/huggingface/transformers/pull/8708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8708.patch",
"merged_at": 1606017491000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8707/comments | https://api.github.com/repos/huggingface/transformers/issues/8707/events | https://github.com/huggingface/transformers/issues/8707 | 748,117,841 | MDU6SXNzdWU3NDgxMTc4NDE= | 8,707 | Accuracy changes dramatically | {
"login": "burakisikli",
"id": 982014,
"node_id": "MDQ6VXNlcjk4MjAxNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/982014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/burakisikli",
"html_url": "https://github.com/burakisikli",
"followers_url": "https://api.github.com/users/burakisikli/followers",
"following_url": "https://api.github.com/users/burakisikli/following{/other_user}",
"gists_url": "https://api.github.com/users/burakisikli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/burakisikli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/burakisikli/subscriptions",
"organizations_url": "https://api.github.com/users/burakisikli/orgs",
"repos_url": "https://api.github.com/users/burakisikli/repos",
"events_url": "https://api.github.com/users/burakisikli/events{/privacy}",
"received_events_url": "https://api.github.com/users/burakisikli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nIt would be great if you could also include both training scripts, so that we may compare. There should be no difference between the PyTorch training or the TensorFlow training.\r\n\r\nThanks!"
] | 1,605 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I tried to fine-tune a BERT model for a text classification task using the same parameters (learning rate, warmup steps, batch size, number of epochs) in PyTorch and TensorFlow. With TensorFlow, the validation accuracy changes dramatically: in PyTorch the accuracy is around 96%, in TensorFlow around 76%. One thing I noticed is the difference in GPU memory usage (PyTorch: ~12 GB, TF: ~8 GB). Shouldn't we expect similar accuracy?
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# num_labels, lr_schedule, epochs and train_dataset are defined earlier in the
# training script (not shown here)
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=num_labels)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])
history = model.fit(train_dataset.shuffle(1000).batch(32), epochs=epochs, batch_size=32)
```
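The PyTorch script being compared against is not shown above. As a point of comparison, here is a minimal sketch of what an equivalent PyTorch loop could look like; `train_dataset`, `num_labels` and `epochs` are assumed to exist as in the TensorFlow snippet, a fixed learning rate stands in for the schedule, and the dataset is assumed to yield dicts of tensors (`input_ids`, `attention_mask`, `labels`):
```python
import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification

# Sketch only: assumes train_dataset is a torch Dataset whose items mirror the
# TensorFlow dataset above.
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=num_labels)
model.to('cuda')
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
model.train()
for epoch in range(epochs):
    for batch in loader:
        batch = {k: v.to('cuda') for k, v in batch.items()}
        loss = model(**batch, return_dict=True).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```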
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8707/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8706/comments | https://api.github.com/repos/huggingface/transformers/issues/8706/events | https://github.com/huggingface/transformers/issues/8706 | 748,110,831 | MDU6SXNzdWU3NDgxMTA4MzE= | 8,706 | T5v1.1 Addition of special tokens | {
"login": "FL33TW00D",
"id": 45471420,
"node_id": "MDQ6VXNlcjQ1NDcxNDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/45471420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FL33TW00D",
"html_url": "https://github.com/FL33TW00D",
"followers_url": "https://api.github.com/users/FL33TW00D/followers",
"following_url": "https://api.github.com/users/FL33TW00D/following{/other_user}",
"gists_url": "https://api.github.com/users/FL33TW00D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FL33TW00D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FL33TW00D/subscriptions",
"organizations_url": "https://api.github.com/users/FL33TW00D/orgs",
"repos_url": "https://api.github.com/users/FL33TW00D/repos",
"events_url": "https://api.github.com/users/FL33TW00D/events{/privacy}",
"received_events_url": "https://api.github.com/users/FL33TW00D/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"duplicate of https://github.com/huggingface/transformers/issues/8643. \r\n\r\nThis is indeed a big problem. I'll try to get to it this week!",
"Hey @FL33TW00D - actually I cannot reproduce your error....Can you try to update to the `tokenizers` version as well?",
"I can correctly shorten T5's embedding matrix...",
"Hi @patrickvonplaten,\r\nAppreciate you looking at this. \r\n\r\nI suspect that in this case it's user error.\r\nI am attempting to add the special tokens like so prior to pretraining:\r\n```python\r\nfrom transformers import T5TokenizerFast, T5ForConditionalGeneration\r\nMODEL_NAME = 'google/t5-v1_1-base'\r\nspecial_tokens = [\"<ORG>\",\r\n \"<PERSON>\"]\r\ntokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\nspecial_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}\r\nnum_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)\r\nprint(f'ADDED TOKENS: {num_added_tokens}')\r\nmodel = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.to(\"cuda\")\r\n```\r\nI then pretrain the model, and save like so:\r\n`model.save_pretrained('t5_base_test')`\r\n\r\nIt is upon model loading that I receive the error:\r\n```\r\nT5ForConditionalGeneration.from_pretrained('./t5_base_test')\r\n```\r\n```\r\nsize mismatch for lm_head.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32102, 768]).\r\n```\r\n\r\nFrom the config.json, it looks like the rest of the layers are being scaled to the len(tokenizer) of 32102, and only the language modelling head on the final layer remaining as 32128.\r\n\r\nAny insight into this?\r\n\r\nMany Thanks,\r\nChris",
"I can reproduce - will fix it! Thanks for the detailed error description ",
"BTW, it's recommend to always use the same model identifier for model and tokenizer, even though in this case it would not have made a difference. So:\r\n\r\n```python\r\ntokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base')\r\n```",
"> \r\n> \r\n> I can reproduce - will fix it! Thanks for the detailed error description\r\n\r\nMassive thanks for fixing this. Really appreciate it.",
"Will try to have it merged into master by tomorrow",
"> Will try to have it merged into master by tomorrow\r\n\r\n@patrickvonplaten No worries already forked and working great! :+1: ",
"Hi @patrickvonplaten \r\n\r\nI have faced the same situation with `MT5ForConditionalGeneration` when I have reproduced the [question_generation](https://github.com/patil-suraj/question_generation) with Thai language data (I have prepared the dataset from ['xquad.th'](https://huggingface.co/datasets/viewer/?dataset=xquad&config=xquad.th) ) by @patil-suraj \r\n\r\nThis is my error messages\r\n```bash\r\n>>> model = MT5ForConditionalGeneration.from_pretrained('./mt5-base-qg-hl-xquad-th-6-epochs')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sakares/transformers/src/transformers/modeling_utils.py\", line 1144, in from_pretrained\r\n raise RuntimeError(\r\nRuntimeError: Error(s) in loading state_dict for MT5ForConditionalGeneration:\r\n\tsize mismatch for lm_head.weight: copying a param with shape torch.Size([250112, 768]) from checkpoint, \r\nthe shape in current model is torch.Size([250102, 768]).\r\n```\r\n\r\nI have dive into the file src/transformers/models/mt5/modeling_mt5.py and found that MT5ForConditionalGeneration \r\n just overrode the T5ForConditionalGeneration, and it should not be a problem.\r\n\r\nSorry to bring you here @patil-suraj . I am just curious since I have modified the script run_qg.py for MT5\r\nand, according to this [discussion](https://github.com/huggingface/transformers/pull/8880#issuecomment-737113053), I found the script did not have things like \r\n`model.resize_token_embeddings(len(tokenizer))`\r\n\r\nMy question: should I run method resize_token_embeddings before start the training model.\r\n\r\n",
"also facing same issue as @sakares ..Have you solved it?",
"Can you guys try again on master -> this should have been fixed by now: https://github.com/huggingface/transformers/issues/9055#issuecomment-745450713",
"@acul3 No luck yet. But I found the alternative solution with PyTorch Lightning script [\"Finetune MT5 for Question Generation in Hindi\"](https://www.kaggle.com/parthplc/finetune-mt5-for-question-generation-in-hindi/) and it works as expected",
" i manage to solve my problem by changing tokenizer to `google/mt5-base' instead of 't5-base'(my mistake) and install transformers from source(master) as @patrickvonplaten told\r\n\r\nwill try to look that script @sakares..thank you"
] | 1,605 | 1,608 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 4.0.0 --rc1
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): TESLA V4
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): T5-1.1
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
TOKENIZER_NAME = 't5-base'
MODEL_NAME = 'google/t5-v1_1-base'
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
model = T5ForConditionalGeneration.from_pretrained('google/t5-v1_1-base', return_dict=True)
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Custom dataset
## To reproduce
Steps to reproduce the behavior:
1. Attempting to add entity tokens to T5 1.1; upon loading the saved model with `from_pretrained`, the following error occurs:
`size mismatch for lm_head.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32102, 768]).`
I am assuming the addition of the special tokens did not get propagated to the lm head size.
I would expect the LM Head to be resized in addition to the standard layers.
Many Thanks,
Chris
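As a stopgap until the library-side fix discussed in the comments above, one possible manual workaround is to resize the untied `lm_head` explicitly after `resize_token_embeddings`, before training and saving. This is only a sketch reusing the `model` and `tokenizer` from the snippet above, not an official API:
```python
import torch.nn as nn

# Grow/shrink the output projection to match the new tokenizer size, copying
# as many pretrained rows as fit; any new rows stay randomly initialised.
old_lm_head = model.lm_head
new_lm_head = nn.Linear(old_lm_head.in_features, len(tokenizer), bias=False)
num_to_copy = min(old_lm_head.out_features, len(tokenizer))
new_lm_head.weight.data[:num_to_copy, :] = old_lm_head.weight.data[:num_to_copy, :]
model.lm_head = new_lm_head.to(model.device)
model.config.vocab_size = len(tokenizer)  # keep the saved config consistent
```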
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8706/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8706/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8705/comments | https://api.github.com/repos/huggingface/transformers/issues/8705/events | https://github.com/huggingface/transformers/issues/8705 | 748,100,879 | MDU6SXNzdWU3NDgxMDA4Nzk= | 8,705 | DPRReaderTokenizers returns, for multiple passages given, only the tokens & masks of one passage | {
"login": "omarsou",
"id": 49435231,
"node_id": "MDQ6VXNlcjQ5NDM1MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/49435231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarsou",
"html_url": "https://github.com/omarsou",
"followers_url": "https://api.github.com/users/omarsou/followers",
"following_url": "https://api.github.com/users/omarsou/following{/other_user}",
"gists_url": "https://api.github.com/users/omarsou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarsou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarsou/subscriptions",
"organizations_url": "https://api.github.com/users/omarsou/orgs",
"repos_url": "https://api.github.com/users/omarsou/repos",
"events_url": "https://api.github.com/users/omarsou/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarsou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, \r\nA little update, I think I fixed this issue. \r\nThe tokenizer returns a tensor of shape (n_passages, n_sequence_length) but only because I have duplicated the question like [questions] * n_passages. It was not clear on the documentation since I thought it was automatically done. "
] | 1,605 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Colab Notebook
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
Model I am using : DPRReaderTokenizer
The problem arises when using:
DPRReaderTokenizer, instead of returning as many tensors as there are passages, returns only one. According to the documentation it should return a tensor of shape (n_passages, sequence_length), but it returns (1, sequence_length) on basic examples.
The tasks I am working on is:
* Tokenization with DPRReaderTokenizer on multiple passages (texts)
## To reproduce
```python
from transformers import AlbertTokenizer, AlbertForQuestionAnswering, DPRReader, DPRReaderTokenizer, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer_DPR = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model_DPR = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True).cuda()

encoded_inputs = tokenizer_DPR(
    questions=["What is Transformers?"],
    titles=['Attention is all you need', 'One famous library'],
    texts=['Attention is a new mechanism designed to improve the performance of the seq2seq models',
           'One of the most famous NLP library is called Transformers'],
    padding=True,
    return_tensors='pt'
)
encoded_inputs
```
```
{'input_ids': tensor([[  101,  2054,  2003, 19081,  1029,   102,  3086,  2003,  2035,  2017,
          2342,   102,  3086,  2003,  1037,  2047,  7337,  2881,  2000,  5335,
          1996,  2836,  1997,  1996,  7367,  4160,  2475,  3366,  4160,  4275]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
          1, 1, 1, 1, 1, 1]])}
```
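For reference, a sketch of the workaround described in the author's follow-up comment above: repeat the question once per passage, so the tokenizer produces one row per passage (same `tokenizer_DPR` as in the snippet above):
```python
encoded_inputs = tokenizer_DPR(
    questions=["What is Transformers?"] * 2,  # one copy of the question per passage
    titles=['Attention is all you need', 'One famous library'],
    texts=['Attention is a new mechanism designed to improve the performance of the seq2seq models',
           'One of the most famous NLP library is called Transformers'],
    padding=True,
    return_tensors='pt'
)
# encoded_inputs['input_ids'] now has shape (2, sequence_length)
```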
## Remarks
The expected output is two tensors, but I got only one. 👎 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8704/comments | https://api.github.com/repos/huggingface/transformers/issues/8704/events | https://github.com/huggingface/transformers/issues/8704 | 748,065,368 | MDU6SXNzdWU3NDgwNjUzNjg= | 8,704 | Generating from mT5 | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"mT5 is not pretrained on downstream tasks like T5 was - see: https://huggingface.co/transformers/master/model_summary.html#mt5 \r\n\r\nSo it not surprising that mT5 won't work well out-of-the-box without fine-tuning.",
"Ah, I hadn't realised that. But in that case, wouldn't the expected output be a reconstruction of the input?",
"> Ah, I hadn't realised that. But in that case, wouldn't the expected output be a reconstruction of the input?\r\n\r\nHard to say if the model does not include any sentinel tokens (`<extra_id_1>`) and if one uses `generate()` instead of just the forward pass.... . Wolud be interesting to play around with the two pre-trained model variants though and see what differences they show...",
"I agree that I would only get reconstruction if the decoding setup matched training :) Can you point me at any documentation that describes what special tokens are expected? I dug around in your implementation and the official repo but couldn't see anything. The output of `tokenizer.prepare_seq2seq_batch()` is the same for src and tgt as well (presumably because it uses the T5 tokenizer - does it not need its own?)\r\n\r\nEdit: Looking again, it seems like the sentinel tokens are just the equivalent of `[MASK]`? In which case the model should be able to reconstruct the input if it has access to the full (un-noised) sequence.",
"Maybe these pointers help:\r\n- https://github.com/huggingface/transformers/issues/7451\r\n- https://github.com/huggingface/transformers/issues/7910\r\n- https://github.com/huggingface/transformers/issues/3985\r\n\r\nmT5 is pretrained exactly like T5 only without the downstream supersived training mixin. I think the T5 paper should explain in detail how this in done.",
"Does anybody have some more pointers on how to use (train) the mT5 model that has been added to master for text generation? Anything explaining how the finetuning is done in practice using Huggingface Transformers would be greatly appreciated!",
"Hey @Rijgersberg, what exactly do you mean by text generation ? GPT2-like open-end text generation?",
"Well not open-end text generation in the sense of \"writing\", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls \"few shot learning\".\r\n\r\nSpecifically, I would be interested in replicating the [WT5?! Training Text-to-Text Models to Explain their Predictions](https://arxiv.org/abs/2004.14546) results in languages other than English. But I'm having some trouble understanding what the differences between the T5 and mT5 models in Transformers mean for accomplishing that task.",
"Hey @tomhosking how did you use MT5ForConditionalGeneration, T5Tokenizer\r\nI used \r\n```\r\npip install transformers\r\n```\r\nBut it is showing \r\n```\r\nImportError: cannot import name 'MT5ForConditionalGeneration'\r\n```\r\nHow can we install it?🤔\r\n",
"@parthplc You can specify version of package You would like to install. For me it was experimental: `transformers==4.0.0rc1` and it works fine.\r\n\r\nFor training mT5 model for generating summary You can check out [this](https://towardsdatascience.com/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81) post. It worked for me.\r\n\r\n[edit]\r\nI forgot to mention, the only modification You have to make is to replace `T5ForConditionalGeneration` with `MT5ForConditionalGeneration`.",
"> Well not open-end text generation in the sense of \"writing\", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls \"few shot learning\".\r\n> \r\n> Specifically, I would be interested in replicating the [WT5?! Training Text-to-Text Models to Explain their Predictions](https://arxiv.org/abs/2004.14546) results in languages other than English. But I'm having some trouble understanding what the differences between the T5 and mT5 models in Transformers mean for accomplishing that task.\r\n\r\nIn this case, I would just fine-tune mT5 with the normal causal language modeling objective meaning:\r\n\r\n```python\r\nfrom transformers import MT5ForConditionalGeneration, T5Tokenizer \r\nmt5 = MT5ForConditionalGeneration.from_pretrained(\"google/mt5-base\")\r\nmt5_tok = T5Tokenizer.from_pretrained(\"google/mt5-base\")\r\n\r\ninput_ids = mt5_tok(\"explain sentiment: I went to see this movie with my husband, and we both thought the acting was terrible!\", return_tensors=\"pt\").input_ids # in the language of your choice\r\nlabels = mt5_tok(\"negative explanation: the acting was terrible.\", return_tensors=\"pt\").input_ids # in the language of your choice\r\n\r\nloss = mt5(input_ids=input_ids, labels=labels).loss\r\n```\r\n\r\nI took one of the visual examples of the paper you mentioned.\r\n\r\nIn short, there is no difference in how mt5 and t5 should be fine-tuned.\r\n\r\nAlso, @mrm8488 already successfully fine-tuned an mT5 model: https://twitter.com/mrm8488/status/1329478063768350723\r\nsorry to ping you here @mrm8488 - but maybe you have some tips/tricks for mt5 fine-tuning? \r\n\r\nAlso pinging our T5 fine-tuning expert @patil-suraj ",
"> Well not open-end text generation in the sense of \"writing\", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls \"few shot learning\".\r\n\r\nI'm not sure if you can use mT5 with no training (fine-tuning), since it was not pre-trained with any supervised objective like `T5`. \r\n\r\nOne experiment to try is to fine-tune `mT5` on the english data and see if it works for your language without any language specific fine-tuning (In my experiments, `T5` trained on English SQuAD for que gen gave interesting results for French and German without any language specific fine-tuning).\r\n\r\nBut for better results you should fine-tune `mT5` on the language specific dataset.\r\n\r\nAnd also as Patrick said, you can fine-tune `mT5` and `T5` the same way. \r\nThe major differences between `mT5` and `T5` are\r\n- `mT5` is based on [`T51.1`](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511)\r\n- pre-trained on 101 languages\r\n- no supervised pre-training",
"Hi, I slightly modified the script provided by @patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet.",
"> Hi, I slightly modified the script provided by @ patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet.\r\n\r\njust merged it :-) BTW, you can now directly create the model cards online - no need for PRs anymore ;-)",
"> > Well not open-end text generation in the sense of \"writing\", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls \"few shot learning\".\r\n> \r\n> I'm not sure if you can use mT5 with no training (fine-tuning), since it was not pre-trained with any supervised objective like `T5`.\r\n> \r\n> One experiment to try is to fine-tune `mT5` on the english data and see if it works for your language without any language specific fine-tuning (In my experiments, `T5` trained on English SQuAD for que gen gave interesting results for French and German without any language specific fine-tuning).\r\n> \r\n> But for better results you should fine-tune `mT5` on the language specific dataset.\r\n> \r\n> And also as Patrick said, you can fine-tune `mT5` and `T5` the same way.\r\n> The major differences between `mT5` and `T5` are\r\n> \r\n> * `mT5` is based on [`T51.1`](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511)\r\n> * pre-trained on 101 languages\r\n> * no supervised pre-training\r\n\r\nhey @patil-suraj @mrm8488 how can we finetune mT5 for other languages. Let's suppose we have language translation problem for any language other than English and if we finetune using T5 tokenizer we would be replacing each word with unk tokens. how will it be fine-tuned? eg.\r\n```\r\nprint(tokenizer.decode(data['source_ids']))\r\nprint(tokenizer.decode(data['target_ids']))\r\n```\r\n```\r\nEnglish to Hindi: Tell me the name of the ninth month.</s> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>\r\n<unk> <unk> <unk> <unk> <unk> <unk> </s> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>\r\n```\r\n",
"@parthplc - I don't really understand your question. Since mT5 was trained on 101 languages it's tokenizer can obviously handle all those languages, *e.g.*:\r\n\r\n```python \r\nfrom transformers import AutoTokenizer\r\n\r\ntok = AutoTokenizer.from_pretrained(\"google/mt5-small\")\r\ntok.decode(tok(\"Der Satz wird auch definiert als sprachliche Einheit, die aus Subjekt und Prädikat besteht. Dies soll auf Aristoteles zurückgehen. Entsprechend definiert die traditionelle Grammatik den Satz als bestehend aus: Satzaussage (Prädikat), Satzergänzung (Objekt) und Satzgegenstand (Subjekt).\").input_ids) # gives no <unk> symbols\r\n```\r\n\r\nHopefully, this makes more sense now",
"> ## Environment info\r\n> * `transformers` version: #9c0afdaf7b091c341072b432ad6ee17ba7a5016b\r\n> * Platform: Google colab\r\n> * Python version: 3.6.9\r\n> * PyTorch version (GPU?): 1.7.0\r\n> No GPU\r\n> \r\n> ### Who can help\r\n> mT5: @patrickvonplaten\r\n> \r\n> ## Information\r\n> Generating from `mT5-small` gives (nearly) empty output:\r\n> \r\n> ```\r\n> from transformers import MT5ForConditionalGeneration, T5Tokenizer\r\n> model = MT5ForConditionalGeneration.from_pretrained(\"google/mt5-small\")\r\n> tokenizer = T5Tokenizer.from_pretrained(\"google/mt5-small\")\r\n> article = \"translate to french: The capital of France is Paris.\"\r\n> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors=\"pt\")\r\n> output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1)\r\n> tokenizer.decode(output_ids[0])\r\n> ```\r\n> \r\n> `>>> <pad> <extra_id_0></s>`\r\n> \r\n> Using the same input for T5 gives reasonable output:\r\n> \r\n> ```\r\n> from transformers import T5ForConditionalGeneration, T5Tokenizer\r\n> model = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n> tokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\n> article = \"translate to french: The capital of France is Paris.\"\r\n> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors=\"pt\")\r\n> output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1)\r\n> tokenizer.decode(output_ids[0])\r\n> ```\r\n> \r\n> `>>> <pad> La capitale de la France est Paris.</s>`\r\n> \r\n> My understanding is that mT5 is trained in the same way as T5, and should work in a very similar way?\r\n\r\nHi, I met the same problem when fine-tuning mt5 to a Chinese QG environment. I'm wondering if you have solved this issue?",
"hi @nomoreoneday \r\n\r\n`mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5.\r\n\r\nYou should fine-tune the model on your task, to use for generation. \r\n\r\n> I met the same problem when fine-tuning mt5 to a Chinese QG environment\r\n\r\nAnd if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.",
"> hi @nomoreoneday\r\n> \r\n> `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5.\r\n> \r\n> You should fine-tune the model on your task, to use for generation.\r\n> \r\n> > I met the same problem when fine-tuning mt5 to a Chinese QG environment\r\n> \r\n> And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.\r\n\r\nhi @patil-suraj \r\n\r\nthanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run \r\n```\r\nexport CUDA_VISIBLE_DEVICES=0\r\npython3 eval.py \\\r\n --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \\\r\n --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \\\r\n --model_type mt5 \\\r\n --num_beams 4 \\\r\n --max_decoding_length 32 \\\r\n --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt\r\n```\r\nBut when I trying to construct the pipeline and run:\r\n\r\n`\r\ndef _extract_answers(self,context):\r\n\r\n sents,inputs = self._prepare_inputs_for_ans_extraction(context)\r\n inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding\r\n print(\"inputs after encoding:\",inputs)\r\n\r\n outs = self.ans_model.generate(\r\n input_ids = inputs['input_ids'].to(self.device),\r\n attention_mask = inputs['attention_mask'].to(self.device),\r\n max_length = 32,\r\n )\r\n\r\n dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding\r\n print(\"dec:\", dec)\r\n answers = [item.split('<sep>') for item in dec]\r\n print(\"answers1:\",answers)\r\n answers = [i[:-1] for i in answers]\r\n print(\"answers2:\",answers)\r\n\r\n return sents, answers\r\n`\r\n\r\nI got the empty answers. like this\r\n\r\n`dec: ['<pad> <extra_id_0></s>']\r\nanswers1: [['<pad> <extra_id_0></s>']]\r\nanswers2: [[]]`",
"I wondering if there is any difference in data preprocessing between t5 and mt5. ",
"> > hi @nomoreoneday\r\n> > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5.\r\n> > You should fine-tune the model on your task, to use for generation.\r\n> > > I met the same problem when fine-tuning mt5 to a Chinese QG environment\r\n> > \r\n> > \r\n> > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.\r\n> \r\n> hi @patil-suraj\r\n> \r\n> thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run\r\n> \r\n> ```\r\n> export CUDA_VISIBLE_DEVICES=0\r\n> python3 eval.py \\\r\n> --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \\\r\n> --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \\\r\n> --model_type mt5 \\\r\n> --num_beams 4 \\\r\n> --max_decoding_length 32 \\\r\n> --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt\r\n> ```\r\n> \r\n> But when I trying to construct the pipeline and run:\r\n> \r\n> `\r\n> def _extract_answers(self,context):\r\n> \r\n> ```\r\n> sents,inputs = self._prepare_inputs_for_ans_extraction(context)\r\n> inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding\r\n> print(\"inputs after encoding:\",inputs)\r\n> \r\n> outs = self.ans_model.generate(\r\n> input_ids = inputs['input_ids'].to(self.device),\r\n> attention_mask = inputs['attention_mask'].to(self.device),\r\n> max_length = 32,\r\n> )\r\n> \r\n> dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding\r\n> print(\"dec:\", dec)\r\n> answers = [item.split('<sep>') for item in dec]\r\n> print(\"answers1:\",answers)\r\n> answers = [i[:-1] for i in answers]\r\n> print(\"answers2:\",answers)\r\n> \r\n> return sents, answers\r\n> ```\r\n> \r\n> `\r\n> \r\n> I got the empty answers. like this\r\n> \r\n> `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]`\r\n\r\nhaving the same issue here",
"I have finally overcome the ['<pad> <extra_id_0></s>'] issue and obtained decent post-training predictions with MT5, just had to \r\n1. lower the lr (set to 0.001, as indicated in mt5 paper) \r\nand \r\n2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning). \r\n\r\n@nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should?",
"> Hi, I slightly modified the script provided by @patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet.\r\n\r\n@mrm8488 Hi, for this model mrm8488/mT5-small-finetuned-tydiqa-for-xqa , I tried to run your demo script, but failed with error loading the tokenizer. And the Hosted inference API on this page doesn't work as well.\r\n hope to see your feedback.",
"> > hi @nomoreoneday\r\n> > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5.\r\n> > You should fine-tune the model on your task, to use for generation.\r\n> > > I met the same problem when fine-tuning mt5 to a Chinese QG environment\r\n> > \r\n> > \r\n> > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.\r\n> \r\n> hi @patil-suraj\r\n> \r\n> thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run\r\n> \r\n> ```\r\n> export CUDA_VISIBLE_DEVICES=0\r\n> python3 eval.py \\\r\n> --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \\\r\n> --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \\\r\n> --model_type mt5 \\\r\n> --num_beams 4 \\\r\n> --max_decoding_length 32 \\\r\n> --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt\r\n> ```\r\n> \r\n> But when I trying to construct the pipeline and run:\r\n> \r\n> `\r\n> def _extract_answers(self,context):\r\n> \r\n> ```\r\n> sents,inputs = self._prepare_inputs_for_ans_extraction(context)\r\n> inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding\r\n> print(\"inputs after encoding:\",inputs)\r\n> \r\n> outs = self.ans_model.generate(\r\n> input_ids = inputs['input_ids'].to(self.device),\r\n> attention_mask = inputs['attention_mask'].to(self.device),\r\n> max_length = 32,\r\n> )\r\n> \r\n> dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding\r\n> print(\"dec:\", dec)\r\n> answers = [item.split('<sep>') for item in dec]\r\n> print(\"answers1:\",answers)\r\n> answers = [i[:-1] for i in answers]\r\n> print(\"answers2:\",answers)\r\n> \r\n> return sents, answers\r\n> ```\r\n> \r\n> `\r\n> \r\n> I got the empty answers. like this\r\n> \r\n> `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]`\r\n\r\nSame too ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> I have finally overcome the [' <extra_id_0>'] issue and obtained decent post-training predictions with MT5, just had to\r\n> \r\n> 1. lower the lr (set to 0.001, as indicated in mt5 paper)\r\n> and\r\n> 2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning).\r\n> \r\n> @nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should?\r\n\r\nThankyou,I have same problem, I am trying train more epochs, to see if it can be correct",
"> > I have finally overcome the [' <extra_id_0>'] issue and obtained decent post-training predictions with MT5, just had to\r\n> > \r\n> > 1. lower the lr (set to 0.001, as indicated in mt5 paper)\r\n> > and\r\n> > 2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning).\r\n> > \r\n> > @nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should?\r\n> \r\n> Thankyou,I have same problem, I am trying train more epochs, to see if it can be correct\r\n\r\nDo you have any newer ideas about this problem?",
"> > > hi @nomoreoneday\r\n> > > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5.\r\n> > > You should fine-tune the model on your task, to use for generation.\r\n> > > > I met the same problem when fine-tuning mt5 to a Chinese QG environment\r\n> > > \r\n> > > \r\n> > > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.\r\n> > \r\n> > \r\n> > hi @patil-suraj\r\n> > thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run\r\n> > ```\r\n> > export CUDA_VISIBLE_DEVICES=0\r\n> > python3 eval.py \\\r\n> > --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \\\r\n> > --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \\\r\n> > --model_type mt5 \\\r\n> > --num_beams 4 \\\r\n> > --max_decoding_length 32 \\\r\n> > --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > But when I trying to construct the pipeline and run:\r\n> > `\r\n> > def _extract_answers(self,context):\r\n> > ```\r\n> > sents,inputs = self._prepare_inputs_for_ans_extraction(context)\r\n> > inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding\r\n> > print(\"inputs after encoding:\",inputs)\r\n> > \r\n> > outs = self.ans_model.generate(\r\n> > input_ids = inputs['input_ids'].to(self.device),\r\n> > attention_mask = inputs['attention_mask'].to(self.device),\r\n> > max_length = 32,\r\n> > )\r\n> > \r\n> > dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding\r\n> > print(\"dec:\", dec)\r\n> > answers = [item.split('<sep>') for item in dec]\r\n> > print(\"answers1:\",answers)\r\n> > answers = [i[:-1] for i in answers]\r\n> > print(\"answers2:\",answers)\r\n> > \r\n> > return sents, answers\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > `\r\n> > I got the empty answers. like this\r\n> > `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]`\r\n> \r\n> having the same issue here\r\n\r\nI am also having the same issue",
"Sorry I'm loosing a bit track of what the problem is here. Note that `mt5` cannot generate coherent sentences out-of-the-box because it's only be pretrained on the span-mask filling task and not on any down-stream tasks."
] | 1,605 | 1,661 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: #9c0afdaf7b091c341072b432ad6ee17ba7a5016b
- Platform: Google colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0
No GPU
### Who can help
mT5: @patrickvonplaten
## Information
Generating from `mT5-small` gives (nearly) empty output:
```
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
article = "translate to french: The capital of France is Paris."
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt")
output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1)
tokenizer.decode(output_ids[0])
```
`>>> <pad> <extra_id_0></s>`
Using the same input for T5 gives reasonable output:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")
article = "translate to french: The capital of France is Paris."
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt")
output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1)
tokenizer.decode(output_ids[0])
```
`>>> <pad> La capitale de la France est Paris.</s>`
My understanding is that mT5 is trained in the same way as T5, and should work in a very similar way?
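As the comments above explain, mT5 was pretrained only on span-mask filling, so without fine-tuning the only input format it has actually seen contains sentinel tokens rather than task prefixes. A sketch of that format follows, using the mT5 `model` and `tokenizer` from the first snippet (not the t5-small ones); it assumes the mT5 tokenizer recognises `<extra_id_0>`, and even then the completion is not guaranteed to be meaningful without fine-tuning:
```python
masked = "The capital of France is <extra_id_0>."
batch = tokenizer(masked, return_tensors="pt")
output_ids = model.generate(input_ids=batch.input_ids)
print(tokenizer.decode(output_ids[0]))  # predicts a short span for <extra_id_0>
```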
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8704/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8703/comments | https://api.github.com/repos/huggingface/transformers/issues/8703/events | https://github.com/huggingface/transformers/issues/8703 | 748,026,472 | MDU6SXNzdWU3NDgwMjY0NzI= | 8,703 | providing the user with possibility to set the cache path | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This path is the default `cache_path` of the datasets library, not transformers. You can change it by setting an environment variable named `HF_HOME` to the path you want, the datasets will then be cached in this path suffixed with \"/datasets/\"",
"Hi\nthank you, the huggingface codes gebnerally also create an empty folder\ntitled ' ' when I run it, which is specifies the caching folder address,\ncould it be possible not to create this folder?\nthanks\nBest\nRabeeh\n\nOn Sat, Nov 21, 2020 at 6:44 PM Sylvain Gugger <[email protected]>\nwrote:\n\n> This path is the default cache_path of the datasets library, not\n> transformers. You can change it by setting an environment variable named\n> HF_HOME to the path you want, the datasets will then be cached in this\n> path suffixed with \"/datasets/\"\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8703#issuecomment-731611519>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCDCFJDCXZAPZ73ORHDSQ74AZANCNFSM4T5ZWXDA>\n> .\n>\n",
"We would need to see the code you are running that creates this empty folder named `\" \"` to be able to help.",
"Hi there\nI am training seq2seq_trainer codes. I have adapted it for my use case but\nin the original version of codes should also happen.\nthanks\nBest\nRabeeh\n\nOn Sun, Nov 22, 2020, 4:46 AM Sylvain Gugger <[email protected]>\nwrote:\n\n> We would need to see the code you are running that creates this empty\n> folder named \" \" to be able to help.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8703#issuecomment-731693963>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCCKYJ5TFIKGT2N7LTLSRCCP5ANCNFSM4T5ZWXDA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | Dear HuggingFace team,
In the Hugging Face code, most of the time there is a cache path that defaults to the home directory, e.g.
/idiap/home/rkarimi/.cache/huggingface/datasets/downloads
Could you provide me with a command to set a different cache path? It looks hard-coded to me, or I am missing something.
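For reference, a minimal sketch of the `HF_HOME` approach mentioned in the comments above (assuming the environment variable is honored as described there; the path is only a placeholder):
```python
import os

# Must be set before importing transformers/datasets so the libraries pick it up.
os.environ["HF_HOME"] = "/path/to/your/cache"  # placeholder location, not a real path

from datasets import load_dataset

# Downloads and processed files should now land under /path/to/your/cache/datasets/
dataset = load_dataset("glue", "mrpc")
```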
Thank you.
Best regards
Rabeeh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8703/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8702/comments | https://api.github.com/repos/huggingface/transformers/issues/8702/events | https://github.com/huggingface/transformers/issues/8702 | 748,011,457 | MDU6SXNzdWU3NDgwMTE0NTc= | 8,702 | Question about beam_sample: using two softmax? | {
"login": "binshengliu",
"id": 441707,
"node_id": "MDQ6VXNlcjQ0MTcwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/441707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binshengliu",
"html_url": "https://github.com/binshengliu",
"followers_url": "https://api.github.com/users/binshengliu/followers",
"following_url": "https://api.github.com/users/binshengliu/following{/other_user}",
"gists_url": "https://api.github.com/users/binshengliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binshengliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binshengliu/subscriptions",
"organizations_url": "https://api.github.com/users/binshengliu/orgs",
"repos_url": "https://api.github.com/users/binshengliu/repos",
"events_url": "https://api.github.com/users/binshengliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/binshengliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After submitting I realized `F.softmax(F.log_softmax())` is equivalent to `F.softmax()`. The different probability values I noticed comes from the normalization of multiple beams in one dimension (line 1169). That doesn't change the behaviour of the sampling."
] | 1,605 | 1,605 | 1,605 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-4.15.0-122-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten @TevenLeScao
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
I just have some confusion about the current beam_sample code. Please disregard this if I have misunderstood.
I noticed in current beam_sample code there are two softmax operations to produce the token probabilities, line 1161 and 1171 in the following snippet. After 1171, the final probabilities would be similar to `F.softmax(F.log_softmax(next_token_logits, dim=-1), dim=-1)` which is very different from what we usually get by `softmax(logits)`. Similarly in [top-p filtering](https://github.com/huggingface/transformers/blob/9c0afdaf7b091c341072b432ad6ee17ba7a5016b/src/transformers/generation_logits_process.py#L184) `F.softmax` is used on log values. But shouldn't we use `exp` to recover the probabilities in these cases?
https://github.com/huggingface/transformers/blob/9c0afdaf7b091c341072b432ad6ee17ba7a5016b/src/transformers/generation_utils.py#L1161-L1171
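As the comment above notes, the two operations turn out to be equivalent; a quick hedged check of why (log_softmax only subtracts a per-row constant, and softmax is invariant to constant shifts):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5)

p_direct = F.softmax(logits, dim=-1)
p_double = F.softmax(F.log_softmax(logits, dim=-1), dim=-1)

# log_softmax(x) = x - logsumexp(x): a constant shift per row,
# which softmax ignores, so the two results agree.
print(torch.allclose(p_direct, p_double))  # True
```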
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8701/comments | https://api.github.com/repos/huggingface/transformers/issues/8701/events | https://github.com/huggingface/transformers/issues/8701 | 747,958,265 | MDU6SXNzdWU3NDc5NTgyNjU= | 8,701 | TypeError: an integer is required (got type NoneType) | {
"login": "parthplc",
"id": 35425925,
"node_id": "MDQ6VXNlcjM1NDI1OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/35425925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthplc",
"html_url": "https://github.com/parthplc",
"followers_url": "https://api.github.com/users/parthplc/followers",
"following_url": "https://api.github.com/users/parthplc/following{/other_user}",
"gists_url": "https://api.github.com/users/parthplc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthplc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthplc/subscriptions",
"organizations_url": "https://api.github.com/users/parthplc/orgs",
"repos_url": "https://api.github.com/users/parthplc/repos",
"events_url": "https://api.github.com/users/parthplc/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthplc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @parthplc,\r\n\r\nCould you try to post a very short code snippet that can reproduce the error. It's too time-consuming to go through such a big notebook sadly",
"@parthplc Please post a solution if you close the issue. There is none in your colab as far as I can see (`model.resize_token_embeddings(len(tokenizer))` is the only relevant code I can see and it doesn't fix the problem)."
] | 1,605 | 1,611 | 1,606 | NONE | null | While using the Trainer class to fine-tune GPT-2 on a Hindi dataset, it outputs the following error:
```
TypeError Traceback (most recent call last)
<ipython-input-44-3435b262f1ae> in <module>()
----> 1 trainer.train()
5 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py in __getitem__(self, i)
99
100 def __getitem__(self, i) -> torch.Tensor:
--> 101 return torch.tensor(self.examples[i], dtype=torch.long)
102
103
TypeError: an integer is required (got type NoneType)
```
Here is the link: https://colab.research.google.com/drive/1um5UeY9hasmjPNcR1WkBe2uDDFhLUBrX?usp=sharing
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8700/comments | https://api.github.com/repos/huggingface/transformers/issues/8700/events | https://github.com/huggingface/transformers/issues/8700 | 747,925,386 | MDU6SXNzdWU3NDc5MjUzODY= | 8,700 | training text_classification with tpu using xla_spawn gives wrong result | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! I'm not sure I completely understand the issue. Are you saying you do not obtain the results you expected when using the 8 cores of the TPU, vs using a single TPU core?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | CONTRIBUTOR | null | I tested text_classification code with tpu and gpu versions shown below.
The TPU version (Colab, 8 cores) and the GPU version (Colab) take 2 min and 17 min respectively, which is nice.
Training the GPU version gives good behavior, where the loss decreases continuously.
However, for the TPU version, the dataset is effectively split into 8 segments (so training takes about 1/8 of the time), but the cores do not seem to be connected: the result from each TPU core is just what a single GPU would give when trained on 1/8 of the original dataset.
1.
Should I change something?
Or, if I'd like to use a TPU, do I have to use the TF version?
2.
My final goal is to train RoBERTa on a TPU for Korean.
There are three options.
1. Huggingface Trainer with xla --> I am here.
2. Huggingface TFTrainer --> TFTrainer supports TPU, but I would need to build the MLM datasets myself.
3. Fairseq with xla
If there are any sources or examples, please let me know.
Thanks,
# tpu version (same one shown in the document 'https://github.com/huggingface/transformers/tree/master/examples/text-classification'):
python examples/xla_spawn.py \
--num_cores=8 \
transformers/examples/text-classification/run_glue.py \
--do_train \
--do_eval \
--task_name=mrpc \
--num_train_epochs=3 \
--max_seq_length=128 \
--learning_rate=5e-5 \
--output_dir=/tmp/mrpc \
--overwrite_output_dir \
--logging_steps=5 \
--save_steps=5 \
--tpu_metrics_debug \
--model_name_or_path=bert-base-cased \
--per_device_train_batch_size=64 \
--per_device_eval_batch_size=64
# single gpu version (remove --num_cores=8, --tpu_metrics_debug)
python examples/xla_spawn.py \
transformers/examples/text-classification/run_glue.py \
--do_train \
--do_eval \
--task_name=mrpc \
--num_train_epochs=3 \
--max_seq_length=128 \
--learning_rate=5e-5 \
--output_dir=/tmp/mrpc \
--overwrite_output_dir \
--logging_steps=5 \
--save_steps=5 \
--model_name_or_path=bert-base-cased \
--per_device_train_batch_size=64 \
--per_device_eval_batch_size=64 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8699/comments | https://api.github.com/repos/huggingface/transformers/issues/8699/events | https://github.com/huggingface/transformers/issues/8699 | 747,914,350 | MDU6SXNzdWU3NDc5MTQzNTA= | 8,699 | Cannot load tokenizer in community T5 pretrained model | {
"login": "minhtam2048",
"id": 45004329,
"node_id": "MDQ6VXNlcjQ1MDA0MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45004329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhtam2048",
"html_url": "https://github.com/minhtam2048",
"followers_url": "https://api.github.com/users/minhtam2048/followers",
"following_url": "https://api.github.com/users/minhtam2048/following{/other_user}",
"gists_url": "https://api.github.com/users/minhtam2048/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhtam2048/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhtam2048/subscriptions",
"organizations_url": "https://api.github.com/users/minhtam2048/orgs",
"repos_url": "https://api.github.com/users/minhtam2048/repos",
"events_url": "https://api.github.com/users/minhtam2048/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhtam2048/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, that model from @sshleifer does not bundle its own tokenizer, as you can see in the list of files: https://huggingface.co/sshleifer/t5-base-cnn/tree/main\r\n\r\nWe'll add this info to the model card, but you can just use the one from t5: `T5Tokenizer.from_pretrained(\"t5-base\")`",
"@julien-c Thank you for your help",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5
- Platform: Window 10
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0+cpu (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
T5: @patrickvonplaten
## Information
I'm trying to use **sshleifer/t5-base-cnn** for a summarization task, but it seems like there is something wrong with the tokenizer.
tokenizer = T5Tokenizer.from_pretrained('sshleifer/t5-base-cnn')
model = T5ForConditionalGeneration.from_pretrained('sshleifer/t5-base-cnn')
This code returns an error:
OSError: Can't load tokenizer for 'sshleifer/t5-base-cnn'. Make sure that:
- 'sshleifer/t5-base-cnn' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sshleifer/t5-base-cnn' is the correct path to a directory containing relevant tokenizer files
Can someone point out what I am missing, or is there a problem with my code?
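For reference, a minimal sketch of the workaround given in the comments above — the checkpoint ships no tokenizer files, so the stock `t5-base` tokenizer is loaded instead (the input text is just a placeholder):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # fall back to the base T5 tokenizer
model = T5ForConditionalGeneration.from_pretrained("sshleifer/t5-base-cnn")

inputs = tokenizer("summarize: " + "long article text goes here ...", return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```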
Many thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8699/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8698/comments | https://api.github.com/repos/huggingface/transformers/issues/8698/events | https://github.com/huggingface/transformers/issues/8698 | 747,898,509 | MDU6SXNzdWU3NDc4OTg1MDk= | 8,698 | CSV/JSON file format for examples/token-classification/run_ner.py | {
"login": "ganeshjawahar",
"id": 4785960,
"node_id": "MDQ6VXNlcjQ3ODU5NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4785960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganeshjawahar",
"html_url": "https://github.com/ganeshjawahar",
"followers_url": "https://api.github.com/users/ganeshjawahar/followers",
"following_url": "https://api.github.com/users/ganeshjawahar/following{/other_user}",
"gists_url": "https://api.github.com/users/ganeshjawahar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganeshjawahar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganeshjawahar/subscriptions",
"organizations_url": "https://api.github.com/users/ganeshjawahar/orgs",
"repos_url": "https://api.github.com/users/ganeshjawahar/repos",
"events_url": "https://api.github.com/users/ganeshjawahar/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganeshjawahar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ganeshjawahar , please have a look at the `run_NER_old.py` script! It should handle custom files 🤗 ",
"Usage and more examples are documented here:\r\n\r\nhttps://github.com/huggingface/transformers/tree/master/examples/token-classification#old-version-of-the-script",
"Thanks for the quick response. I'm able to make use of `run_ner_old.py` with my custom dataset. Is there a similar documentation to use `run_ner.py` with custom dataset? \r\n\r\nP.S.: `run_ner_old.py` loads all examples into RAM and that's a problem for me as my custom dataset is very large. I was thinking of getting around this issue by using `run_ner.py` which uses datasets library. ",
"If you can provide a tiny example for csv or json format, that should be very helpful. 🤗",
"Ah, I see, an example for a json-based file format can be found here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/conll/sample.json\r\n\r\nAnother possibility would be, that you write a custom recipe with Hugging Face datasets library. Then you can run the `run_NER.py` script by passing the (local) path name of your recipe to the script. Just have a look at the CoNNL dataset/recipe:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py\r\n\r\nYou could usw it as a template and modify it for your needs 🤗 ",
"I think the JSON sample should be in the [token-classification README](https://github.com/huggingface/transformers/blob/master/examples/token-classification/README.md) for people trying to use `run_ner.py` from local files. Would you also be willing to provide a CSV sample? So far, I have found through trial, error, and code deciphering that:\r\n\r\n- The CSV needs to start with column names (not respecting this causes `ValueError: External features info don't match the dataset`)\r\n- The column separator should be a comma (`,`)\r\n- Text containing commas should be in double quotes (like this `\",\"`) to disambiguate columns\r\n- Literal double quotes should be escaped with `\\`\r\n\r\nRight now, my CSV file looks like this:\r\n```\r\ntoken,label\r\nDC,M \r\n##T,M \r\n##N,M \r\n##4,M \r\nas,O \r\na,O \r\nm,O \r\n##od,O \r\n##ifier,O\r\n...\r\n```\r\n\r\nI get the following error:\r\n```\r\nFile \"projects/github/transformers/examples/token-classification/run_ner.py\", line 221, in main\r\n if isinstance(features[label_column_name].feature, ClassLabel):\r\nAttributeError: 'Value' object has no attribute 'feature'\r\n```\r\n\r\nUsing the python debugger, I've found that `features[label_column_name] = Value(dtype='string', id=None)` but I don't know if this is expected behavior. I can only assume that it isn't, but I can't seem to figure out what else `features[label_column_name]` could or should be.\r\n\r\nI'm pretty much stuck, and knowing if the issue comes from the structure of my CSV would be very helpful.\r\n\r\nFurthermore, I've tried formatting my data as close as I could to the [JSON conll sample](https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/conll/sample.json), but I get the following error:\r\n```\r\njson.decoder.JSONDecodeError: Extra data: line 2 column 1\r\n```\r\nAfter a little bit of googling, as I suspected it turns out one cannot have multiple JSON objects in one file. So if the intended JSON format for `run_ner.py` requires one JSON object per sequence but JSON files can't contain more than one JSON object, how can we get `run_ner.py` to work with several sequences in JSON mode?",
"Exact same process/issue/errors as @gpiat. Would be very helpful if the format for the csv option for run_ner.py was explicitly defined in the readme. If there was a sample input for the csv option that is fully functional with the script it would be much more simple to modify our custom data to match the sample as opposed to writing a custom recipe.",
"Same problem as @gpiat with CSV.\r\n@stefan-it And it seems the old script is no longer available?",
"I believe I've solved the same problem as @gpiat , @millanbatra1234 and @AleksandrsBerdicevskis have had:\r\n\r\nReplace the `if isinstance(features[label_column_name].feature, ClassLabel):` in run_ner.py with `if hasattr(features[label_column_name], 'feature') and isinstance(features[label_column_name].feature, ClassLabel):`. \r\n\r\nI tried @gpiat's CSV format and that doesn't work. Instead, I used the JSON format, which looks like this:\r\n\r\n```\r\n{\"tokens\": [\"APPLICATION\", \"and\", \"Affidavit\", \"for\", \"Search\", \"Warrant\", \"as\", \"to\", \"The\", \"Matter\", \"of\", \"the\", \"Search\", \"of\", \"9\", \"Granite\", \"Street\", \",\", \"#\", \"5\", \"(\", \"Attachments\", \":\", \"#\", \"1\", \"Affidavit\", \"of\", \"James\", \"Keczkemethy)(Belpedio\", \",\", \"Lisa\", \")\", \"(\", \"Entered\", \":\", \"12/15/2020\", \")\"], \"tags\": [\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"L-MISC\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\"]}\r\n{\"tokens\": [\"APPLICATION\", \"for\", \"Search\", \"Warrant\", \"by\", \"USA\", \"as\", \"to\", \"702\", \"-\", \"517\", \"-\", \"7282\", \"(\", \"KM\", \",\", \"ilcd\", \")\", \"(\", \"Entered\", \":\", \"12/10/2020\", \")\"], \"tags\": [\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"L-MISC\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\"]}\r\n{\"tokens\": [\"APPLICATION\", \"AND\", \"AFFIDAVIT\", \"by\", \"USA\", \"as\", \"to\", \"4\", \"CELLULAR\", \"TELEPHONES\", \"SEIZED\", \"FROM\", \"THE\", \"FDC\", \"IN\", \"PHILADELPHIA\", \"AND\", \"CURRENTLY\", \"HELD\", \"BY\", \"THE\", \"FBI\", \"PHILADELPHIA\", \"DIVISION\", \"Re\", \":\", \"Search\", \"Warrant\", \"Issued\", \".\", \"(\", \"mac\", \",\", \")\", \"(\", \"Entered\", \":\", \"12/09/2020\", \")\"], \"tags\": [\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"I-MISC\", \"L-MISC\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\"]}\r\n```\r\n\r\nSo, yes, you can have more than one JSON object in the file. Each JSON object goes on its own line. This is sometimes called JSONL or JSONLINES format.\r\n\r\n",
"@jeremybmerrill Thanks! Yes, JSON does work, I should have mentioned that (it actually does even without changing the code as you suggest). \r\n\r\n(With JSON, I run into another input problem (#9660), but I guess that's a different story.)",
"In my case the json format didn't work due to this issue [github.com/huggingface/datasets/issues/2181](https://github.com/huggingface/datasets/issues/2181). Pyarrow can't handle json if the line size is too big. So I had to split large lines into smaller ones.",
"JSON works, but CSV still does not work now",
"CSV file input does not work! I converted it into JSON, so It works now. \r\n"
] | 1,605 | 1,646 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@mfuntowicz, @stefan-it
## Information
Model I am using (Bert, XLNet ...): XLM-R
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
https://github.com/huggingface/transformers/tree/master/examples/token-classification
```
python run_ner.py \
--model_name_or_path bert-base-uncased \
--train_file path_to_train_file \
--validation_file path_to_validation_file \
--output_dir /tmp/test-ner \
--do_train \
--do_eval
```
I am trying to perform NER on a custom dataset. It's not clear what the format of `path_to_train_file` and `path_to_validation_file` should be. From the code, it seems that the file format should be CSV or JSON. Could you please give more details on this so that I can format my dataset accordingly?
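For reference, a hedged sketch of the JSON-lines layout that the comments above converge on — one JSON object per line with `tokens` and `tags` lists (the field names follow the example shown in one of the comments, not an official spec), written here from Python for convenience:
```python
import json

examples = [
    {"tokens": ["John", "lives", "in", "Berlin"], "tags": ["B-PER", "O", "O", "B-LOC"]},
    {"tokens": ["ACME", "Corp", "hired", "Mary"], "tags": ["B-ORG", "I-ORG", "O", "B-PER"]},
]

# One JSON object per line ("JSON lines"); the resulting file can be passed as --train_file.
with open("train.json", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```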
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8698/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8698/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8697/comments | https://api.github.com/repos/huggingface/transformers/issues/8697/events | https://github.com/huggingface/transformers/pull/8697 | 747,835,040 | MDExOlB1bGxSZXF1ZXN0NTI0OTg2MjE4 | 8,697 | test | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8697/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8697",
"html_url": "https://github.com/huggingface/transformers/pull/8697",
"diff_url": "https://github.com/huggingface/transformers/pull/8697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8697.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8696/comments | https://api.github.com/repos/huggingface/transformers/issues/8696/events | https://github.com/huggingface/transformers/pull/8696 | 747,812,174 | MDExOlB1bGxSZXF1ZXN0NTI0OTY3MTMx | 8,696 | gpt2 and t5 parallel modeling | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
}
] | closed | false | null | [] | [
"Would it be a good idea to support a less painful way of writing a device_map? This is because as the developer experiments with the mapping, the current method is very inefficient to modify the layer maps.\r\n\r\nPerhaps there could be more than one way to do it?\r\n\r\nInstead of:\r\n```\r\n device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8],\r\n 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],\r\n 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],\r\n 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}\r\n\r\n```\r\n\r\ne.g. some ideas for perhaps much simpler ways to create such a map:\r\n\r\n* remap the layers as follows:\r\n```\r\ndevice_map = {\r\n devices: [0, 3, 4, 5],\r\n layer_split: [8, 12, 12, 12],\r\n}\r\n```\r\n* a simple string:\r\n```\r\ndevice_map = {\r\n devices: [0, 1, 2, 3],\r\n layer_map: \"1-10, 9-21, 22-34, 35-47\",\r\n}\r\n```\r\n* a simple string using slice notation\r\n```\r\ndevice_map = {\r\n devices: [0, 1, 2, 3],\r\n layer_slice: \"1:10, 9:21, 22:34, 35:47\",\r\n}\r\n```\r\n\r\nprobably several ways can be supported and a wrapper expand them into the explicit version used now based on the keys of the map in the argument.\r\n\r\nin either case, changing the map is much easier then...\r\n",
"Not a bad idea. Adding to that: create a mapping utility that's been tested for larger model types like `device_map = get_device_map(machine = 4, model = \"gpt2-xl\")`. The first device should have fewer layers because it has the embedding and head. This is what I was using for testing:\r\n\r\n```\r\ndef get_device_map(machine: str, model_name: str) -> dict:\r\n \"\"\"Returns a dictionary optimized for distributing a model across\r\n several devices in a model parallel manner.\"\"\"\r\n if machine in [\"TeslaV100x4\", \"p3.8xlarge\", 4]:\r\n device_dict = {\r\n \"gpt2-xl\": {\r\n 0: [0, 1, 2, 3, 4, 5, 6, 7, 8],\r\n 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],\r\n 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],\r\n 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47],\r\n },\r\n \"t5-large\": {\r\n 0: [0, 1, 2],\r\n 1: [3, 4, 5, 6, 7, 8, 9],\r\n 2: [10, 11, 12, 13, 14, 15, 16],\r\n 3: [17, 18, 19, 20, 21, 22, 23],\r\n },\r\n \"t5-3b\": {\r\n 0: [0, 1, 2],\r\n 1: [3, 4, 5, 6, 7, 8, 9],\r\n 2: [10, 11, 12, 13, 14, 15, 16],\r\n 3: [17, 18, 19, 20, 21, 22, 23],\r\n },\r\n \"t5-11b\": {\r\n 0: [0, 1, 2],\r\n 1: [3, 4, 5, 6, 7, 8, 9],\r\n 2: [10, 11, 12, 13, 14, 15, 16],\r\n 3: [17, 18, 19, 20, 21, 22, 23],\r\n },\r\n }\r\n return device_dict[model_name]\r\n```",
"This is definitely a goodness to have in the library as well as it will save the developer start up time! I'd even extend it to a specific card size in the argument to `get_device_map`, since the map would be different depending on the size of the cards.\r\n\r\nBut this won't work for cards of different sizes, e.g. at the moment I have 1x 22GB + 1x 8GB cards. But perhaps this is an odd case and most serious setups have identical cards. I don't know. \r\n",
"@stas00 Yeah, I think you're right. Could do something simple like `device_map` dictionary should be ranges like this:\r\n\r\n```\r\ndevice_map = {0: range(0, 10),\r\n 1: range(11, 24),\r\n ...}\r\n```\r\nSimpler than creating a list.",
"well, it's the same just using python to save on typing ;) this is still awkward a bit as you have to count ;)\r\n\r\nhere there is less counting: I want you to use devices `[0,1,2,3]` and slice the layers as `[8, 12, 12, 12]` :)\r\n\r\nThat's why I'm suggesting to support more than one way.",
"But most likely any of these custom ways can be easily delegated to a helper util, so the end result is the `device_map` as you implemented it. e..g:\r\n```\r\ndevice_map=device_map_make_by_partition([0,1,2,3], [8, 12, 12, 12])\r\ndevice_map=device_map_make_by_slice([0,1,2,3], \"1:10, 9:21, 22:34, 35:47\")\r\n```\r\n"
] | 1,605 | 1,609 | 1,606 | CONTRIBUTOR | null | # Model Parallelism for T5 and GPT2
Adds two new methods to the `GPT2LMHeadModel` and `GPT2Model` classes to enable you to generate and fine-tune models using model parallelism. This feature is most applicable for `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data-parallelism behavior and the related batch_size increases, which would negate model parallelism. Note that nearly 64 GB of GPU memory (4 Tesla V100s) is needed to fine-tune `gpt2-xl` at 1024 tokens.
It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances.
Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can.
The methods are:
- `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map
- `deparallelize`, which will move the model back to cpu
# Example
```
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8],
1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],
3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}
model.parallelize(device_map) # Distributes the model's attention blocks across several devices
model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory
```
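As a side note, a hedged sketch of the partition-style helper floated in the discussion comments above; the function name and signature are hypothetical and not part of this PR — `parallelize` itself only takes the explicit dictionary shown in the example:
```python
# Hypothetical convenience helper (not in this PR): expand per-device layer counts
# into the explicit block map that parallelize() expects.
def device_map_make_by_partition(devices, layer_counts):
    assert len(devices) == len(layer_counts)
    device_map, start = {}, 0
    for device, count in zip(devices, layer_counts):
        device_map[device] = list(range(start, start + count))
        start += count
    return device_map

# Reproduces the gpt2-xl map above: {0: [0..8], 1: [9..21], 2: [22..34], 3: [35..47]}
device_map = device_map_make_by_partition([0, 1, 2, 3], [9, 13, 13, 13])
```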
## Reviewers
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8696/reactions",
"total_count": 10,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 9,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8696",
"html_url": "https://github.com/huggingface/transformers/pull/8696",
"diff_url": "https://github.com/huggingface/transformers/pull/8696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8696.patch",
"merged_at": 1606160484000
} |
https://api.github.com/repos/huggingface/transformers/issues/8695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8695/comments | https://api.github.com/repos/huggingface/transformers/issues/8695/events | https://github.com/huggingface/transformers/pull/8695 | 747,804,697 | MDExOlB1bGxSZXF1ZXN0NTI0OTYwNzAx | 8,695 | Update README.md to fix typo | {
"login": "siddiqaa",
"id": 15359637,
"node_id": "MDQ6VXNlcjE1MzU5NjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/15359637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddiqaa",
"html_url": "https://github.com/siddiqaa",
"followers_url": "https://api.github.com/users/siddiqaa/followers",
"following_url": "https://api.github.com/users/siddiqaa/following{/other_user}",
"gists_url": "https://api.github.com/users/siddiqaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddiqaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddiqaa/subscriptions",
"organizations_url": "https://api.github.com/users/siddiqaa/orgs",
"repos_url": "https://api.github.com/users/siddiqaa/repos",
"events_url": "https://api.github.com/users/siddiqaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddiqaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj this pull request can be closed the typo has already been fixed. ",
"Thanks for letting me know."
] | 1,605 | 1,617 | 1,617 | NONE | null | Fix typo on line 45
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8695/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8695",
"html_url": "https://github.com/huggingface/transformers/pull/8695",
"diff_url": "https://github.com/huggingface/transformers/pull/8695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8695.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8694/comments | https://api.github.com/repos/huggingface/transformers/issues/8694/events | https://github.com/huggingface/transformers/pull/8694 | 747,779,895 | MDExOlB1bGxSZXF1ZXN0NTI0OTM5NzQ0 | 8,694 | [Generate Test] fix flaky ci | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In PR #8686, I forgot to change the test accordingly, which caused CI to be flaky.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8694",
"html_url": "https://github.com/huggingface/transformers/pull/8694",
"diff_url": "https://github.com/huggingface/transformers/pull/8694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8694.patch",
"merged_at": 1605906442000
} |
https://api.github.com/repos/huggingface/transformers/issues/8693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8693/comments | https://api.github.com/repos/huggingface/transformers/issues/8693/events | https://github.com/huggingface/transformers/pull/8693 | 747,770,823 | MDExOlB1bGxSZXF1ZXN0NTI0OTMyMjIx | 8,693 | update tensorflow to functional version | {
"login": "bbatha",
"id": 301475,
"node_id": "MDQ6VXNlcjMwMTQ3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/301475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bbatha",
"html_url": "https://github.com/bbatha",
"followers_url": "https://api.github.com/users/bbatha/followers",
"following_url": "https://api.github.com/users/bbatha/following{/other_user}",
"gists_url": "https://api.github.com/users/bbatha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bbatha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bbatha/subscriptions",
"organizations_url": "https://api.github.com/users/bbatha/orgs",
"repos_url": "https://api.github.com/users/bbatha/repos",
"events_url": "https://api.github.com/users/bbatha/events{/privacy}",
"received_events_url": "https://api.github.com/users/bbatha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was solved by https://github.com/huggingface/transformers/pull/8616\r\n\r\nThank you for your contribution!"
] | 1,605 | 1,606 | 1,606 | NONE | null | ## What does this PR do?
Related to #7333: notebooks/02-transformers.ipynb has you install an unsupported version of TensorFlow.
Fixes # N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8693/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8693",
"html_url": "https://github.com/huggingface/transformers/pull/8693",
"diff_url": "https://github.com/huggingface/transformers/pull/8693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8693.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8692/comments | https://api.github.com/repos/huggingface/transformers/issues/8692/events | https://github.com/huggingface/transformers/issues/8692 | 747,743,982 | MDU6SXNzdWU3NDc3NDM5ODI= | 8,692 | issues with seq length with inference code for classification | {
"login": "sashank06",
"id": 8636933,
"node_id": "MDQ6VXNlcjg2MzY5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8636933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashank06",
"html_url": "https://github.com/sashank06",
"followers_url": "https://api.github.com/users/sashank06/followers",
"following_url": "https://api.github.com/users/sashank06/following{/other_user}",
"gists_url": "https://api.github.com/users/sashank06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashank06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashank06/subscriptions",
"organizations_url": "https://api.github.com/users/sashank06/orgs",
"repos_url": "https://api.github.com/users/sashank06/repos",
"events_url": "https://api.github.com/users/sashank06/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashank06/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There is an issue here because your sequence is now to long for your model. The model only supports sequences of size 512 or less, but this code:\r\n\r\n```py\r\nsequence_0 = \"The company HuggingFace is based in New York City\" * 100\r\nsequence_1 = \"Apples are especially bad for your health\"\r\nsequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\r\nparaphrase = tokenizer(sequence_0, sequence_2, return_tensors=\"pt\")\r\nnot_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors=\"pt\")\r\n```\r\ncreates tensors of length `1310` and `1313`, which is too long for your model. You should enable the truncation parameter on your tokenizer to ensure that the length is correct:\r\n```py\r\nparaphrase = tokenizer(sequence_0, sequence_2, return_tensors=\"pt\", truncation=True)\r\nnot_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors=\"pt\", truncation=True)\r\n```"
] | 1,605 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik , @mfuntowicz , @VictorSanh
## Information
Model I am using: BERT
The problem arises when using:
* [X] the official example scripts: (give details below)
Modified the official scripts slightly to change the length of the input sequence.
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=True)
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City" * 100
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits
paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]
# Should be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")```
```
The tasks I am working on is:
* [X ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. If you run the code above, you run into the RunTime Error
Error:
```
Token indices sequence length is longer than the specified maximum sequence length for this model (1313 > 512). Running this sequence through the model will result in indexing errors
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-f386a657dfdb> in <module>()
9 paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
10 not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
---> 11 paraphrase_classification_logits = model(**paraphrase).logits
12 not_paraphrase_classification_logits = model(**not_paraphrase).logits
13 paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
5 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
199 token_type_embeddings = self.token_type_embeddings(token_type_ids)
200
--> 201 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
202 embeddings = self.LayerNorm(embeddings)
203 embeddings = self.dropout(embeddings)
RuntimeError: The size of tensor a (1313) must match the size of tensor b (512) at non-singleton dimension 1
```
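For reference, a minimal sketch of the same example with truncation enabled, as suggested in the comment above (same checkpoint and sequences; the length check is only illustrative):

```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=True)

sequence_0 = "The company HuggingFace is based in New York City" * 100
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

# truncation=True caps the encoded pair at the model's maximum length (512 for this
# checkpoint), avoiding the position-embedding size mismatch shown in the traceback.
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt", truncation=True)
assert paraphrase["input_ids"].shape[1] <= tokenizer.model_max_length

paraphrase_classification_logits = model(**paraphrase).logits
```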
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8691/comments | https://api.github.com/repos/huggingface/transformers/issues/8691/events | https://github.com/huggingface/transformers/issues/8691 | 747,717,774 | MDU6SXNzdWU3NDc3MTc3NzQ= | 8,691 | Pegasus example not working | {
"login": "greenstars",
"id": 23225390,
"node_id": "MDQ6VXNlcjIzMjI1Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/23225390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greenstars",
"html_url": "https://github.com/greenstars",
"followers_url": "https://api.github.com/users/greenstars/followers",
"following_url": "https://api.github.com/users/greenstars/following{/other_user}",
"gists_url": "https://api.github.com/users/greenstars/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greenstars/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greenstars/subscriptions",
"organizations_url": "https://api.github.com/users/greenstars/orgs",
"repos_url": "https://api.github.com/users/greenstars/repos",
"events_url": "https://api.github.com/users/greenstars/events{/privacy}",
"received_events_url": "https://api.github.com/users/greenstars/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@greenstars having the same issue - How did you resolve this?",
"> @greenstars having the same issue - How did you resolve this?\r\n\r\n@EliaKunz I changed \"!pip install git+https://github.com/huggingface/transformers.git\" to \"!pip install transformers\". ",
"Thx! Was on datalore with the latest transformers 4 - downgraded to 3.5 and everything is working now.",
"I had the same issue with the latest transformers 4.1 (pip installed). It's fixed after adding return_tensors point. \r\n\r\nFrom\r\n\r\n`batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)`\r\n\r\nto\r\n\r\n`batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)`\r\n\r\ndid the job for me.\r\n",
"On running \r\n`batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors=\"pt\").to(device)`\r\nI am getting the error\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-49-e6e55e18a32c> in <module>()\r\n----> 1 batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors=\"pt\").to(device)\r\n 2 translated = model.generate(**batch)\r\n 3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nTypeError: 'NoneType' object is not callable\r\n```\r\nand on running `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)`\r\nI am getting the error\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-50-b7183fa2a37c> in <module>()\r\n----> 1 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)\r\n 2 translated = model.generate(**batch)\r\n 3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nAttributeError: 'NoneType' object has no attribute 'prepare_seq2seq_batch'\r\n```\r\n\r\nAny help would be greatly appreciated. ",
"@YatinKapoor your tokenizer seems to be `None`",
"Need to replace PegasusTokenizer with AutoTokenizer\r\n```\r\nfrom transformers import PegasusForConditionalGeneration, AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n````\r\n",
"@maifeng thanks! AutoTokenizer did the job for me! ",
"> I had the same issue with the latest transformers 4.1 (pip installed). It's fixed after adding return_tensors point.\r\n> \r\n> From\r\n> \r\n> `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)`\r\n> \r\n> to\r\n> \r\n> `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)`\r\n> \r\n> did the job for me.\r\n\r\nWorked for me"
] | 1,605 | 1,673 | 1,605 | NONE | null | @patrickvonplaten
Hi,
I am trying to run the Pegasus example on Colab.
"!pip install git+https://github.com/huggingface/transformers.git
!pip install sentencepiece
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to hundreds of thousands of customers.
Collecting git+https://github.com/huggingface/transformers.git
Cloning https://github.com/huggingface/transformers.git to /tmp/pip-req-build-gvb7jrr9
Running command git clone -q https://github.com/huggingface/transformers.git /tmp/pip-req-build-gvb7jrr9
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied (use --upgrade to upgrade): transformers==4.0.0rc1 from git+https://github.com/huggingface/transformers.git in /usr/local/lib/python3.6/dist-packages
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (2.23.0)
Requirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.9.4)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (4.41.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (1.18.5)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (3.0.12)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.7)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (20.4)
Requirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.0.43)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (2019.12.20)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (2.10)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->transformers==4.0.0rc1) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers==4.0.0rc1) (2.4.7)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==4.0.0rc1) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==4.0.0rc1) (0.17.0)
Building wheels for collected packages: transformers
Building wheel for transformers (PEP 517) ... done
Created wheel for transformers: filename=transformers-4.0.0rc1-cp36-none-any.whl size=1349475 sha256=8f08b76fc03d4cd0c1532e37462b5f1682fc58ad7f92ed533533b276fc4ecaf5
Stored in directory: /tmp/pip-ephem-wheel-cache-8gbsru65/wheels/33/eb/3b/4bf5dd835e865e472d4fc0754f35ac0edb08fe852e8f21655f
Successfully built transformers
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (0.1.94)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-ad40feda49b0> in <module>()
12 tokenizer = PegasusTokenizer.from_pretrained(model_name)
13 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
---> 14 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
15 translated = model.generate(**batch)
16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in wrapper(*args, **kwargs)
1236 def wrapper(*args, **kwargs):
1237 if is_torch_available():
-> 1238 return func(*args, **kwargs)
1239 else:
1240 raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in to(self, device)
777 modification.
778 """
--> 779 self.data = {k: v.to(device) for k, v in self.data.items()}
780 return self
781
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in <dictcomp>(.0)
777 modification.
778 """
--> 779 self.data = {k: v.to(device) for k, v in self.data.items()}
780 return self
781
AttributeError: 'list' object has no attribute 'to' "
Please help.
Thanks,
Akila
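Putting the fixes from the comments together, a sketch of a working version of the example (the key changes are using `AutoTokenizer` and passing `return_tensors="pt"`; the rest follows the original snippet):

```py
import torch
from transformers import PegasusForConditionalGeneration, AutoTokenizer

model_name = "google/pegasus-xsum"
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

src_text = [
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
]

# return_tensors="pt" makes the batch a dict of tensors, so .to(torch_device) works.
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text[0])
```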
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8691/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8690/comments | https://api.github.com/repos/huggingface/transformers/issues/8690/events | https://github.com/huggingface/transformers/issues/8690 | 747,677,540 | MDU6SXNzdWU3NDc2Nzc1NDA= | 8,690 | connection issue | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Having a similar issue while running Multi class classification model",
"@patrickvonplaten @sumyuck @sgugger ",
"Hi\r\nI am constantly getting this erorr, looks like a bug to me since sometimes it appears sometimes not, could you please help me, this is expensive experiments I am trying on TPUs and I appreciate your help to fix it, it just many times fails due to this error\r\n\r\ngetting this erorr Exception in device=TPU:0: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\nel/0 I1124 07:19:52.663760 424494 main shadow.py:87 > Traceback (most recent call last):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\r\n fn(gindex, *args)\r\n File \"/workdir/seq2seq/finetune_t5_trainer.py\", line 230, in _mp_fn\r\n main()\r\n File \"/workdir/seq2seq/finetune_t5_trainer.py\", line 71, in main\r\n cache_dir=model_args.cache_dir,\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/configuration_utils.py\", line 347, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/configuration_utils.py\", line 388, in get_config_dict\r\n local_files_only=local_files_only,\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/file_utils.py\", line 955, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/file_utils.py\", line 1125, in get_from_cache\r\n \"Connection error, and we cannot find the requested files in the cached path.\"",
"@sumyuck",
"@thomwolf ",
"this is with transformer 3.5.1, pytorch 1.6, on TPU v3-8, and I am using xla_spawn to launch the jobs, looks like a general issue with caching part. ",
"Same for me. Getting this error while trying to execute following line:\r\n tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')\r\n\r\n File \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 1629, in from_pretrained\r\n local_files_only=local_files_only,\r\n File \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py\", line 955, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py\", line 1125, in get_from_cache\r\n \"Connection error, and we cannot find the requested files in the cached path.\"\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n",
"to me this is not a connection issue. i do have connection but an issue in\ncaching mechanism.\n\nOn Wed, Nov 25, 2020, 2:33 AM Alkesh <[email protected]> wrote:\n\n> Same for me. Getting this error while trying to execute following line:\n> tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')\n>\n> File\n> \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\",\n> line 1629, in from_pretrained\n> local_files_only=local_files_only,\n> File\n> \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py\",\n> line 955, in cached_path\n> local_files_only=local_files_only,\n> File\n> \"/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py\",\n> line 1125, in get_from_cache\n> \"Connection error, and we cannot find the requested files in the cached\n> path.\"\n> ValueError: Connection error, and we cannot find the requested files in\n> the cached path. Please try again or make sure your Internet connection is\n> on.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8690#issuecomment-733405868>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGOHVMHGA33EGSQ6UTSRRNGTANCNFSM4T5CBSUA>\n> .\n>\n",
"I am having the same issue too. I am pointing to the cache directory where pytorch is saving the models:\r\n`cache_dir = '/home/me/.cache/torch/transformers/' \r\n\r\nmodelpath = \"bert-base-uncased\" \r\n\r\nmodel = AutoModel.from_pretrained(modelpath, cache_dir=cache_dir) \r\n\r\ntokenizer = AutoTokenizer.from_pretrained(modelpath, cache_dir=cache_dir) \r\n`\r\nAnd I am getting a connection error. pytorch: 1.7.0, transformers: 3.5.1.",
"Working on a fix, hopefully fixed for good today.\r\n\r\nMeanwhile as a workaround please retry a couple minutes later should do the trick",
" I deleted all cache, redownloaded all modes and ran again. It seems to be working as of now. ",
"Scaling of connectivity for model hosting should be way improved now. Please comment here if you still experience connectivity issues from now on.\r\n\r\nThanks!",
"I am still getting this error with transformers version - 3.5.1 and torch - 1.7.0 on python 3.6.9. Please check. I have tried deleting all cache, installing transformers using pip and source code both. But still getting the same issue again and again.",
"@AshishDuhan Are you loading a model in particular? Do you have a code snippet that consistently fails for you? ",
"_import torch\r\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\n\r\nsrc_text = [\"\"\"<TEXT-HERE>\"\"\"]\r\nmodel_name='google/pegasus-cnn_dailymail'\r\ntorch_device='cuda' if torch.cuda.is_available() else 'cpu'\r\ntokenizer=PegasusTokenizer.from_pretrained(model_name)\r\nmodel=PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)\r\nbatch=tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)\r\ntranslated=model.generate(**batch)\r\ntgt_text=tokenizer.batch_decode(translated, skip_special_tokens=True)\r\nprint('Summary:', tgt_text[0])_\r\n\r\n\r\n**This is one of the models I am trying to load. Although I have tried other models too and nothing works. Even the basic command fail with following error:**\r\n\r\n**python -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))\"**\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/pipelines.py\", line 2828, in pipeline\r\n framework = framework or get_framework(model)\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/pipelines.py\", line 106, in get_framework\r\n model = AutoModel.from_pretrained(model, revision=revision)\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/modeling_auto.py\", line 636, in from_pretrained\r\n pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/configuration_auto.py\", line 333, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/configuration_utils.py\", line 388, in get_config_dict\r\n local_files_only=local_files_only,\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/file_utils.py\", line 955, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/file_utils.py\", line 1125, in get_from_cache\r\n \"Connection error, and we cannot find the requested files in the cached path.\"\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.",
"Our connectivity has been good these past 24 hours so this might be a different (local) issue, @AshishDuhan.\r\n\r\nAre you behind a proxy by any chance?\r\n\r\nDoes `curl -i https://huggingface.co/google/pegasus-cnn_dailymail/resolve/main/config.json` work from your machine? Can you try what you're doing from a machine in the cloud, like a Google Colab?",
"I am facing the same issue still - \r\n\r\nTraceback (most recent call last):\r\n File \"Untitled.py\", line 59, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"emilyalsentzer/Bio_ClinicalBERT\")\r\n File \"/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py\", line 310, in from_pretrained\r\n config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py\", line 341, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 386, in get_config_dict\r\n local_files_only=local_files_only,\r\n File \"/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/file_utils.py\", line 1007, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/file_utils.py\", line 1177, in get_from_cache\r\n \"Connection error, and we cannot find the requested files in the cached path.\"\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n",
"I'm having the same connection issue. I've tried with and without passing my proxies into the BertModel\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-19-c8b8c602a810> in <module>\r\n 1 from transformers import BertTokenizer, BertModel\r\n----> 2 model = BertModel.from_pretrained(\"bert-base-uncased\", **proxies)\r\n\r\n~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 865 if not isinstance(config, PretrainedConfig):\r\n 866 config_path = config if config is not None else pretrained_model_name_or_path\r\n--> 867 config, model_kwargs = cls.config_class.from_pretrained(\r\n 868 config_path,\r\n 869 *model_args,\r\n\r\n~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 345 \r\n 346 \"\"\"\r\n--> 347 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 348 return cls.from_dict(config_dict, **kwargs)\r\n 349 \r\n\r\n~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 380 try:\r\n 381 # Load from URL or cache if already cached\r\n--> 382 resolved_config_file = cached_path(\r\n 383 config_file,\r\n 384 cache_dir=cache_dir,\r\n\r\n~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)\r\n 946 if is_remote_url(url_or_filename):\r\n 947 # URL, so get it from the cache (downloading if necessary)\r\n--> 948 output_path = get_from_cache(\r\n 949 url_or_filename,\r\n 950 cache_dir=cache_dir,\r\n\r\n~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)\r\n 1122 )\r\n 1123 else:\r\n-> 1124 raise ValueError(\r\n 1125 \"Connection error, and we cannot find the requested files in the cached path.\"\r\n 1126 \" Please try again or make sure your Internet connection is on.\"\r\n\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.",
"Hard to say without seeing your full networking environment.\r\n\r\nIf you try to `curl -I` the URLs that you get on the arrow icons next to files in e.g. https://huggingface.co/bert-base-uncased/tree/main (or equivalent page for the model you try to download), what happens?",
"it happened to me too , is there any fix on that ? ",
"is it transient or permanent (i.e. if you relaunch the command does it happen again)? You need to give us some more details if we want to help you troubleshoot.",
"Hi\r\nI am still getting this issue. see blow. I am using transformer 3.5.1, could you tell me if the issue is fixed in this version? if not which version of transformers library I should use? thanks\r\n@julien-c \r\n\r\n```\r\n 12/13/2020 13:56:10 - INFO - seq2seq.utils.utils - config is reset to the initial values.\r\ntp/0 I1213 06:00:34.060680 252396 main shadow.py:122 > Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 170, in _new_conn\r\n (self._dns_host, self.port), self.timeout, **extra_kw\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 96, in create_connection\r\n raise err\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 86, in create_connection\r\n sock.connect(sa)\r\nsocket.timeout: timed out\r\ntp/0 I1213 06:00:34.060720 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.060759 252396 main shadow.py:122 > During handling of the above exception, another exception occurred:\r\ntp/0 I1213 06:00:34.060825 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.060866 252396 main shadow.py:122 > Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 706, in urlopen\r\n chunked=chunked,\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 382, in _make_request\r\n self._validate_conn(conn)\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 1010, in _validate_conn\r\n conn.connect()\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 353, in connect\r\n conn = self._new_conn()\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 177, in _new_conn\r\n % (self.host, self.timeout),\r\nurllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')\r\ntp/0 I1213 06:00:34.060908 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.060970 252396 main shadow.py:122 > During handling of the above exception, another exception occurred:\r\ntp/0 I1213 06:00:34.061113 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.061207 252396 main shadow.py:122 > Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 449, in send\r\n timeout=timeout\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 756, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py\", line 573, in increment\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)'))\r\ntp/0 I1213 06:00:34.061293 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.061372 252396 main shadow.py:122 > During handling of the above exception, another exception occurred:\r\ntp/0 I1213 06:00:34.061421 252396 main shadow.py:122 > \r\ntp/0 I1213 06:00:34.061486 252396 main shadow.py:122 > Traceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 361, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 269, in main\r\n add_prefix=False if training_args.train_adapters else True)\r\n File \"/workdir/seq2seq/data/tasks.py\", line 70, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"/workdir/seq2seq/data/tasks.py\", line 306, in load_dataset\r\n return datasets.load_dataset('glue', 'cola', split=split)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/load.py\", line 263, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py\", line 200, in head_hf_s3\r\n return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py\", line 403, in http_head\r\n url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/api.py\", line 104, in head\r\n return request('head', url, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 542, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 655, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 504, in send\r\n raise ConnectTimeout(e, request=request)\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))\r\ntp/0 I1213 06:00:35.237288 252396 main waiter_thread.cc:2652 [tp][0] EndSession for client id 1607864609277665002 (server tpe18:6297)\r\n```",
"Looks like you are getting a timeout connecting to `s3.amazonaws.com`. There's not much we can do here.",
"Hi,\r\nI am facing the same issue, the code is running fine on colab but while running it on local system i am getting below error.\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nmodel = AutoModelForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4dd822b7db9b> in <module>\r\n 1 from transformers import AutoTokenizer, AutoModelForMaskedLM\r\n 2 \r\n----> 3 tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n 4 \r\n 5 model = AutoModelForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n\r\n~\\Anaconda3\\envs\\bert-test\\lib\\site-packages\\transformers\\models\\auto\\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 308 config = kwargs.pop(\"config\", None)\r\n 309 if not isinstance(config, PretrainedConfig):\r\n--> 310 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n 311 \r\n 312 if \"bert-base-japanese\" in str(pretrained_model_name_or_path):\r\n\r\n~\\Anaconda3\\envs\\bert-test\\lib\\site-packages\\transformers\\models\\auto\\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 339 {'foo': False}\r\n 340 \"\"\"\r\n--> 341 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 342 \r\n 343 if \"model_type\" in config_dict:\r\n\r\n~\\Anaconda3\\envs\\bert-test\\lib\\site-packages\\transformers\\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 384 proxies=proxies,\r\n 385 resume_download=resume_download,\r\n--> 386 local_files_only=local_files_only,\r\n 387 )\r\n 388 # Load config dict\r\n\r\n~\\Anaconda3\\envs\\bert-test\\lib\\site-packages\\transformers\\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)\r\n 1005 resume_download=resume_download,\r\n 1006 user_agent=user_agent,\r\n-> 1007 local_files_only=local_files_only,\r\n 1008 )\r\n 1009 elif os.path.exists(url_or_filename):\r\n\r\n~\\Anaconda3\\envs\\bert-test\\lib\\site-packages\\transformers\\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)\r\n 1175 else:\r\n 1176 raise ValueError(\r\n-> 1177 \"Connection error, and we cannot find the requested files in the cached path.\"\r\n 1178 \" Please try again or make sure your Internet connection is on.\"\r\n 1179 )\r\n\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.",
"Can you try the debugging procedure mentioned in https://github.com/huggingface/transformers/issues/8690#issuecomment-737246999?",
"i am able to open 8690 in web browser. but the error still remains:\r\n\r\nqa = text.SimpleQA(INDEXDIR)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ktrain\\text\\qa\\core.py in __init__(self, bert_squad_model, bert_emb_model)\r\n 67 try:\r\n---> 68 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name)\r\n 69 except:\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1204 config, kwargs = AutoConfig.from_pretrained(\r\n-> 1205 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs\r\n 1206 )\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 332 \"\"\"\r\n--> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 334 \r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 387 resume_download=resume_download,\r\n--> 388 local_files_only=local_files_only,\r\n 389 )\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)\r\n 954 user_agent=user_agent,\r\n--> 955 local_files_only=local_files_only,\r\n 956 )\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)\r\n 1124 raise ValueError(\r\n-> 1125 \"Connection error, and we cannot find the requested files in the cached path.\"\r\n 1126 \" Please try again or make sure your Internet connection is on.\"\r\n\r\nValueError: Connection error, and we cannot find the requested files in the cached path. 
Please try again or make sure your Internet connection is on.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-72-18505d037255> in <module>\r\n 1 # ask questions (setting higher batch size can further speed up answer retrieval)\r\n----> 2 qa = text.SimpleQA(INDEXDIR)\r\n 3 #answers = qa.ask('What is lotus sutra?', batch_size=8)\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ktrain\\text\\qa\\core.py in __init__(self, index_dir, bert_squad_model, bert_emb_model)\r\n 348 except:\r\n 349 raise ValueError('index_dir has not yet been created - please call SimpleQA.initialize_index(\"%s\")' % (self.index_dir))\r\n--> 350 super().__init__(bert_squad_model=bert_squad_model, bert_emb_model=bert_emb_model)\r\n 351 \r\n 352 \r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ktrain\\text\\qa\\core.py in __init__(self, bert_squad_model, bert_emb_model)\r\n 68 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name)\r\n 69 except:\r\n---> 70 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name, from_pt=True)\r\n 71 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)\r\n 72 self.maxlen = 512\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1203 if not isinstance(config, PretrainedConfig):\r\n 1204 config, kwargs = AutoConfig.from_pretrained(\r\n-> 1205 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs\r\n 1206 )\r\n 1207 \r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 331 {'foo': False}\r\n 332 \"\"\"\r\n--> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 334 \r\n 335 if \"model_type\" in config_dict:\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 386 proxies=proxies,\r\n 387 resume_download=resume_download,\r\n--> 388 local_files_only=local_files_only,\r\n 389 )\r\n 390 # Load config dict\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)\r\n 953 resume_download=resume_download,\r\n 954 user_agent=user_agent,\r\n--> 955 local_files_only=local_files_only,\r\n 956 )\r\n 957 elif os.path.exists(url_or_filename):\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)\r\n 1123 else:\r\n 1124 raise ValueError(\r\n-> 1125 \"Connection error, and we cannot find the requested files in the cached path.\"\r\n 1126 \" Please try again or make sure your Internet connection is on.\"\r\n 1127 )\r\n\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n",
"still get this error for transformer 4.1.1 with torch 1.7.1\r\n\r\nerror message here:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_distributed_eval.py\", line 273, in <module>\r\n run_generate()\r\n File \"run_distributed_eval.py\", line 206, in run_generate\r\n **generate_kwargs,\r\n File \"run_distributed_eval.py\", line 88, in eval_data_dir\r\n tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n File \"/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py\", line 378, in from_pretrained\r\n return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1781, in from_pretrained\r\n use_auth_token=use_auth_token,\r\n File \"/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/file_utils.py\", line 1085, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/file_utils.py\", line 1264, in get_from_cache\r\n \"Connection error, and we cannot find the requested files in the cached path.\"\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n```",
"try transformers 4.00 transformers:4.1 \r\nSame error\r\n\r\n#8690 (comment)\r\nThis can be accessed and downloaded\r\n```\r\nTraceback (most recent call last):\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Software\\Anaconda\\envs\\py38\\Scripts\\rasa.exe\\__main__.py\", line 7, in <module>\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\__main__.py\", line 116, in main\r\n cmdline_arguments.func(cmdline_arguments)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\cli\\train.py\", line 58, in <lambda>\r\n train_parser.set_defaults(func=lambda args: train(args, can_exit=True))\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\cli\\train.py\", line 90, in train\r\n training_result = rasa.train(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\train.py\", line 94, in train\r\n return rasa.utils.common.run_in_loop(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\utils\\common.py\", line 308, in run_in_loop\r\n result = loop.run_until_complete(f)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\asyncio\\base_events.py\", line 616, in run_until_complete\r\n return future.result()\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\train.py\", line 163, in train_async\r\n return await _train_async_internal(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\train.py\", line 342, in _train_async_internal\r\n await _do_training(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\train.py\", line 388, in _do_training\r\n model_path = await _train_nlu_with_validated_data(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\train.py\", line 811, in _train_nlu_with_validated_data\r\n await rasa.nlu.train(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\train.py\", line 97, in train\r\n trainer = Trainer(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\model.py\", line 163, in __init__\r\n self.pipeline = self._build_pipeline(cfg, component_builder)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\model.py\", line 174, in _build_pipeline\r\n component = component_builder.create_component(component_cfg, cfg)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\components.py\", line 852, in create_component\r\n component = registry.create_component_by_config(component_config, cfg)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\registry.py\", line 193, in create_component_by_config\r\n return component_class.create(component_config, config)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\components.py\", line 525, in create\r\n return cls(component_config)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\utils\\hugging_face\\hf_transformers.py\", line 65, in __init__\r\n self._load_model_instance(skip_model_load)\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\rasa\\nlu\\utils\\hugging_face\\hf_transformers.py\", line 121, in _load_model_instance\r\n self.tokenizer = model_tokenizer_dict[self.model_name].from_pretrained(\r\n File 
\"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 1774, in from_pretrained\r\n resolved_vocab_files[file_id] = cached_path(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\transformers\\file_utils.py\", line 1077, in cached_path\r\n output_path = get_from_cache(\r\n File \"f:\\software\\anaconda\\envs\\py38\\lib\\site-packages\\transformers\\file_utils.py\", line 1263, in get_from_cache\r\n raise ValueError(\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on\r\n```",
"I also ran into this error while trying to download any huggingface model. Turns out for me the cause was that I had set an `export REQUESTS_CA_BUNDLE=path/to/some/certificate` in my .bash_profile, which I needed to get some poetry stuff working. Once I removed this line and restarted, the download was working again.",
"It appears to be an SSL/TLS certificate error as @robinderat alludes to, but there are several possible reasons. Here's how I've debugged this, hopefully it helps others although your root cause may be different.\r\n\r\n## Debugging\r\n\r\nOriginal error, fetching model from `https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english`:\r\n\r\n```\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n```\r\n\r\nCheck with `curl`:\r\n\r\n```\r\n$ curl -I https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json\r\ncurl: (60) SSL certificate problem: certificate is not yet valid\r\nMore details here: https://curl.haxx.se/docs/sslcerts.html\r\n\r\ncurl failed to verify the legitimacy of the server and therefore could not\r\nestablish a secure connection to it. To learn more about this situation and\r\nhow to fix it, please visit the web page mentioned above.\r\n```\r\n\r\nChecking with `requests`:\r\n\r\n```\r\n$ python -c \"import requests; requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json')\"\r\nTraceback (most recent call last):\r\n <snip>\r\n File \"/usr/lib/python3.7/ssl.py\", line 412, in wrap_socket\r\n session=session\r\n File \"/usr/lib/python3.7/ssl.py\", line 853, in _create\r\n self.do_handshake()\r\n File \"/usr/lib/python3.7/ssl.py\", line 1117, in do_handshake\r\n self._sslobj.do_handshake()\r\nssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate is not yet valid (_ssl.c:1056)\r\n```\r\n\r\nDisabling curl's certificate validation with `-k` flag works:\r\n\r\n```\r\n$ curl -k -I https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json\r\nHTTP/1.1 200 OK\r\n```\r\n\r\nAnd now in Python, using `verify=False`:\r\n\r\n```\r\n$ python -c \"import requests; r = requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json', verify=False); print(r)\"\r\n/home/josh/source/examples/Machine Learning/Query Optimization/venv/lib/python3.7/site-packages/urllib3/connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'huggingface.co'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\r\n InsecureRequestWarning,\r\n<Response [200]>\r\n```\r\n\r\n## Resolution\r\n\r\nSo the \"problem\" is in the certificate. 
Checking in a browser, the root certificate of `huggingface.co` expires 30 April, 2021 but is valid only from 30 January, 2020.\r\n\r\nChecking my server clock shows that it was out of date (27 January 20201) and critically, *before* the certificate is valid *from*, which makes sense that the root error was \"certificate verify failed: certificate is not yet valid\".\r\n\r\nSet the clock to the real time and check again:\r\n\r\n```\r\n$ sudo date -s \"Feb 11 09:34:03 UTC 2021\"\r\n$ python -c \"import requests; r = requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json'); print(r)\"\r\n<Response [200]>\r\n```\r\n\r\nI now suspect that this host in GCP, which was suspended for a while, did not automatically update it's local time causing this specific problem.\r\n\r\n## Conclusion\r\n\r\n@julien-c I would only suggest at this point that making the root cause visible in the error coming out of `transformers` would be really helpful to more immediately see the problem.\r\n\r\n🎉 "
] | 1,605 | 1,669 | 1,619 | NONE | null | Hi
I am runnig seq2seq_trainer on TPUs I am always getting this connection issue could you please have a look
sicne this is on TPUs this is hard for me to debug
thanks
Best
Rabeeh
2389961.mean (11/20/2020 05:24:09 PM) (Detached)
local_files_only=local_files_only,
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/file_utils.py", line 955, in cached_path
local_files_only=local_files_only,
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/file_utils.py", line 1125, in get_from_cache
"Connection error, and we cannot find the requested files in the cached path."
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Traceback (most recent call last):
File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 71, in <module>
main()
XLA label: %copy.32724.remat = f32[80,12,128,128]{3,2,1,0:T(8,128)} copy(f32[80,12,128,128]{2,3,1,0:T(8,128)} %bitcast.576)
Allocation type: HLO temp
==========================
19. Size: 60.00M
Shape: f32[80,12,128,128]{3,2,1,0:T(8,128)}
Unpadded size: 60.00M
XLA label: %copy.32711.remat = f32[80,12,128,128]{3,2,1,0:T(8,128)} copy(f32[80,12,128,128]{2,3,1,0:T(8,128)
0%| | 2/18060 [08:12<1234:22:09, 246.08s/it]Traceback (most recent call last):
File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 71, in <module>
main()
File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 67, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join
(error_index, exitcode)
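One possible workaround for flaky connectivity when spawning TPU workers is to download everything once up front and then load strictly from disk inside the workers. This is only a sketch; the checkpoint name and paths are assumptions, not taken from this report:

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer

local_dir = "/path/to/t5-base-local"  # hypothetical location on the VM

# Run once, while the connection works:
T5Tokenizer.from_pretrained("t5-base").save_pretrained(local_dir)
T5ForConditionalGeneration.from_pretrained("t5-base").save_pretrained(local_dir)

# Inside the xla_spawn / _mp_fn workers, avoid any network call:
tokenizer = T5Tokenizer.from_pretrained(local_dir, local_files_only=True)
model = T5ForConditionalGeneration.from_pretrained(local_dir, local_files_only=True)
```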
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8690/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8690/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8689/comments | https://api.github.com/repos/huggingface/transformers/issues/8689/events | https://github.com/huggingface/transformers/issues/8689 | 747,560,301 | MDU6SXNzdWU3NDc1NjAzMDE= | 8,689 | [Question] Pegasus tokenizer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No problem at all!\r\n\r\n+ The inheritance is just for the purpose of not duplicating code.\r\n+ You can change to whatever unk you would like, i wasn't at all careful about this stuff since i didn't try to replicate/test pre-training, just fine-tuning and generation.\r\n+ Your changes sound like obvious low risk improvements. \r\n+ I don't know whether standard mask-filling will work for integration testing purposes, given the seq2seq pre-training objective ."
] | 1,605 | 1,606 | 1,606 | MEMBER | null | @sshleifer - sorry to ping you here on this. Would be amazing if you find some time to explain the Pegasus tokenizer a bit.
A couple of things I don't understand:
- In the official Pegasus Tokenizer and from reading the paper it seems that exactly 2 mask tokens are necessary.
See https://github.com/google-research/pegasus/blob/master/pegasus/ops/pretrain_parsing_ops.cc#L66
a) ID=2 seems to correspond to the sentence mask token, called `[MASK_1]` and
b) ID=3 seems to correspond to the word mask token, called `[MASK_2]`
=> Why don't we have `[MASK_1]` and `[MASK_2]` tokens in the tokenizer's special tokens? I would actually add them at ids 2 and 3 instead of having `unk_2` and `unk_3` there (see the sketch after this list). Wdyt?
- Why do we call the tokens unk_2 - unk_104 ? Why unk? And why aren't those part of the `special_tokens_map` - is this on purpose?
- Why does Pegasus inherit from the Reformer Tokenizer -> I don't really see what they have in common...
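For concreteness, a small sketch of the mask-token point (the expected output is taken from the naming described above; note that adding the tokens this way appends new ids rather than reusing 2 and 3, so treat it as an illustration only):

```py
from transformers import PegasusTokenizer

tok = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

# Ids 2 and 3 currently decode to the generic offset tokens mentioned above.
print(tok.convert_ids_to_tokens([2, 3]))  # expected, per the description: ['unk_2', 'unk_3']

# Exposing the two mask tokens as named special tokens would look roughly like this;
# add_special_tokens() appends new ids instead of remapping ids 2 and 3.
tok.add_special_tokens({"additional_special_tokens": ["[MASK_1]", "[MASK_2]"]})
```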
Would be awesome if you could take 10min to reply :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8689/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8688/comments | https://api.github.com/repos/huggingface/transformers/issues/8688/events | https://github.com/huggingface/transformers/pull/8688 | 747,525,523 | MDExOlB1bGxSZXF1ZXN0NTI0NzM0ODY4 | 8,688 | Document adam betas TrainingArguments | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
#5592 introduced two new fields in `TrainingArguments` (`adam_beta1` and `adam_beta2`) without documenting them in the docstring. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8688",
"html_url": "https://github.com/huggingface/transformers/pull/8688",
"diff_url": "https://github.com/huggingface/transformers/pull/8688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8688.patch",
"merged_at": 1605882445000
} |
https://api.github.com/repos/huggingface/transformers/issues/8687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8687/comments | https://api.github.com/repos/huggingface/transformers/issues/8687/events | https://github.com/huggingface/transformers/pull/8687 | 747,478,824 | MDExOlB1bGxSZXF1ZXN0NTI0Njk1NTk3 | 8,687 | added bangla-bert-sentiment model card | {
"login": "sagorbrur",
"id": 10723655,
"node_id": "MDQ6VXNlcjEwNzIzNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagorbrur",
"html_url": "https://github.com/sagorbrur",
"followers_url": "https://api.github.com/users/sagorbrur/followers",
"following_url": "https://api.github.com/users/sagorbrur/following{/other_user}",
"gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions",
"organizations_url": "https://api.github.com/users/sagorbrur/orgs",
"repos_url": "https://api.github.com/users/sagorbrur/repos",
"events_url": "https://api.github.com/users/sagorbrur/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagorbrur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | Hi,
I added a model card for the bangla-bert-sentiment model.
Please check it and merge if possible.
Thanks and regards | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8687/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8687",
"html_url": "https://github.com/huggingface/transformers/pull/8687",
"diff_url": "https://github.com/huggingface/transformers/pull/8687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8687.patch",
"merged_at": 1606128676000
} |
https://api.github.com/repos/huggingface/transformers/issues/8686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8686/comments | https://api.github.com/repos/huggingface/transformers/issues/8686/events | https://github.com/huggingface/transformers/pull/8686 | 747,468,446 | MDExOlB1bGxSZXF1ZXN0NTI0Njg2OTE3 | 8,686 | moved temperature warper before topP/topK warpers | {
"login": "theorm",
"id": 89853,
"node_id": "MDQ6VXNlcjg5ODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/89853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theorm",
"html_url": "https://github.com/theorm",
"followers_url": "https://api.github.com/users/theorm/followers",
"following_url": "https://api.github.com/users/theorm/following{/other_user}",
"gists_url": "https://api.github.com/users/theorm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theorm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theorm/subscriptions",
"organizations_url": "https://api.github.com/users/theorm/orgs",
"repos_url": "https://api.github.com/users/theorm/repos",
"events_url": "https://api.github.com/users/theorm/events{/privacy}",
"received_events_url": "https://api.github.com/users/theorm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Moves the `temperature` warper in `generation_utils.py` before `top_p` and `top_k` warper so that temperature affects sampling. This is how it used to be [before refactoring](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L571-L575) in `v.3.5.x`.
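As a quick illustration of the intended ordering (just a sketch with dummy tensors, using the new logits warper classes rather than the actual diff):
```python
import torch
from transformers import (
    LogitsProcessorList,
    TemperatureLogitsWarper,
    TopKLogitsWarper,
    TopPLogitsWarper,
)

# dummy next-token scores, only to show the ordering
input_ids = torch.tensor([[0]])
scores = torch.randn(1, 50257)

# temperature rescales the distribution first, so top-k / top-p then
# truncate the rescaled scores instead of the raw ones
warpers = LogitsProcessorList(
    [
        TemperatureLogitsWarper(0.7),
        TopKLogitsWarper(top_k=50),
        TopPLogitsWarper(top_p=0.9),
    ]
)
scores = warpers(input_ids, scores)
```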
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8686/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8686",
"html_url": "https://github.com/huggingface/transformers/pull/8686",
"diff_url": "https://github.com/huggingface/transformers/pull/8686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8686.patch",
"merged_at": 1605897235000
} |
https://api.github.com/repos/huggingface/transformers/issues/8685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8685/comments | https://api.github.com/repos/huggingface/transformers/issues/8685/events | https://github.com/huggingface/transformers/issues/8685 | 747,459,567 | MDU6SXNzdWU3NDc0NTk1Njc= | 8,685 | Pegasus Xsum Returning Tokens Not In Source Text | {
"login": "1337-Pete",
"id": 43712596,
"node_id": "MDQ6VXNlcjQzNzEyNTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/43712596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1337-Pete",
"html_url": "https://github.com/1337-Pete",
"followers_url": "https://api.github.com/users/1337-Pete/followers",
"following_url": "https://api.github.com/users/1337-Pete/following{/other_user}",
"gists_url": "https://api.github.com/users/1337-Pete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1337-Pete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1337-Pete/subscriptions",
"organizations_url": "https://api.github.com/users/1337-Pete/orgs",
"repos_url": "https://api.github.com/users/1337-Pete/repos",
"events_url": "https://api.github.com/users/1337-Pete/events{/privacy}",
"received_events_url": "https://api.github.com/users/1337-Pete/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not an expert in summarization, but abstractive text summarization does not extract sequences/tokens from the initial text to produce a summary. That would be extractive text summarization. Abstractive text summarization instead can be done with rephrasing, as it seems to be the case here.\r\n\r\nOn a second note, I believe the Pegasus checkpoints were trained on very long sequences, so I'm not entirely sure how it would deal with smaller sequences as the one you used here.\r\n\r\nOn a third note, we try to keep the github issues reserved for issues/feature requests; you would have more luck asking this over on the [forum](https://discuss.huggingface.co).\r\n\r\n@patrickvonplaten or @patil-suraj can chime in if I'm wrong.",
"The hyperparameters seem very extreme to me... also `temperature=1` does not do anything and `length_penalty=5` is very high - also note that a length_penalty > 1 actually incentivizes longer sequences. @sshleifer 's model already has good hyper-parameters set as default values that you can see here:\r\nhttps://huggingface.co/sshleifer/distill-pegasus-xsum-16-8/blob/main/config.json\r\n\r\nIf you just use those, *e.g.*:\r\n```python\r\ntranslated = model_pegasus_distill_xsum_16_8.generate(**batch)\r\n```\r\n\r\nyou get this summary:\r\n```\r\nEuropean shares fell sharply on Wednesday as investors remained cautious ahead of a speech by France's president later in the day.\r\n```\r\n\r\nYou can try it yourself here:\r\nhttps://huggingface.co/sshleifer/distill-pegasus-xsum-16-8?text=German+shares+suffered+their+weakest+day+since+early+June+on+Wednesday+as+the+government+agreed+on+an+emergency+lockdown+to+combat+surging+COVID-19+cases%2C+with+other+European+markets+following+suit+on+fears+of+more+curbs+around+the+continent.+The+German+DAX+sank+as+much+as+5%25+before+cutting+some+losses+to+close+down+4.2%25+at+its+lowest+in+five+months.+The+precise+measures+were+still+subject+to+negotiation%2C+with+sources+saying+the+government+had+agreed+to+shut+bars+and+restaurants+from+Nov.+2.+The+pan-European+STOXX+600+index+fell+3%25+in+its+sharpest+one-day+drop+in+five+weeks.+France%27s+main+index+dropped+3.4%25+ahead+of+a+televised+address+by+President+Emmanuel+Macron+at+8%3A00+pm+when+he+is+expected+to+issue+stay-at-home+orders.\r\n```\r\n\r\nMy conclusion would be that it's just the hyperparameters that are badly chosen - not sure if @sshleifer has something to add...",
"- Lysandre is correct about abstractive vs. extractive.\r\n- Hallucination is a known issue with Neural Text Generation. It will happen more often if you generate summaries that are more than ~30% the length of the input document (which your length_penalty and max_length encourage).\r\n- `\"sshleifer/distill-pegasus-xsum-16-4\"` is better and faster. See Table 6 of the [best paper in AI history](https://arxiv.org/pdf/2010.13002.pdf) ;). \r\n- I would set `num_beams=4` if I cared at all about speed.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | I'm currently using the `sshleifer/distill-pegasus-xsum-16-8` model to perform abstractive text summarization; I've found this particular model to be the most useful for my desired application. However, when attempting to summarize inputted source text, the output contains tokens that appear nowhere in the source text. I suspect Pegasus is returning tokens from the dataset it was trained on. That said, is fine-tuning needed? Should hyperparameter tweaking solve this?
I wonder if PEGASUS + GAN could help teach the model to abstract from tokens in the input text?
**_Here's an example_**
**Source Text:**
German shares suffered their weakest day since early June on Wednesday as the government agreed on an emergency lockdown to combat surging COVID-19 cases, with other European markets following suit on fears of more curbs around the continent. The German DAX sank as much as 5% before cutting some losses to close down 4.2% at its lowest in five months. The precise measures were still subject to negotiation, with sources saying the government had agreed to shut bars and restaurants from Nov. 2. The pan-European STOXX 600 index fell 3% in its sharpest one-day drop in five weeks. France's main index dropped 3.4% ahead of a televised address by President Emmanuel Macron at 8:00 pm when he is expected to issue stay-at-home orders.
```python
# XSUM 16-8
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "sshleifer/distill-pegasus-xsum-16-8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_pegasus_distill_xsum_16_8 = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device)
translated = model_pegasus_distill_xsum_16_8.generate(**batch,num_beams=9, num_return_sequences=3, temperature=1, length_penalty=5, max_length = 256, min_length=0)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
```
**Output Text:**
Shares in Europe have fallen sharply after the German government agreed to shut down bars and restaurants in a bid to curb the spread of carbon monoxide (CO) in the country's capital, Berlin. The pan-European STOXX 600 index fell 3% in its sharpest one-day drop in five weeks, while the FTSE 100 index closed down 3.7% in its sharpest one-day fall in five weeks.
From the outputted text, one can see that nowhere in the input text was `carbon monoxide (CO)` or `Berlin` or `FTSE 100` mentioned.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8685/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8684/comments | https://api.github.com/repos/huggingface/transformers/issues/8684/events | https://github.com/huggingface/transformers/issues/8684 | 747,408,851 | MDU6SXNzdWU3NDc0MDg4NTE= | 8,684 | Bert variants pretrained on Wikipedia are easily downloaded. Are the optimizers from the pretraining also available? | {
"login": "pkadambi",
"id": 11398171,
"node_id": "MDQ6VXNlcjExMzk4MTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/11398171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkadambi",
"html_url": "https://github.com/pkadambi",
"followers_url": "https://api.github.com/users/pkadambi/followers",
"following_url": "https://api.github.com/users/pkadambi/following{/other_user}",
"gists_url": "https://api.github.com/users/pkadambi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkadambi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkadambi/subscriptions",
"organizations_url": "https://api.github.com/users/pkadambi/orgs",
"repos_url": "https://api.github.com/users/pkadambi/repos",
"events_url": "https://api.github.com/users/pkadambi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkadambi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | Is there a pretrained optimizer checkpoint available that can be loaded in the same way as a pretrained model?
I noticed that the pretrained models trained on Wikipedia are readily available (e.g., a pretrained DistilBERT can be loaded with: <br/>`model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')`)
However, I cannot find the optimizer state from the end of the pretraining run on Wikipedia. There is no `checkpoint['optimizer']`.
For my task, looking at optimizer internals (momentum, second moment, etc.) from the end of pretraining on Wikipedia may be more useful to me than looking at optimizer internals from training on a downstream task (e.g., GLUE). Does such a checkpoint exist (either for TF or PyTorch)?
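If such a checkpoint were published, I would expect to be able to load it with something like the sketch below (the file name and the dictionary key are purely my assumption, since no such file seems to be distributed):
```python
import torch
from transformers import AdamW, DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = AdamW(model.parameters(), lr=5e-5)

# hypothetical file holding the optimizer state from the end of pretraining
checkpoint = torch.load('distilbert_pretraining_optimizer.pt', map_location='cpu')
optimizer.load_state_dict(checkpoint['optimizer'])

# the per-parameter internals (first/second moments) would then be inspectable
for param_state in optimizer.state.values():
    print(param_state['exp_avg'].shape, param_state['exp_avg_sq'].shape)
    break
```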
Environment info (not really relevant)
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-123-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8683/comments | https://api.github.com/repos/huggingface/transformers/issues/8683/events | https://github.com/huggingface/transformers/issues/8683 | 747,327,400 | MDU6SXNzdWU3NDczMjc0MDA= | 8,683 | use the torchscript in a gpt model is slower than origin one. | {
"login": "lonelydancer",
"id": 548443,
"node_id": "MDQ6VXNlcjU0ODQ0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lonelydancer",
"html_url": "https://github.com/lonelydancer",
"followers_url": "https://api.github.com/users/lonelydancer/followers",
"following_url": "https://api.github.com/users/lonelydancer/following{/other_user}",
"gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions",
"organizations_url": "https://api.github.com/users/lonelydancer/orgs",
"repos_url": "https://api.github.com/users/lonelydancer/repos",
"events_url": "https://api.github.com/users/lonelydancer/events{/privacy}",
"received_events_url": "https://api.github.com/users/lonelydancer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! TorchScript requires tracing the model beforehand, which slows down the first forward pass through the model. Could you print the timing of the iterations following the initial one?",
"Hi, @LysandreJik \r\ndo you mean in the first iteration \"loaded_model(input_ids)\" will slow?\r\nI already traced model before that.\r\n\r\ntraced_model = torch.jit.trace(model, input_ids)\r\ntorch.jit.save(traced_model, 'trace_gpt2.pt')\r\n\r\nloaded_model = torch.jit.load('trace_gpt2.pt').to('cuda')\r\nloaded_model.eval()\r\n\r\n#print (loaded_model)\r\nstart = time.time()\r\nfor i in range(100):\r\n with torch.no_grad():\r\n loaded_model(input_ids)\r\nend = time.time()\r\nprint ('traced model',(end-start))\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:2.1.1
- Platform:Linux version 4.15.0-76-generic (buildd@lcy01-amd64-029) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1))
- Python version:3.6.9
- PyTorch version (GPU?): 1.6.0+cu101
- Using GPU in script?:No
-GPU-tesla k80
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
When I use TorchScript to speed up inference of my GPT-2 model, I find that the traced model is slower than the original one:
traced model 0.6959998607635498
origin model 0.3259282112121582
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task : gpt2 LM
## To reproduce
Steps to reproduce the behavior:
follow the code below
https://github.com/lonelydancer/algorithm/blob/master/test.py
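For reference, a condensed sketch of the kind of comparison the script makes (model size, prompt, and iteration counts here are placeholders):
```python
import time

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2', torchscript=True).eval().to('cuda')
input_ids = tokenizer.encode('Hello, my dog is cute', return_tensors='pt').to('cuda')

traced_model = torch.jit.trace(model, input_ids)

def run(m, n=100):
    # synchronize around the loop so GPU work is fully counted
    torch.cuda.synchronize()
    start = time.time()
    with torch.no_grad():
        for _ in range(n):
            m(input_ids)
    torch.cuda.synchronize()
    return time.time() - start

run(traced_model, n=10)  # warm-up before timing
print('traced model', run(traced_model))
print('original model', run(model))
```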
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The traced model should be at least as fast as the original one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8683/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8682/comments | https://api.github.com/repos/huggingface/transformers/issues/8682/events | https://github.com/huggingface/transformers/pull/8682 | 747,298,062 | MDExOlB1bGxSZXF1ZXN0NTI0NTQ1MDIy | 8,682 | create README.md | {
"login": "bino282",
"id": 17800187,
"node_id": "MDQ6VXNlcjE3ODAwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bino282",
"html_url": "https://github.com/bino282",
"followers_url": "https://api.github.com/users/bino282/followers",
"following_url": "https://api.github.com/users/bino282/following{/other_user}",
"gists_url": "https://api.github.com/users/bino282/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bino282/subscriptions",
"organizations_url": "https://api.github.com/users/bino282/orgs",
"repos_url": "https://api.github.com/users/bino282/repos",
"events_url": "https://api.github.com/users/bino282/events{/privacy}",
"received_events_url": "https://api.github.com/users/bino282/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8682",
"html_url": "https://github.com/huggingface/transformers/pull/8682",
"diff_url": "https://github.com/huggingface/transformers/pull/8682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8682.patch",
"merged_at": 1606128714000
} |
https://api.github.com/repos/huggingface/transformers/issues/8681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8681/comments | https://api.github.com/repos/huggingface/transformers/issues/8681/events | https://github.com/huggingface/transformers/pull/8681 | 747,297,357 | MDExOlB1bGxSZXF1ZXN0NTI0NTQ0NDM2 | 8,681 | Create README.txt | {
"login": "bino282",
"id": 17800187,
"node_id": "MDQ6VXNlcjE3ODAwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bino282",
"html_url": "https://github.com/bino282",
"followers_url": "https://api.github.com/users/bino282/followers",
"following_url": "https://api.github.com/users/bino282/following{/other_user}",
"gists_url": "https://api.github.com/users/bino282/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bino282/subscriptions",
"organizations_url": "https://api.github.com/users/bino282/orgs",
"repos_url": "https://api.github.com/users/bino282/repos",
"events_url": "https://api.github.com/users/bino282/events{/privacy}",
"received_events_url": "https://api.github.com/users/bino282/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Can you please add metadata as in https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card?\r\n\r\nThank you!",
"Closing this one as duplicate was already merged!\r\n\r\nFor context please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755"
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8681",
"html_url": "https://github.com/huggingface/transformers/pull/8681",
"diff_url": "https://github.com/huggingface/transformers/pull/8681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8681.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8680/comments | https://api.github.com/repos/huggingface/transformers/issues/8680/events | https://github.com/huggingface/transformers/issues/8680 | 747,265,998 | MDU6SXNzdWU3NDcyNjU5OTg= | 8,680 | Result changes if we don't pass attension mask in TFDistilbert model on SQUADv1 dataset | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"`attention_mask` is an optional argument, but that doesn't mean that it should not be passed to the function. If `attention_mask` is `None` then it is initialized to attend all tokens (all 1's in the `attention_mask` tensor), which is incorrect if the input is a batch that includes padding tokens. => It's better to simply pass the `attention_mask` to the forward function.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: latest
- Platform: Colab
- Python version:
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
I used the code below to get the model:
```
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
model = TFAutoModelForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad', return_dict=True)
```
tokenizers: @mfuntowicz
examples/seq2seq: @patil-suraj
tensorflow: @jplu
## Information
The model I am using is the pretrained **TFDistilBERT** model.
The problem arises when using:
* my own modified scripts:
This is the Notebook
[Colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/TFLiteExperimentsQALatest.ipynb)
The tasks I am working on is:
* an official SQUaD task
## To reproduce
Steps to reproduce the behavior: [Colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/TFLiteExperimentsQALatest.ipynb)
## Expected behavior
The performance should be the same, because the attention mask is an optional argument; if we don't pass it, it will be created internally.
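Concretely, these are the two evaluation calls being compared (a simplified sketch; the question/context strings and variable names are mine):
```python
question = "Who wrote the play?"
context = "The play was written by William Shakespeare in the early 1600s."

enc = tokenizer(question, context, padding='max_length', truncation=True,
                max_length=384, return_tensors='tf')

# with the attention mask: padding tokens are masked out
out_with_mask = model(input_ids=enc['input_ids'], attention_mask=enc['attention_mask'])

# without the attention mask: it defaults to all ones, so padding is attended to
out_without_mask = model(input_ids=enc['input_ids'])
```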
With Attention Mask:
```
OrderedDict([('exact', 77.71050141912077),
('f1', 85.5370981182013),
('total', 10570)])
```
Without Attention Mask:
```
OrderedDict([('exact', 72.82876064333927),
('f1', 80.71521545953475),
('total', 10570)])
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8680/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8679/comments | https://api.github.com/repos/huggingface/transformers/issues/8679/events | https://github.com/huggingface/transformers/pull/8679 | 747,195,294 | MDExOlB1bGxSZXF1ZXN0NTI0NDU3ODY0 | 8,679 | gpt2 and t5 model parallelism with tests | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
}
] | closed | false | null | [] | [] | 1,605 | 1,609 | 1,605 | CONTRIBUTOR | null | # Model Parallelism for GPT2 and T5
Note: version compatible with v4
Adds two new methods to the T5 and GPT-2 models to enable generating and fine-tuning with model parallelism. This feature is most applicable to `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data-parallelism behavior and the related batch-size increases, which would negate model parallelism. Note that nearly 64 GB of GPU memory (4 Tesla V100s) is needed to fine-tune `gpt2-xl` at 1024 tokens.
It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances.
Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can.
The methods are:
- `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map
- `deparallelize`, which will move the model back to cpu
# Example
```
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8],
1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],
3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}
model.parallelize(device_map) # Distributes the model's attention blocks across several devices
model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory
```
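For GPU setups other than the one above, a quick way to generate a map is something like the sketch below (purely illustrative and not part of this PR; in practice you may want to give GPU 0 fewer blocks, as in the example above):
```
import torch

def make_device_map(n_blocks, n_gpus):
    blocks_per_gpu = -(-n_blocks // n_gpus)  # ceiling division
    return {gpu: list(range(gpu * blocks_per_gpu, min((gpu + 1) * blocks_per_gpu, n_blocks)))
            for gpu in range(n_gpus)}

device_map = make_device_map(model.config.n_layer, torch.cuda.device_count())
model.parallelize(device_map)
```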
## Reviewers
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8679/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8679",
"html_url": "https://github.com/huggingface/transformers/pull/8679",
"diff_url": "https://github.com/huggingface/transformers/pull/8679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8679.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8678/comments | https://api.github.com/repos/huggingface/transformers/issues/8678/events | https://github.com/huggingface/transformers/pull/8678 | 747,172,185 | MDExOlB1bGxSZXF1ZXN0NTI0NDM4MDg4 | 8,678 | Update the bibtex with EMNLP demo | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8678/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8678",
"html_url": "https://github.com/huggingface/transformers/pull/8678",
"diff_url": "https://github.com/huggingface/transformers/pull/8678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8678.patch",
"merged_at": 1605849993000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8677/comments | https://api.github.com/repos/huggingface/transformers/issues/8677/events | https://github.com/huggingface/transformers/pull/8677 | 747,170,906 | MDExOlB1bGxSZXF1ZXN0NTI0NDM2OTg3 | 8,677 | Model parallel v4 | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # Model Parallelism for GPT2 and T5
Note: this is a clean pull request for [PR # 7772](https://github.com/huggingface/transformers/pull/7772) that uses code from transformers v4.0.0.
Adds two new methods to the `GPT2LMHead` and `GPT2Model` classes to enable generating and fine-tuning with model parallelism. This feature is most applicable to `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data-parallelism behavior and the related batch-size increases, which would negate model parallelism. Note that nearly 64 GB of GPU memory (4 Tesla V100s) is needed to fine-tune `gpt2-xl` at 1024 tokens.
It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances.
Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can.
The methods are:
- `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map
- `deparallelize`, which will move the model back to cpu
# Example
```
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8],
1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],
3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}
model.parallelize(device_map) # Distributes the model's attention blocks across several devices
model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory
```
## Reviewers
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8677",
"html_url": "https://github.com/huggingface/transformers/pull/8677",
"diff_url": "https://github.com/huggingface/transformers/pull/8677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8677.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8676/comments | https://api.github.com/repos/huggingface/transformers/issues/8676/events | https://github.com/huggingface/transformers/pull/8676 | 747,099,832 | MDExOlB1bGxSZXF1ZXN0NTI0Mzc1NDMy | 8,676 | 2 typos in modeling_rag.py | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you run `make style` on your branch so that the code quality check passes? Thanks!",
"Hi guys, I only have mobile phone until Dec. 1. I will do it as soon as I can access PC.",
"@lhoestq @LysandreJik done applying style. Sorry for late!"
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
Fix 2 typos in `modeling_rag.py`
`from_encoder_generator_configs` --> `from_question_encoder_generator_configs`
## Who can review?
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8676/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8676",
"html_url": "https://github.com/huggingface/transformers/pull/8676",
"diff_url": "https://github.com/huggingface/transformers/pull/8676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8676.patch",
"merged_at": 1606835809000
} |
https://api.github.com/repos/huggingface/transformers/issues/8675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8675/comments | https://api.github.com/repos/huggingface/transformers/issues/8675/events | https://github.com/huggingface/transformers/pull/8675 | 747,086,386 | MDExOlB1bGxSZXF1ZXN0NTI0MzY1MTQy | 8,675 | [WIP] Rewrite ProphetNet to adapt converting ONNX friendly | {
"login": "jiafatom",
"id": 30608893,
"node_id": "MDQ6VXNlcjMwNjA4ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/30608893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiafatom",
"html_url": "https://github.com/jiafatom",
"followers_url": "https://api.github.com/users/jiafatom/followers",
"following_url": "https://api.github.com/users/jiafatom/following{/other_user}",
"gists_url": "https://api.github.com/users/jiafatom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiafatom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiafatom/subscriptions",
"organizations_url": "https://api.github.com/users/jiafatom/orgs",
"repos_url": "https://api.github.com/users/jiafatom/repos",
"events_url": "https://api.github.com/users/jiafatom/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiafatom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@mfuntowicz - could you take a look maybe? :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
We want to convert ProphetNet (a PyTorch model) to ONNX, but this requires some source code changes. The current code cannot be converted to ONNX because:
(1) The current PyTorch model generates a very large TorchScript IR graph (38k for the decoder). We rewrite the way it generates the bias: ~~Let's use numpy and then formulate the torch tensor finally.~~
The numpy approach can help convert to ONNX via tracing, but we prefer using scripting here.
We add a script decorator so that the model can be converted via scripting. This reduces the IR graph to 5k for the decoder.
(2) `torch.new` generates constant dimensions for tensors in the IR graph, which is not suitable if we want dynamic input axes for the converter. So we use `torch.full` instead.
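A rough sketch of the difference for point (2) (illustrative only, not the actual ProphetNet bias code; `ngram` and the `-inf` fill value are placeholders):
```python
import torch

def make_bias_with_new(hidden_states: torch.Tensor, ngram: int) -> torch.Tensor:
    seq_len = hidden_states.size(1)
    # tensor.new_full keeps dtype/device, but per the description above this style
    # ends up with constant dimensions in the exported IR graph
    return hidden_states.new_full((ngram, seq_len, seq_len), float("-inf"))

def make_bias_with_full(hidden_states: torch.Tensor, ngram: int) -> torch.Tensor:
    seq_len = hidden_states.size(1)
    # the style this PR switches to: torch.full with explicit dtype/device
    return torch.full(
        (ngram, seq_len, seq_len),
        float("-inf"),
        dtype=hidden_states.dtype,
        device=hidden_states.device,
    )
```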
This PR does not (should not) change any model behavior.
Fixes # (issue)
After this PR, the model can be converted to ONNX via scripting.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@qiweizhen @patrickvonplaten @Zhylkaaa | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8675/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8675",
"html_url": "https://github.com/huggingface/transformers/pull/8675",
"diff_url": "https://github.com/huggingface/transformers/pull/8675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8675.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8674/comments | https://api.github.com/repos/huggingface/transformers/issues/8674/events | https://github.com/huggingface/transformers/issues/8674 | 747,009,747 | MDU6SXNzdWU3NDcwMDk3NDc= | 8,674 | Issues Fine-tuning XLNET | {
"login": "AdaUchendu",
"id": 32556160,
"node_id": "MDQ6VXNlcjMyNTU2MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/32556160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdaUchendu",
"html_url": "https://github.com/AdaUchendu",
"followers_url": "https://api.github.com/users/AdaUchendu/followers",
"following_url": "https://api.github.com/users/AdaUchendu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdaUchendu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdaUchendu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdaUchendu/subscriptions",
"organizations_url": "https://api.github.com/users/AdaUchendu/orgs",
"repos_url": "https://api.github.com/users/AdaUchendu/repos",
"events_url": "https://api.github.com/users/AdaUchendu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdaUchendu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like your using causual language modeling (next word prediction) - _File \"run_clm.py\", line 300, in main_, but xlnet does not use that. It uses permutation language modeling.\r\n\r\nHave you tried with the [new scripts](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling), I believe this will solve it 👍 ",
"Thank you, Tim. So I know XLNET and Transformer-XL are fine-tuned the same\r\nway and tried the *run_plm.py* like you suggested and got the error below:\r\n\r\n[INFO|tokenization_utils_base.py:1650] 2020-11-20 23:00:33,440 >> loading\r\nfile https://huggingface.co/transfo-xl-wt103/resolve/main/vocab.pkl from\r\ncache at\r\n/root/.cache/torch/transformers/6860d92833eb9d2a42cf185e974ca967fbf4cd58fa8d3d9298e56b9ef7ff8d5c.56c8ef92e693414ef2313bde4ba3679a404de1edbcd5a5780def3971f9706850\r\n[INFO|modeling_utils.py:940] 2020-11-20 23:00:34,096 >> loading weights\r\nfile https://huggingface.co/transfo-xl-wt103/resolve/main/pytorch_model.bin\r\nfrom cache at\r\n/root/.cache/torch/transformers/891af5f0c8372327a961a768d4ee40b7ca95c428f9384c534e73b9b655c75468.923bd8e0844a782c35f009eddd08a3600739804fbe13bd234f592f36230ab8a9\r\nTraceback (most recent call last): File \"run_plm.py\", line 382, in <module>\r\nmain() File \"run_plm.py\", line 244, in main cache_dir=model_args.cache_dir,\r\nFile\r\n\"/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py\",\r\nline 947, in from_pretrained model = cls(config, *model_args,\r\n**model_kwargs) File\r\n\"/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py\",\r\nline 1294, in __init__ self.transformer = XLNetModel(config) File\r\n\"/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py\",\r\nline 940, in __init__ self.reuse_len = config.reuse_len AttributeError:\r\n'TransfoXLConfig' object has no attribute 'reuse_len'\r\n\r\nOn Fri, Nov 20, 2020 at 1:52 PM Tim Isbister <[email protected]>\r\nwrote:\r\n\r\n> It looks like your using causual language modeling (next word prediction)\r\n> - *run_clm.py*, but xlnet does not use that. It uses permutation language\r\n> modeling.\r\n>\r\n> Have you tried with the new scripts\r\n> <https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling>,\r\n> I believe this will solve it 👍\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/8674#issuecomment-731348230>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AHYMJAF5X3H4GW7WFSAQEOLSQ23F3ANCNFSM4T4CBVKQ>\r\n> .\r\n>\r\n\r\n",
"Hmm strange, well actually I now tried running the provided example that I suggested to you. But also failing with `IndexError: index out of bounds` as you got in the first attempt. \r\n\r\nEdit: Looks like we have problems with loading the data, if I hardcoded a dataset to the `run_plm.py` \r\n`datasets = load_dataset('wikitext', 'wikitext-103-raw-v1’)` it works. \r\n\r\n[Provided example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling): \r\n\r\n```\r\npython run_plm.py \\\r\n --model_name_or_path=xlnet-base-cased \\\r\n --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --output_dir /tmp/test-plm\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_plm.py\", line 382, in <module>\r\n main()\r\n File \"run_plm.py\", line 321, in main\r\n tokenized_datasets = tokenized_datasets.map(\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 286, in map\r\n {\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 287, in <dictcomp>\r\n k: dataset.map(\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1243, in map\r\n return self._map_single(\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1528, in _map_single\r\n writer.write_batch(batch)\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 278, in write_batch\r\n pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n File \"pyarrow/table.pxi\", line 1474, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow/array.pxi\", line 322, in pyarrow.lib.asarray\r\n File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 100, in __arrow_array__\r\n if trying_type and out[0].as_py() != self.data[0]:\r\n File \"pyarrow/array.pxi\", line 1058, in pyarrow.lib.Array.__getitem__\r\n File \"pyarrow/array.pxi\", line 540, in pyarrow.lib._normalize_index\r\nIndexError: index out of bounds\r\n```",
"Thanks, Tim. Do you have any suggestions for how I can load my own train\r\nand validation datasets? And how will it work with the following code\r\nbelow. I am now using the current transformer version.\r\nThanks again, Tim.\r\n\r\n\r\npython run_plm.py \\\r\n --model_name_or_path=transfo-xl-wt103 \\\r\n --train_file='/content/drive/My Drive/finetuned_models/train.txt' \\\r\n --validation_file='/content/drive/My Drive/finetuned_models/valid.txt' \\\r\n --save_total_limit=5 \\\r\n --num_train_epochs=1.0 \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_gpu_train_batch_size=2 \\\r\n --per_gpu_eval_batch_size=2 \\\r\n\r\nOn Fri, Nov 20, 2020 at 7:23 PM Tim Isbister <[email protected]>\r\nwrote:\r\n\r\n> Hmm strange, well actually I now tried running the provided example that I\r\n> suggested to you. But also failing with IndexError: index out of bounds\r\n> as you got in the first attempt. I have the latest versions of all the\r\n> libraries running on Ubuntu 18.04 LTS with Titan RTX.\r\n>\r\n> Provided example\r\n> <https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling>\r\n> :\r\n>\r\n> python run_plm.py \\\r\n> --model_name_or_path=xlnet-base-cased \\\r\n> --dataset_name wikitext \\\r\n> --dataset_config_name wikitext-2-raw-v1 \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --output_dir /tmp/test-plm\r\n>\r\n> Traceback (most recent call last):\r\n> File \"run_plm.py\", line 382, in <module>\r\n> main()\r\n> File \"run_plm.py\", line 321, in main\r\n> tokenized_datasets = tokenized_datasets.map(\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 286, in map\r\n> {\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 287, in <dictcomp>\r\n> k: dataset.map(\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1243, in map\r\n> return self._map_single(\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 157, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py\", line 163, in wrapper\r\n> out = func(self, *args, **kwargs)\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1528, in _map_single\r\n> writer.write_batch(batch)\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 278, in write_batch\r\n> pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n> File \"pyarrow/table.pxi\", line 1474, in pyarrow.lib.Table.from_pydict\r\n> File \"pyarrow/array.pxi\", line 322, in pyarrow.lib.asarray\r\n> File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n> File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n> File \"/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 100, in __arrow_array__\r\n> if trying_type and out[0].as_py() != self.data[0]:\r\n> File \"pyarrow/array.pxi\", line 1058, in pyarrow.lib.Array.__getitem__\r\n> File \"pyarrow/array.pxi\", line 540, in pyarrow.lib._normalize_index\r\n> IndexError: index out of bounds\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> 
<https://github.com/huggingface/transformers/issues/8674#issuecomment-731472522>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AHYMJABV347A2NH2SK6NU4LSQ4B6HANCNFSM4T4CBVKQ>\r\n> .\r\n>",
"Maybe @sgugger has an idea!",
"The problem is that the script uses the tokenizer max length when no `max_seq_length` is passed, and that the XLNet tokenizer has a ridiculously high maximum sequence length. I have suggested a fix in #8738.\r\n\r\nWhile waiting for this PR to be merged, a temporary fix is to just add --max_seq_length 512 (or any value you'd like) to your command.",
"I tried using the max_seq_length argument and set it to 400 and got the\r\nfollowing error:\r\n\r\nTraceback (most recent call last):\r\n File \"run_plm.py\", line 379, in <module>\r\n main()\r\n File \"run_plm.py\", line 244, in main\r\n cache_dir=model_args.cache_dir,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py\",\r\nline 947, in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py\",\r\nline 1294, in __init__\r\n self.transformer = XLNetModel(config)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py\",\r\nline 940, in __init__\r\n self.reuse_len = config.reuse_len\r\nAttributeError: 'TransfoXLConfig' object has no attribute 'reuse_len'\r\n\r\n\r\nOn Mon, Nov 23, 2020 at 4:02 PM Lysandre Debut <[email protected]>\r\nwrote:\r\n\r\n> Closed #8674 <https://github.com/huggingface/transformers/issues/8674>\r\n> via #8738 <https://github.com/huggingface/transformers/pull/8738>.\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/8674#event-4029627504>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AHYMJADWOOMGPRQUSIUPRBDSRLEXRANCNFSM4T4CBVKQ>\r\n> .\r\n>",
"The script only works for XLNet models, you will need to tweak it for other models",
"I see. Thank you, Sylvain.\r\n\r\nOn Tue, Nov 24, 2020 at 10:46 AM Sylvain Gugger <[email protected]>\r\nwrote:\r\n\r\n> The script only works for XLNet models, you will need to tweak it for\r\n> other models\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/8674#issuecomment-733059560>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AHYMJADG3HM6UCRSXGS56ULSRPIM3ANCNFSM4T4CBVKQ>\r\n> .\r\n>\r\n",
"Just to confirm that I am doing the right thing, what is the correct script\r\nin the language-modeling folder for fine-tuning Transformer-XL and CTRL? I\r\nam using run_clm.py for both of them currently but it keeps returning the\r\nsame error message for both. See error below:\r\n\r\nFor Transformer-XL:\r\n\r\n[INFO|modeling_utils.py:1065] 2020-11-24 15:55:30,982 >> All the\r\nweights of TransfoXLLMHeadModel were initialized from the model\r\ncheckpoint at transfo-xl-wt103.\r\nIf your task is similar to the task the model of the checkpoint was\r\ntrained on, you can already use TransfoXLLMHeadModel for predictions\r\nwithout further training.\r\n 2%|▏ | 6/311 [00:13<11:44, 2.31s/ba]Traceback (most recent\r\ncall last):\r\n File \"run_clm.py\", line 351, in <module>\r\n main()\r\n File \"run_clm.py\", line 261, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py\",\r\nline 303, in map\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py\",\r\nline 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1259, in map\r\n update_data=update_data,\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py\",\r\nline 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1520, in _map_single\r\n batch, indices, check_same_num_examples=len(self.list_indexes()) >\r\n0, offset=offset\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1438, in apply_function_on_filtered_inputs\r\n function(*fn_args, effective_indices, **fn_kwargs) if with_indices\r\nelse function(*fn_args, **fn_kwargs)\r\n File \"run_clm.py\", line 254, in tokenize_function\r\n return tokenizer(examples[text_column_name])\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2214, in __call__\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2399, in batch_encode_plus\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\",\r\nline 567, in _batch_encode_plus\r\n verbose=verbose,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\",\r\nline 630, in _batch_prepare_for_model\r\n return_attention_mask=return_attention_mask,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2531, in pad\r\n f\"type of {first_element} unknown: {type(first_element)}. \"\r\nValueError: type of [] unknown: <class 'list'>. 
Should be one of a\r\npython, numpy, pytorch or tensorflow object.\r\n 2%|▏ | 6/311 [00:16<13:48, 2.72s/ba]\r\n\r\n\r\nFor CTRL:\r\n\r\n[INFO|modeling_utils.py:1065] 2020-11-24 15:52:33,705 >> All the\r\nweights of CTRLLMHeadModel were initialized from the model checkpoint\r\nat ctrl.\r\nIf your task is similar to the task the model of the checkpoint was\r\ntrained on, you can already use CTRLLMHeadModel for predictions\r\nwithout further training.\r\n 0%| | 0/311 [00:00<?,\r\n?ba/s][WARNING|tokenization_utils_base.py:2736] 2020-11-24\r\n15:52:39,268 >> Token indices sequence length is longer than the\r\nspecified maximum sequence length for this model (293 > 256). Running\r\nthis sequence through the model will result in indexing errors\r\n 2%|▏ | 6/311 [00:05<04:21, 1.17ba/s]Traceback (most recent\r\ncall last):\r\n File \"run_clm.py\", line 351, in <module>\r\n main()\r\n File \"run_clm.py\", line 261, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py\",\r\nline 303, in map\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py\",\r\nline 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1259, in map\r\n update_data=update_data,\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py\",\r\nline 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1520, in _map_single\r\n batch, indices, check_same_num_examples=len(self.list_indexes()) >\r\n0, offset=offset\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\",\r\nline 1438, in apply_function_on_filtered_inputs\r\n function(*fn_args, effective_indices, **fn_kwargs) if with_indices\r\nelse function(*fn_args, **fn_kwargs)\r\n File \"run_clm.py\", line 254, in tokenize_function\r\n return tokenizer(examples[text_column_name])\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2214, in __call__\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2399, in batch_encode_plus\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\",\r\nline 567, in _batch_encode_plus\r\n verbose=verbose,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\",\r\nline 630, in _batch_prepare_for_model\r\n return_attention_mask=return_attention_mask,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\",\r\nline 2531, in pad\r\n f\"type of {first_element} unknown: {type(first_element)}. \"\r\nValueError: type of [] unknown: <class 'list'>. Should be one of a\r\npython, numpy, pytorch or tensorflow object.\r\n 2%|▏ | 6/311 [00:08<07:26, 1.46s/ba]\r\n\r\n\r\n\r\n\r\nOn Tue, Nov 24, 2020 at 10:50 AM Adaku Uchendu <[email protected]> wrote:\r\n\r\n> I see. 
Thank you, Sylvain.\r\n>\r\n> On Tue, Nov 24, 2020 at 10:46 AM Sylvain Gugger <[email protected]>\r\n> wrote:\r\n>\r\n>> The script only works for XLNet models, you will need to tweak it for\r\n>> other models\r\n>>\r\n>> —\r\n>> You are receiving this because you authored the thread.\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/transformers/issues/8674#issuecomment-733059560>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/AHYMJADG3HM6UCRSXGS56ULSRPIM3ANCNFSM4T4CBVKQ>\r\n>> .\r\n>>\r\n>\r\n>\r\n> --\r\n> *Adaku Uchendu*\r\n>\r\n> *McNair Scholar*\r\n> *Mathematics major*\r\n> *Statistic minor *\r\n> *Math Lab Tutor*\r\n> *Pre-Calculus LA*\r\n> *University of Maryland, Baltimore County *\r\n> *Class of 2018*\r\n>",
"It looks like it comes from a bug in the slow tokenizers that can't handle an empty sequence at the beginning. We're looking into it. ",
"Thank you, Sylvain.\r\n\r\nOn Tue, Nov 24, 2020 at 11:50 AM Sylvain Gugger <[email protected]>\r\nwrote:\r\n\r\n> It looks like it comes from a bug in the slow tokenizers that can't handle\r\n> an empty sequence at the beginning. We're looking into it.\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/8674#issuecomment-733103967>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AHYMJAHQJ4FQ24C6JWITIO3SRPP53ANCNFSM4T4CBVKQ>\r\n> .\r\n>",
"Hi Sylvain, \r\n\r\nI just wanted to know if the slow tokenizer bug is no longer a problem?\r\nThank you ",
"It seems to be solved on master, I didn't try on the v4 release but is might also be solved there too.",
"Hi Sylvain,\r\n\r\nI just tried to re-run my code with the recent changes and I still got the same error. Just to make sure I am doing the correct thing, I have attached my code below. And I am using the transformer version 4.0 with the current github transformer repo.\r\nThank you.\r\n\r\n\r\ncd language-modeling/\r\npython run_clm.py \\\r\n --model_type=transfo-xl \\\r\n --model_name_or_path=transfo-xl-wt103 \\\r\n --train_file='/content/drive/My Drive/finetuned_models/train.txt' \\\r\n --validation_file='/content/drive/My Drive/finetuned_models/valid.txt' \\\r\n --save_total_limit=5 \\\r\n --num_train_epochs=1.0 \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_gpu_train_batch_size=2 \\\r\n --per_gpu_eval_batch_size=2 \\\r\n --output_dir='/content/drive/My Drive/finetuned_models/transformer_xl'",
"Indeed, the PR mentioned above will fix that specific issue.\r\n\r\nNote that `TransfoXLLMHeadModel` is not supported by `Trainer` anyway as it does not return the reduced loss. Once the PR is merged it should work with CTRL however."
] | 1,605 | 1,607 | 1,606 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Google colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Text Generation: @patrickvonplaten @TevenLeScao
TransfoXL/XLNet: @TevenLeScao
-->
## Information
Model I am using: XLNet
The problem arises when using:
* [ ] the official example scripts: The old version of transformers. The script is run_language_modeling.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] Fine-tuning
## To reproduce
Steps to reproduce the behavior:
1. !git clone https://github.com/huggingface/transformers
import os
os.chdir('/content/transformers')
!git checkout b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8
!pip install .
!pip install -r ./examples/requirements.txt
os.chdir('/content/transformers/examples')
!pip install dict_to_obj
2. !python run_language_modeling.py \
--output_dir='/content/drive/My Drive/finetuned_models/xlnet_large'\
--model_type=xlnet \
--model_name_or_path=xlnet-large-cased \
--should_continue \
--save_total_limit=5 \
--num_train_epochs=1.0 \
--do_train \
--evaluate_during_training \
--logging_steps=500 \
--save_steps=500 \
--train_data_file='/content/drive/My Drive/finetuned_models/train.txt' \
--do_eval \
--eval_data_file='/content/drive/My Drive/finetuned_models/valid.txt' \
--per_gpu_train_batch_size=2 \
--per_gpu_eval_batch_size=2 \
--block_size=128 \
--gradient_accumulation_steps=5
3.
[INFO|modeling_utils.py:1065] 2020-11-17 20:59:57,425 >> All the weights of XLNetLMHeadModel were initialized from the model checkpoint at /content/drive/MyDrive/finetuned_models/xlnet_base/checkpoint-26500.
If your task is similar to the task the model of the checkpoint was trained on, you can already use XLNetLMHeadModel for predictions without further training.
100%|██████████| 311/311 [01:01<00:00, 5.09ba/s]
100%|██████████| 133/133 [00:25<00:00, 5.13ba/s]
0%| | 0/311 [00:00<?, ?ba/s]Traceback (most recent call last):
File "run_clm.py", line 348, in <module>
main()
File "run_clm.py", line 300, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map
update_data=update_data,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1525, in _map_single
writer.write_batch(batch)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 278, in write_batch
pa_table = pa.Table.from_pydict(typed_sequence_examples)
File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 100, in __arrow_array__
if trying_type and out[0].as_py() != self.data[0]:
File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__
File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index
IndexError: index out of bounds
0%| | 0/311 [00:02<?, ?ba/s]
## Expected behavior
<!-- I started fine-tuning the XLNet models using Google Colab. The Colab notebook times out after 20 hours, which is fine, but when I try to continue training, I get the error above. I have looked at similar issue reports on this repo but I was still unable to get around this error. Please, do you know what I am doing wrong? And what can I do to fix it?
Thanks. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8674/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8674/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8673/comments | https://api.github.com/repos/huggingface/transformers/issues/8673/events | https://github.com/huggingface/transformers/pull/8673 | 746,971,330 | MDExOlB1bGxSZXF1ZXN0NTI0MjcxMDc5 | 8,673 | [model_cards] Add card for gpt2-rnm | {
"login": "e-tornike",
"id": 20404466,
"node_id": "MDQ6VXNlcjIwNDA0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tornike",
"html_url": "https://github.com/e-tornike",
"followers_url": "https://api.github.com/users/e-tornike/followers",
"following_url": "https://api.github.com/users/e-tornike/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions",
"organizations_url": "https://api.github.com/users/e-tornike/orgs",
"repos_url": "https://api.github.com/users/e-tornike/repos",
"events_url": "https://api.github.com/users/e-tornike/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tornike/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
Adds a new model card for the `e-tony/gpt2-rnm` model.
## Before submitting
- [X] This PR fixes a typo or improves the docs.
## Who can review?
@julien-c
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8673/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8673",
"html_url": "https://github.com/huggingface/transformers/pull/8673",
"diff_url": "https://github.com/huggingface/transformers/pull/8673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8673.patch",
"merged_at": 1606128750000
} |
https://api.github.com/repos/huggingface/transformers/issues/8672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8672/comments | https://api.github.com/repos/huggingface/transformers/issues/8672/events | https://github.com/huggingface/transformers/pull/8672 | 746,924,877 | MDExOlB1bGxSZXF1ZXN0NTI0MjMyNDQx | 8,672 | Add sentencepiece to the CI and fix tests | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
When sentencepiece was removed from the dependencies of Transformers, the CI started to skip all tests that required it. As a result, some failures due to new breaking changes (the switch to fast tokenizers by default and the removal of `max_len`, a deprecated tokenizer argument) went unnoticed.
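For context, sentencepiece-dependent tests are guarded so that they skip rather than fail when the package is missing; a minimal sketch of what such a test looks like (assuming a `require_sentencepiece`-style decorator in `transformers.testing_utils`; the test name, checkpoint, and assertion are illustrative):
```python
from transformers.testing_utils import require_sentencepiece

@require_sentencepiece
def test_albert_tokenizer_loads():
    # AlbertTokenizer needs sentencepiece, so without the package this test is skipped
    from transformers import AlbertTokenizer

    tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
    assert len(tokenizer.tokenize("Hello world")) > 0
```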
This PR adds back the sentencepiece install on all CI checks and fixes the resulting failing tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8672/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8672",
"html_url": "https://github.com/huggingface/transformers/pull/8672",
"diff_url": "https://github.com/huggingface/transformers/pull/8672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8672.patch",
"merged_at": 1605822261000
} |
https://api.github.com/repos/huggingface/transformers/issues/8671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8671/comments | https://api.github.com/repos/huggingface/transformers/issues/8671/events | https://github.com/huggingface/transformers/issues/8671 | 746,898,197 | MDU6SXNzdWU3NDY4OTgxOTc= | 8,671 | Running Roberta on Race Multi choice dataset giving error | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | I am trying to use the script provided for RACE dataset training with BERT/RoBERTa models. I am running the script and getting this error:
```
python3 run_multiple_choice.py --task_name race --model_name_or_path roberta-base--do_train --do_eval --data_dir $SWAG_DIR --learning_rate 5e-5 --num_train_epochs 3 --max_seq_length 80 --output_dir models_bert/swag_base --per_gpu_eval_batch_size=16 --per_device_train_batch_size=16 --gradient_accumulation_steps 2 --overwrite_output
/home/admin/Monk/lib/python3.6/site-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
11/19/2020 12:22:36 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForMultipleChoice: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model trained on another taskor with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of RobertaForMultipleChoice were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "run_multiple_choice.py", line 237, in <module>
main()
File "run_multiple_choice.py", line 171, in main
if training_args.do_train
File "/home/admin/Monk/hf_mcq/utils_multiple_choice.py", line 113, in __init__
with FileLock(lock_path):
File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 323, in __enter__
self.acquire()
File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 271, in acquire
self._acquire()
File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 384, in _acquire
fd = os.open(self._lock_file, open_mode)
FileNotFoundError: [Errno 2] No such file or directory: '/RACE/cached_train_RobertaTokenizer_80_race.lock'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8670/comments | https://api.github.com/repos/huggingface/transformers/issues/8670/events | https://github.com/huggingface/transformers/issues/8670 | 746,896,999 | MDU6SXNzdWU3NDY4OTY5OTk= | 8,670 | Is Reformer supported under Encoder-Decoder framework? | {
"login": "spookypineapple",
"id": 48697483,
"node_id": "MDQ6VXNlcjQ4Njk3NDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/48697483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spookypineapple",
"html_url": "https://github.com/spookypineapple",
"followers_url": "https://api.github.com/users/spookypineapple/followers",
"following_url": "https://api.github.com/users/spookypineapple/following{/other_user}",
"gists_url": "https://api.github.com/users/spookypineapple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spookypineapple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spookypineapple/subscriptions",
"organizations_url": "https://api.github.com/users/spookypineapple/orgs",
"repos_url": "https://api.github.com/users/spookypineapple/repos",
"events_url": "https://api.github.com/users/spookypineapple/events{/privacy}",
"received_events_url": "https://api.github.com/users/spookypineapple/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It's not yet supported sadly",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | Hi,
Is it possible to use Reformer with the encoder-decoder framework (i.e., Reformer2Reformer)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8669/comments | https://api.github.com/repos/huggingface/transformers/issues/8669/events | https://github.com/huggingface/transformers/issues/8669 | 746,878,581 | MDU6SXNzdWU3NDY4Nzg1ODE= | 8,669 | Make signature of `compute_metrics` parameter in Trainer class more flexible | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> For some models, I had to either pass some extra arguments to perform the metrics calculation or to access remote services to retrieve some additional data.\r\n\r\nHow did you do that in the current `Trainer`? I'm not against making `compute_metrics` more flexible but I don't see how you can add more arguments to its call without subclassing Trainer and overriding certain methods, in which case, you can also override the init.",
"In fact I have my own subclassed Trainer, but I wanted to reuse as many as possible of the current `__init__` parameters of the regular Trainer. I know I can define my own `compute_metrics` function with a different signature in the `__init__` method of my Trainer, but I was trying to avoid that and reuse the current `compute_metrics` signature :-)\r\n\r\nMaybe the example above is not clear enough as the the DummyTrainer is not subclassing the standard one. The example was trying to highlight the function signature more than the reuse of the original Trainer.\r\n",
"This seems like a very edge case to me, so I would leave the current `Trainer` as is, and adapt the `__init__` in your subclasses of `Trainer`.",
"Yes, I agree that is kind of a corner case. In my subclassed Trainer I was trying to reuse the `prediction_loop()` method from the base Trainer but I needed to pass some more parameters to my `compute_metrics` function, apart from the `EvalPrediction` param. So the easiest workaround was of course, to copy the original `prediction_loop()` in my subclassed Trainer and, instead of calling:\r\n\r\nhttps://github.com/huggingface/transformers/blob/8062fa63c564d4cc0d29573acd89092f1eb1df64/src/transformers/trainer.py#L1398\r\n\r\n, I call my version of `compute_metrics` with the extra parameters. \r\n\r\nWith this proposal was trying to avoid that ugly copy-paste I had to do by 1) changing the `compute_metrics` signature as I described above, 2) defining an attribute in Trainer to serve as a placeholder for the extra metrics arguments (and which can be set from subclassed Trainers,) e.g.:\r\n\r\n```python\r\nself.metrics_extra_args: Optional[Dict] = None\r\n```\r\nand 3) changing line 1398 to something like:\r\n\r\n```python\r\nmetrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids), self.metrics_extra_args)\r\n```\r\n\r\nBut I understand that is kind of convoluted. Thanks @sgugger in any case for your consideration!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I really need this"
] | 1,605 | 1,663 | 1,619 | NONE | null | # 🚀 Feature request
The current typing signature for the `compute_metrics` parameter in the `Trainer` class is:
```python
class Trainer:
    ...
    def __init__(
        ...
        compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,
        ...
```
As it is described now with the Python typing system, the only parameter that you can pass to the function is `EvalPrediction`, which contains the model predictions used to calculate the metrics. I propose to make the function signature of `compute_metrics` a little bit more flexible, for example:
```python
compute_metrics: Optional[Callable[[EvalPrediction, Optional[Any]], Dict]] = None,
```
or
```python
compute_metrics: Optional[Callable[[EvalPrediction, Optional[Dict]], Dict]] = None,
```
so users can pass an extra argument (e.g., a `Dict`) with additional information that can be used in the function.
The solution is not perfect, in the sense that the type checks of IDEs will scream when already-defined functions in current user projects pass a single parameter (see example below), as I haven't found a way of assigning a default value to the `Optional` in the `Callable` signature; using the `Ellipsis` is not possible either, unless I've missed something (comments are welcome on this!)
## Motivation
For some models, I had to either pass some extra arguments to perform the metrics calculation or access remote services to retrieve some additional data. The current typing signature for `compute_metrics` does not allow passing these extra params, so I had to resort to dirty workarounds.
```python
from abc import ABC
from typing import Optional, Callable, Dict, Any


def g(a: int):
    print(f"a in g: {a}")
    return {}


def h(a: int, b: Optional[int] = None):
    print(f"a in h: {a}")
    if b:
        print(f"b passed to g: {b}")
    return {}


class Dummy(ABC):
    def __init__(self, f: Optional[Callable[[int, Optional[Any]], Dict]] = None):
        self.f = f

    def test_f(self):
        if self.f:
            print(f"Calling {self.f}")
            if self.f.__name__ == "g":
                self.f(1)  # <- here the typing system screams a bit
            elif self.f.__name__ == "h":
                self.f(2, 3)
        else:
            print("Not calling anything")


if __name__ == '__main__':
    o = Dummy(g)
    o.test_f()
    o = Dummy(h)
    o.test_f()
```
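As a rough sketch of the kind of workaround this proposal would make unnecessary (the class, attribute, and method names below are made up for illustration; they are not part of the `Trainer` API), a subclass can stash the extra arguments and forward them to a two-argument `compute_metrics`:
```python
from typing import Any, Dict, Optional

from transformers import EvalPrediction, Trainer


class TrainerWithMetricArgs(Trainer):
    def __init__(self, *args, metrics_extra_args: Optional[Dict[str, Any]] = None, **kwargs):
        super().__init__(*args, **kwargs)
        # placeholder attribute for whatever extra data compute_metrics needs
        self.metrics_extra_args = metrics_extra_args

    def _call_compute_metrics(self, eval_pred: EvalPrediction) -> Dict:
        # assumes the user-provided compute_metrics accepts an optional second argument
        return self.compute_metrics(eval_pred, self.metrics_extra_args)
```
With the proposed signature, the extra data could instead be passed through the prediction loop directly, without this kind of subclassing.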
## Contribution
If someone else has had similar needs, you think this is a good idea, or you have a better suggestion, I can provide a PR for this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8669/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8668/comments | https://api.github.com/repos/huggingface/transformers/issues/8668/events | https://github.com/huggingface/transformers/pull/8668 | 746,870,260 | MDExOlB1bGxSZXF1ZXN0NTI0MTg3OTAy | 8,668 | Update bert-base-multilingual-cased-README.md | {
"login": "AsliRoy",
"id": 28287489,
"node_id": "MDQ6VXNlcjI4Mjg3NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/28287489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AsliRoy",
"html_url": "https://github.com/AsliRoy",
"followers_url": "https://api.github.com/users/AsliRoy/followers",
"following_url": "https://api.github.com/users/AsliRoy/following{/other_user}",
"gists_url": "https://api.github.com/users/AsliRoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AsliRoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AsliRoy/subscriptions",
"organizations_url": "https://api.github.com/users/AsliRoy/orgs",
"repos_url": "https://api.github.com/users/AsliRoy/repos",
"events_url": "https://api.github.com/users/AsliRoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/AsliRoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Good catch, thanks!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | The heading was originally uncased, which did not reflect the contents of this README. Changed it to cased.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8668/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8668",
"html_url": "https://github.com/huggingface/transformers/pull/8668",
"diff_url": "https://github.com/huggingface/transformers/pull/8668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8668.patch",
"merged_at": 1605818706000
} |
https://api.github.com/repos/huggingface/transformers/issues/8667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8667/comments | https://api.github.com/repos/huggingface/transformers/issues/8667/events | https://github.com/huggingface/transformers/pull/8667 | 746,856,577 | MDExOlB1bGxSZXF1ZXN0NTI0MTc2NTg2 | 8,667 | Alternative to globals() | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR does two things:
- remove some tokenizer classes that were used in a dictionary before being erased, which was super weird
- add a function that goes from tokenizer class name to tokenizer class to avoid using `globals` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8667",
"html_url": "https://github.com/huggingface/transformers/pull/8667",
"diff_url": "https://github.com/huggingface/transformers/pull/8667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8667.patch",
"merged_at": 1605839162000
} |
https://api.github.com/repos/huggingface/transformers/issues/8666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8666/comments | https://api.github.com/repos/huggingface/transformers/issues/8666/events | https://github.com/huggingface/transformers/pull/8666 | 746,756,863 | MDExOlB1bGxSZXF1ZXN0NTI0MDk1NDA2 | 8,666 | Fix a few last paths for the new repo org | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
Fixes a few old paths in documentation or examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8666/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8666",
"html_url": "https://github.com/huggingface/transformers/pull/8666",
"diff_url": "https://github.com/huggingface/transformers/pull/8666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8666.patch",
"merged_at": 1605805002000
} |
https://api.github.com/repos/huggingface/transformers/issues/8665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8665/comments | https://api.github.com/repos/huggingface/transformers/issues/8665/events | https://github.com/huggingface/transformers/pull/8665 | 746,745,364 | MDExOlB1bGxSZXF1ZXN0NTI0MDg2NDIw | 8,665 | Use return_dict in RagModel forward pass | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing since it was fixed in #8585"
] | 1,605 | 1,605 | 1,605 | MEMBER | null | There were changes in the output format of models but it looks like the RagModel forward pass was not updated to use `return_dict` as noticed in #8653
I'm running the slow tests right now. If they pass, I will update this from a draft pull request to an open pull request. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8665/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8665",
"html_url": "https://github.com/huggingface/transformers/pull/8665",
"diff_url": "https://github.com/huggingface/transformers/pull/8665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8665.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8664/comments | https://api.github.com/repos/huggingface/transformers/issues/8664/events | https://github.com/huggingface/transformers/pull/8664 | 746,735,168 | MDExOlB1bGxSZXF1ZXN0NTI0MDc3OTU5 | 8,664 | Fix run_ner script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
There have been a few breaking changes in the Datasets library that resulted in `run_ner` not working. This PR addresses that.
Fixes #8654 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8664/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8664",
"html_url": "https://github.com/huggingface/transformers/pull/8664",
"diff_url": "https://github.com/huggingface/transformers/pull/8664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8664.patch",
"merged_at": 1605812371000
} |
https://api.github.com/repos/huggingface/transformers/issues/8663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8663/comments | https://api.github.com/repos/huggingface/transformers/issues/8663/events | https://github.com/huggingface/transformers/pull/8663 | 746,722,722 | MDExOlB1bGxSZXF1ZXN0NTI0MDY3NTM1 | 8,663 | transformers-cli: LFS multipart uploads (> 5GB) | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
":+1: nice job!",
"Hi, this command may need a args?\r\n```\r\ntransformers-cli lfs-enable-largefiles repo_path\r\n```\r\n",
"> Hi, this command may need a args?\r\n\r\nYes, correct\r\n",
"Thanks a lot for the PR, and thanks for letting me know.\r\n\r\nI will give my feedback after testing it with the 11B model soon. ",
"Hi, thanks a lot for the PR.\r\nI try to reinstall transformers with this new branch and follow with this comment:\r\n```\r\ngit clone https://huggingface.co/mymusise/CPM-Third-Party\r\ncd CPM-Third-Party\r\ngit lfs install \r\ntransformers-cli lfs-enable-largefiles .\r\n\r\ncp ../models/tf_model.h5 ./\r\ngit add . && git commit -m 'add model'\r\ngit push\r\n```\r\nThen, after half an hour later, it raised an error:\r\n```\r\n$ git push\r\nGit LFS: (0 of 1 files) 0 B / 9.68 GB\r\nGit LFS: (0 of 1 files) 4.66 GB / 9.68 GB\r\nEOFoading LFS objects: 0% (0/1), 9.31 B / 9.68 GB\r\nerror: failed to push some refs to 'https://huggingface.co/mymusise/CPM-Third-Party'\r\n```\r\n\r\nDid I do something wrong?",
"@mymusise might have been an intermittent server error. Can you try again?",
"> @mymusise might have been an intermittent server error. Can you try again?\r\n\r\nYes, I try again and again. But it always raises this error at `9.31 GB / 9.68 GB`.\r\n\r\n:eyes: But, now `git push` will return a ` 502 ` error :\r\n```\r\nfatal: unable to access 'https://huggingface.co/mymusise/CPM-Third-Party/': The requested URL returned error: 502\r\n```",
"@mymusise Yes, looks like this is crashing/locking the server 😱\r\n\r\nDo you mind trying again on Monday? As we'll have more bandwidth to fix then. Sorry about that :/",
"It's ok, guy, haha. Waiting for your good news. \r\nHappy weekend!",
">Yes, I try again and again. But it always raises this error at 9.31 GB / 9.68 GB.\r\n\r\nHi, I try again today and push the big model file without any exception! :tada: \r\nThank guys!",
"Hi, here I push another model file(4.9GB) again, but this time it gives me a **504** Gateway Time-out error :sweat:\r\n\r\n```\r\nroot@iZt4n9z4x3ph9oc3hhrdneZ:~/CPM-FP16-Third-Party# time git push \r\nUsername for 'https://huggingface.co': mymusise\r\nPassword for 'https://[email protected]': \r\nCounting objects: 3, done.\r\nCompressing objects: 100% (2/2), done.\r\nWriting objects: 66% (2/3), 2.52 GiB | 3.62 MiB/s \r\nWriting objects: 100% (3/3), 4.46 GiB | 3.77 MiB/s, done.\r\nTotal 3 (delta 0), reused 1 (delta 0)\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out\r\nfatal: The remote end hung up unexpectedly\r\nfatal: The remote end hung up unexpectedly\r\nEverything up-to-date\r\n\r\nreal\t22m8.653s\r\nuser\t0m29.758s\r\nsys\t0m17.136s\r\n```\r\n\r\nSeems the big file is uploaded completely, it looks like there is some problem with the server configuration about the timeout. @julien-c \r\n\r\nTried three times with the same result.",
"@Pierrci - I ran some tests on ~10 GB files (`t5-3b`) and didn't encounter any problems! However when doing it for ~45GB files (`t5-11b`), I encounter some problems.\r\n\r\n```\r\ncd t5-11b-repo\r\ntransformers-cli lfs-enable-largefiles /path/to/repo\r\ngit add .\r\ngit commit -m \"add\"\r\ngit push # <= this command failse\r\n```\r\n\r\nIn case it is useful, here is a link to the `git trace` error message: https://github.com/patrickvonplaten/files_to_link_to/blob/master/output.txt\r\n\r\nI can always go back to the way of manually uploading to the git-lfs hash path. Maybe the error message is helpful though :-) ",
"Thanks @patrickvonplaten, I see where it might be coming from, gonna look at it now!",
"I deployed a fix that should address your problem, can you try again @patrickvonplaten?\r\n\r\n@mymusise Is it possible for you to try again with `GIT_CURL_VERBOSE=1 git push` so that we can try to get more information? From what you shared so far, your error seems to be different from Patrick's one.",
"Thanks, @Pierrci. Yes, I think my error is different from Patrick's one. Here is the [information](https://gist.github.com/mymusise/bfa331e3effe0876efbc0011334ead96) with `GIT_CURL_VERBOSE=1 git push`. \r\nHope it help.\r\n",
"> I deployed a fix that should address your problem, can you try again @patrickvonplaten?\r\n> \r\n> @mymusise Is it possible for you to try again with `GIT_CURL_VERBOSE=1 git push` so that we can try to get more information? From what you shared so far, your error seems to be different from Patrick's one.\r\n\r\nAwesome it works now @Pierrci - thanks! :-) ",
"> Thanks, @Pierrci. Yes, I think my error is different from Patrick's one. Here is the [information](https://gist.github.com/mymusise/bfa331e3effe0876efbc0011334ead96) with `GIT_CURL_VERBOSE=1 git push`.\r\n> Hope it help.\r\n\r\n@mymusise Are you sure LFS is properly installed and configured for the repo? From your logs it seems your `git push` command isn't doing any LFS work (like running a `pre-push` hook or calling our LFS endpoint), trying instead to push all the files through the classic git endpoint, which can't work."
] | 1,605 | 1,607 | 1,607 | MEMBER | null | ### Implementation of a custom transfer agent for the transfer type "multipart" for git-lfs. This lets users upload large files >5GB 🔥.
Spec for LFS custom transfer agent is: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md
The PR introduces two commands to the CLI:
```
transformers-cli lfs-enable-largefiles ./path/to/repo
```
^ Do this once per model repo where you want to push >5GB files. It's documented in the error message you get if you just try to `git push` a 5GB file without having enabled it before.
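Under the hood, enabling large files should boil down to registering the CLI as a git-lfs custom transfer agent in the repo-local git config. A minimal sketch of that step (the agent name "multipart", the exact config keys, and the helper function below are assumptions based on the spec linked above, not code taken from this PR):

```python
import subprocess

def lfs_enable_largefiles(repo_path: str) -> None:
    def git_config(key: str, value: str) -> None:
        # write a repository-local git config entry
        subprocess.run(["git", "config", key, value], cwd=repo_path, check=True)

    # Register transformers-cli as the "multipart" custom transfer agent so
    # git-lfs can delegate uploads that exceed the 5GB single-request limit.
    git_config("lfs.customtransfer.multipart.path", "transformers-cli")
    git_config("lfs.customtransfer.multipart.args", "lfs-multipart-upload")
```

With something like this in place, git-lfs advertises the "multipart" transfer when negotiating with the server, which can presumably select it for objects too large for a single upload.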
```
transformers-cli lfs-multipart-upload
```
^ is the custom transfer agent itself. This is not meant to be called by the user, but by lfs directly.
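For context, a custom transfer agent in the spec above is just a long-running process that exchanges one JSON message per line over stdin/stdout. A rough sketch of that message loop (the event names follow the spec; the chunked-upload logic itself is elided, and nothing here is taken from this PR's actual implementation):

```python
import json
import sys

def send(msg: dict) -> None:
    # git-lfs expects exactly one JSON object per line on stdout
    sys.stdout.write(json.dumps(msg) + "\n")
    sys.stdout.flush()

def receive() -> dict:
    line = sys.stdin.readline()
    return json.loads(line) if line else {"event": "terminate"}

def main() -> None:
    receive()   # {"event": "init", "operation": "upload", ...}
    send({})    # an empty object tells git-lfs the agent is ready
    while True:
        msg = receive()
        if msg["event"] == "terminate":
            break
        if msg["event"] == "upload":
            oid, size = msg["oid"], msg["size"]
            # ... split msg["path"] into parts and upload each one here ...
            send({"event": "progress", "oid": oid,
                  "bytesSoFar": size, "bytesSinceLast": size})
            send({"event": "complete", "oid": oid})

if __name__ == "__main__":
    main()
```

Presumably the real agent asks the server for presigned part URLs and reports progress after each chunk rather than all at once.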
### Things to experiment with:
- [ ] upload speed. Is it sufficient? Please comment with your upload speeds e.g. for https://huggingface.co/t5-3b
Experiment:
```bash
time git clone https://huggingface.co/t5-3b
cd t5-3b
git remote set-url origin https://huggingface.co/$USER/t5-3b-clone
# ^ After having created this model repo in your account
transformers-cli lfs-enable-largefiles .
git reset 5e0a32db352b33091ea9fb2f8d8782d47a505986 # go back to initial commit for lfs to reupload files
git add pytorch_model.bin
git commit -m "ok"
time git push
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8663/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8663",
"html_url": "https://github.com/huggingface/transformers/pull/8663",
"diff_url": "https://github.com/huggingface/transformers/pull/8663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8663.patch",
"merged_at": 1607377120000
} |
https://api.github.com/repos/huggingface/transformers/issues/8662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8662/comments | https://api.github.com/repos/huggingface/transformers/issues/8662/events | https://github.com/huggingface/transformers/issues/8662 | 746,695,429 | MDU6SXNzdWU3NDY2OTU0Mjk= | 8,662 | Can't upload the larger model file(9GB) | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a known issue that's being tracked at #8663"
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | Hey guys, I ran into a problem when uploading my new model file; it raises an error.
```
/data2/CPM-TF/CPM-Third-Party on main ⌚ 23:30:47
$ git push
Username for 'https://huggingface.co': mymusises
Password for 'https://[email protected]':
LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/mymusise/CPM-Third-Party/d853c...?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AK...%2F20201119%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201119T153653Z&X-Amz-Expires=900&X-Amz-Signature=84...&X-Amz-SignedHeaders=host
error: failed to push some refs to 'https://huggingface.co/mymusise/CPM-Third-Party'
(env)
/data2
```
Then I deleted the folder and tried again, and I noticed there's an error when I add the model file:
```
/data2/CPM-TF/CPM-Third-Party on main! ⌚ 23:24:04
$ git add --all
Encountered 1 file(s) that may not have been copied correctly on Windows:
tf_model.h5
```
And I'm sure I have installed `git-lfs` with `git lfs install` before doing this.
What should I do?
## system info
system: ubuntu 18.04.4
git version: 2.17.1
git-lfs version: 2.12.1
Model Cards: @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8661/comments | https://api.github.com/repos/huggingface/transformers/issues/8661/events | https://github.com/huggingface/transformers/issues/8661 | 746,694,353 | MDU6SXNzdWU3NDY2OTQzNTM= | 8,661 | cannot load t5-base config | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, could you please fill the issue template when opening issues? Otherwise we cannot help you. Thanks.",
"## Environment info\r\n \r\n- `transformers` version: 3.5.1\r\n- Platform: cpu \r\n- Python version: 3.7\r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): \r\n tensorflow-datasets 4.1.0 <pip>\r\ntensorflow-metadata 0.25.0 <pip>\r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n\r\n Here is my two lines to get t5-config:\r\n\r\n```\r\nfrom transformers import AutoConfig\r\nconfig = AutoConfig.from_pretrained('t5-base')\r\n```\r\n Model Cards: @julien-c\r\n T5: @patrickvonplaten\r\n\r\n## Information\r\nHere are the errors, is there an issue with t5-base storage? I am really confused, thank you for your help on this.\r\nthanks \r\n\r\n```\r\nfile t5-base/config.json not found\r\nTraceback (most recent call last):\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 388, in get_config_dict\r\n local_files_only=local_files_only,\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py\", line 962, in cached_path\r\n raise EnvironmentError(\"file {} not found\".format(url_or_filename))\r\nOSError: file t5-base/config.json not found\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 2, in <module>\r\n config = AutoConfig.from_pretrained('t5-base')\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_auto.py\", line 333, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 400, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for 't5-base'. Make sure that:\r\n\r\n- 't5-base' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 't5-base' is the correct path to a directory containing a config.json file\r\n```\r\n\r\n## To reproduce\r\nplease run the two lines above.\r\n\r\n## Expected behavior\r\nloading the config \r\n",
"Hi, sure, done",
"Hi @LysandreJik this issues is really weird and really blocking me, I greatly appreciate having a look. thanks ",
"Hey @rabeehk - I cannot reproduce the error. I suspect the following:\r\n\r\nYou run the code from a directory that includes a local folder that is called `t5-base` which does not have a `config.json`.\r\nCould you try to run:\r\n\r\n```\r\nfrom transformers import AutoConfig\r\nconfig = AutoConfig.from_pretrained('t5-base')\r\n```\r\n\r\nfrom another folder or delete the `t5-base` folder that might be the reason? ",
"Hi patrick\nthank you for the reply, this is what happening when I call this line:\n\n config = T5Config.from_pretrained(\n model_args.config_name if model_args.config_name else\nmodel_args.model_name_or_path,\n )\n\nthe code by itself creates an empty t5-base directory, I delete it and then\nit recreates it.\nDo you have an idea on this?\nthanks\nBest\nRabeeh\n\nOn Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen <[email protected]>\nwrote:\n\n> Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the error.\n> I suspect the following:\n>\n> You run the code from a directory that includes a local folder that is\n> called t5-base which does not have a config.json.\n> Could you try to run:\n>\n> from transformers import AutoConfig\n> config = AutoConfig.from_pretrained('t5-base')\n>\n> from another folder or delete the t5-base folder that might be the reason?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ>\n> .\n>\n",
"Could you tell me please how to set the path when using datasets that it\ndoes not cache in the default path?\nto me there might not be space for the model and this is all happening.\nOverall all the caches done in huggingface code, could you tell me how to\nset the different path for them\nthanks\n\nOn Fri, Nov 20, 2020 at 1:12 PM Rabeeh Karimi <[email protected]> wrote:\n\n> Hi patrick\n> thank you for the reply, this is what happening when I call this line:\n>\n> config = T5Config.from_pretrained(\n> model_args.config_name if model_args.config_name else\n> model_args.model_name_or_path,\n> )\n>\n> the code by itself creates an empty t5-base directory, I delete it and\n> then it recreates it.\n> Do you have an idea on this?\n> thanks\n> Best\n> Rabeeh\n>\n> On Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen <\n> [email protected]> wrote:\n>\n>> Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the\n>> error. I suspect the following:\n>>\n>> You run the code from a directory that includes a local folder that is\n>> called t5-base which does not have a config.json.\n>> Could you try to run:\n>>\n>> from transformers import AutoConfig\n>> config = AutoConfig.from_pretrained('t5-base')\n>>\n>> from another folder or delete the t5-base folder that might be the\n>> reason?\n>>\n>> —\n>> You are receiving this because you were mentioned.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ>\n>> .\n>>\n>\n",
"So when I run the codes, there is caching done here\ncahce dir /idiap/home/rkarimi/.cache/huggingface/datasets\ncahce dir /idiap/home/rkarimi/.cache/huggingface/datasets/downloads\n\ncould I change this?\nthanks\nBest\nRabeeh\n\nOn Fri, Nov 20, 2020 at 1:44 PM Rabeeh Karimi <[email protected]> wrote:\n\n> Could you tell me please how to set the path when using datasets that it\n> does not cache in the default path?\n> to me there might not be space for the model and this is all happening.\n> Overall all the caches done in huggingface code, could you tell me how to\n> set the different path for them\n> thanks\n>\n> On Fri, Nov 20, 2020 at 1:12 PM Rabeeh Karimi <[email protected]>\n> wrote:\n>\n>> Hi patrick\n>> thank you for the reply, this is what happening when I call this line:\n>>\n>> config = T5Config.from_pretrained(\n>> model_args.config_name if model_args.config_name else\n>> model_args.model_name_or_path,\n>> )\n>>\n>> the code by itself creates an empty t5-base directory, I delete it and\n>> then it recreates it.\n>> Do you have an idea on this?\n>> thanks\n>> Best\n>> Rabeeh\n>>\n>> On Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen <\n>> [email protected]> wrote:\n>>\n>>> Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the\n>>> error. I suspect the following:\n>>>\n>>> You run the code from a directory that includes a local folder that is\n>>> called t5-base which does not have a config.json.\n>>> Could you try to run:\n>>>\n>>> from transformers import AutoConfig\n>>> config = AutoConfig.from_pretrained('t5-base')\n>>>\n>>> from another folder or delete the t5-base folder that might be the\n>>> reason?\n>>>\n>>> —\n>>> You are receiving this because you were mentioned.\n>>> Reply to this email directly, view it on GitHub\n>>> <https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761>,\n>>> or unsubscribe\n>>> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ>\n>>> .\n>>>\n>>\n",
"> Hi patrick thank you for the reply, this is what happening when I call this line: config = T5Config.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, ) the code by itself creates an empty t5-base directory, I delete it and then it recreates it. Do you have an idea on this? thanks Best Rabeeh\r\n> […](#)\r\n> On Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen ***@***.***> wrote: Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the error. I suspect the following: You run the code from a directory that includes a local folder that is called t5-base which does not have a config.json. Could you try to run: from transformers import AutoConfig config = AutoConfig.from_pretrained('t5-base') from another folder or delete the t5-base folder that might be the reason? — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#8661 (comment)](https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ> .\r\n\r\nI don't think `from_pretrained(...)` ever creates a directory -> this should not happen. Not sure what's going on there. Can you maybe add a colab where I can reproduce your error?",
"> Could you tell me please how to set the path when using datasets that it does not cache in the default path? to me there might not be space for the model and this is all happening. Overall all the caches done in huggingface code, could you tell me how to set the different path for them thanks\r\n> […](#)\r\n> On Fri, Nov 20, 2020 at 1:12 PM Rabeeh Karimi ***@***.***> wrote: Hi patrick thank you for the reply, this is what happening when I call this line: config = T5Config.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, ) the code by itself creates an empty t5-base directory, I delete it and then it recreates it. Do you have an idea on this? thanks Best Rabeeh On Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen < ***@***.***> wrote: > Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the > error. I suspect the following: > > You run the code from a directory that includes a local folder that is > called t5-base which does not have a config.json. > Could you try to run: > > from transformers import AutoConfig > config = AutoConfig.from_pretrained('t5-base') > > from another folder or delete the t5-base folder that might be the > reason? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <[#8661 (comment)](https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761)>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ> > . >\r\n\r\nIs this issue about `t5-base` config or about datasets? I don't follow here",
"Hi Patrick\nthis is now solved, I was mistakenly choose the output_path as t5-base and\nthis was the reason for the creation of empty t5-base directory.\nI would like to thank you so much for the help.\nBest\nRabeeh\n\nOn Fri, Nov 20, 2020 at 3:01 PM Patrick von Platen <[email protected]>\nwrote:\n\n> Could you tell me please how to set the path when using datasets that it\n> does not cache in the default path? to me there might not be space for the\n> model and this is all happening. Overall all the caches done in huggingface\n> code, could you tell me how to set the different path for them thanks\n> … <#m_3116611502979953478_>\n> On Fri, Nov 20, 2020 at 1:12 PM Rabeeh Karimi *@*.*> wrote: Hi patrick\n> thank you for the reply, this is what happening when I call this line:\n> config = T5Config.from_pretrained( model_args.config_name if\n> model_args.config_name else model_args.model_name_or_path, ) the code by\n> itself creates an empty t5-base directory, I delete it and then it\n> recreates it. Do you have an idea on this? thanks Best Rabeeh On Thu, Nov\n> 19, 2020 at 9:32 PM Patrick von Platen < @.*> wrote: > Hey @rabeehk\n> <https://github.com/rabeehk> https://github.com/rabeehk - I cannot\n> reproduce the > error. I suspect the following: > > You run the code from a\n> directory that includes a local folder that is > called t5-base which does\n> not have a config.json. > Could you try to run: > > from transformers\n> import AutoConfig > config = AutoConfig.from_pretrained('t5-base') > > from\n> another folder or delete the t5-base folder that might be the > reason? > >\n> — > You are receiving this because you were mentioned. > Reply to this\n> email directly, view it on GitHub > <#8661 (comment)\n> <https://github.com/huggingface/transformers/issues/8661#issuecomment-730620761>>,\n> > or unsubscribe >\n> https://github.com/notifications/unsubscribe-auth/ABP4ZCGIXURMYC7WWVFZPXDSQV6GTANCNFSM4T3SASPQ\n> > . >\n>\n> Is this issue about t5-base config or about datasets? I don't follow here\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8661#issuecomment-731187426>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCATGEHAA7275ZAEFZLSQZZFPANCNFSM4T3SASPQ>\n> .\n>\n"
] | 1,605 | 1,605 | 1,605 | NONE | null | Hi
Here are my two lines to get the t5-base config:
```
from transformers import AutoConfig
config = AutoConfig.from_pretrained('t5-base')
```
Here are the errors. Is there an issue with t5-base storage? I am really confused; thank you for your help on this.
thanks
```
file t5-base/config.json not found
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 388, in get_config_dict
local_files_only=local_files_only,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 962, in cached_path
raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file t5-base/config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 2, in <module>
config = AutoConfig.from_pretrained('t5-base')
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_auto.py", line 333, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 400, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 't5-base'. Make sure that:
- 't5-base' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-base' is the correct path to a directory containing a config.json file
```
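(For anyone hitting the same trace: as the discussion in the comments suggests, `from_pretrained` treats an existing local folder named `t5-base` as a path and looks for `config.json` inside it, so a stray empty `./t5-base` directory in the working directory shadows the hub model id. A quick, hypothetical check, not part of the original report:)

```python
import os
from transformers import AutoConfig

# hypothetical sanity check for the directory-shadowing problem
if os.path.isdir("t5-base"):
    print("A local ./t5-base folder exists and will shadow the hub model id.")
config = AutoConfig.from_pretrained("t5-base")
print(config.model_type)  # expected: "t5"
```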
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8661/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8661/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8660/comments | https://api.github.com/repos/huggingface/transformers/issues/8660/events | https://github.com/huggingface/transformers/pull/8660 | 746,631,712 | MDExOlB1bGxSZXF1ZXN0NTIzOTkxOTMx | 8,660 | Fix bug in x-attentions output for roberta and harden test to catch it | {
"login": "ysgit",
"id": 898918,
"node_id": "MDQ6VXNlcjg5ODkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/898918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysgit",
"html_url": "https://github.com/ysgit",
"followers_url": "https://api.github.com/users/ysgit/followers",
"following_url": "https://api.github.com/users/ysgit/following{/other_user}",
"gists_url": "https://api.github.com/users/ysgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysgit/subscriptions",
"organizations_url": "https://api.github.com/users/ysgit/orgs",
"repos_url": "https://api.github.com/users/ysgit/repos",
"events_url": "https://api.github.com/users/ysgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysgit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
@patrickvonplaten this fixes a bug I missed in #8071
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8660",
"html_url": "https://github.com/huggingface/transformers/pull/8660",
"diff_url": "https://github.com/huggingface/transformers/pull/8660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8660.patch",
"merged_at": 1606134510000
} |
https://api.github.com/repos/huggingface/transformers/issues/8659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8659/comments | https://api.github.com/repos/huggingface/transformers/issues/8659/events | https://github.com/huggingface/transformers/pull/8659 | 746,583,284 | MDExOlB1bGxSZXF1ZXN0NTIzOTUxODgz | 8,659 | Improve bert-japanese tokenizer handling | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | MEMBER | null | remove an inelegant string-testing hack in favor of the tooling we now have (support for `config.tokenizer_class`, and versioning of model files in the hub):
see commits enabling this on huggingface.co side:
- https://huggingface.co/yosuke/bert-base-japanese-char/commit/e8365f5c923b98b8cfbf258cfeb14ac536477a31
- https://huggingface.co/daigo/bert-base-japanese-sentiment/commit/88b611269eb04ce60afa4d448b27ffee0a48f5a0
- https://huggingface.co/bandainamco-mirai/distilbert-base-japanese/commit/f411ce0e53839adab9e39187ef179e3b5c836f7c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8659",
"html_url": "https://github.com/huggingface/transformers/pull/8659",
"diff_url": "https://github.com/huggingface/transformers/pull/8659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8659.patch",
"merged_at": 1606148102000
} |
https://api.github.com/repos/huggingface/transformers/issues/8658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8658/comments | https://api.github.com/repos/huggingface/transformers/issues/8658/events | https://github.com/huggingface/transformers/pull/8658 | 746,570,117 | MDExOlB1bGxSZXF1ZXN0NTIzOTQwNDYy | 8,658 | ConvBERT | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,651 | 1,613 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8658/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8658",
"html_url": "https://github.com/huggingface/transformers/pull/8658",
"diff_url": "https://github.com/huggingface/transformers/pull/8658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8658.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8657/comments | https://api.github.com/repos/huggingface/transformers/issues/8657/events | https://github.com/huggingface/transformers/pull/8657 | 746,552,353 | MDExOlB1bGxSZXF1ZXN0NTIzOTI1NjMw | 8,657 | Fix embeddings resizing in TF models | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @sgugger for your useful comments. I was thinking the same about `get_output_embeddings` but I didn't want to change to much things in same time.\r\n\r\nI like very much the solution you proposed and I'm totally fine with it!",
"@sgugger I have reworked the resizing for the bias and applied it on BERT at first for testing. Are you agree with this new way to do? If yes, I will do the same for the other models.",
"@sgugger @patrickvonplaten @LysandreJik This PR takes care of resizing all the bias, and if we start to change how the embeddings are resized + modify the generation, I think it would be a bit too much and out of the scope of this PR. Then, what I propose is to keep how it was at the beginning in `generation_tf_utils.py` and the `self.get_output_embeddings` methods and move this discussion on another PR. In this another PR I would like as well to fully review how the resizing is done, because the number of line of codes can be largely reduced and simplified. What do you think?",
"It would be awesome if we can keep the `get_output_embeddings()` method and leave `generate()` as it is and only focus on the resizing problem here. I'm 100% on board with fixing the resizing problem and it'd be awesome to do this orthogonally to `get_output_embeddings()`. \r\n\r\nA couple of reasons why I would like to keep `get_output_embeddings()` (I can copy this to the new PR as well):\r\n1) Consistency with PyTorch. In PyTorch `get_output_embeddings()` is even more integrated with other functionalities (like weight tying) and I think we should stay consistent in TF and PT\r\n2) `get_output_embeddings()` is an important function IMO to quickly get the correct logit matrix. Without this function it's not at all always obvious how to get the output embeddings for some models (especially EncoderDecoder, RAG, ...). A unified API for all models is of great help here IMO and I use it a lot actually\r\n3) Don't want to tie the capability of a model to `generate()` with the `MODEL_FOR_....` classes - this is inconsistent with PyTorch and unnecessarily creates a dependency IMO.\r\n\r\n",
"Thanks a lot @patrickvonplaten for sharing this! I think we should move this talk to a more suited place, and meanwhile I will revert that part of the changes.",
"I disagree with you on this @patrickvonplaten \r\n\r\n> 1. Consistency with PyTorch. In PyTorch get_output_embeddings() is even more integrated with other functionalities (like weight tying) and I think we should stay consistent in TF and PT\r\n\r\nThe weight tying cannot be done the same way in TF (and honestly the resizing on the PyTorch side is a bit hacky and very hard to understand it kind of goes against our principle of no magic code), so this alone is not an argument for keeping the `get_output_embeddings` method\r\n\r\n> 2. get_output_embeddings() is an important function IMO to quickly get the correct logit matrix. Without this function it's not at all always obvious how to get the output embeddings for some models (especially EncoderDecoder, RAG, ...). A unified API for all models is of great help here IMO and I use it a lot actually\r\n\r\nThe problem is that this function is always implemented to return the input embeddings, so the function as it is does not do anything more than `get_input_embeddings` while giving the user a false sense of what it returns. (Note that there is no model in TF apart from mobileBERT that has the capability of having different weights for the embeddings and the decoder, the weights are **always** tied).\r\n\r\n> 3. Don't want to tie the capability of a model to `generate()` with the `MODEL_FOR_....` classes - this is inconsistent with PyTorch and unnecessarily creates a dependency IMO.\r\n\r\nThe PyTorch side has no assert, so in that case, the consistent thing is to remove the assert entirely. \r\n\r\nI could be convinced to leave the `get_output_embeddings` method for mobileBERT only since it's the only model where it returns something useful, but it's dangerous to have it otherwise (unless we had a way to untie the weights, but that's for another PR!)",
"Ok we debriefed a bit with @patrickvonplaten to avoid spamming the PR. I had missed that some models are already using an output embeddings that is different from the input embeddings (most models are tied), like T5 or mT5. So those, like `mobileBERT`, will definitely need the `get_output_embeddings` method. Right now though, the resizing does not work for those models.\r\n\r\nIn the end, we both agree on keeping that method, add the `get_output_bias` method and the `resize_embeddings` should use the outputs of those two methods as well as `get_input_embeddings` in all the things it has to resize. To check if the input embeddings and output_embeddings are the same (and not resize them twice) we could use the `._handle_name` attribute of their weights (or something else if you have a better idea).\r\n\r\nDoes that all make sense?",
"Ok, I'm totally fine with this 👍 ! Nevertheless, there are still few things I don't get.\r\n\r\n> Right now though, the resizing does not work for those models.\r\n\r\nWhat do you mean by the resizing does not work? Which one? Do you have a more specific example?\r\n\r\n> To check if the input embeddings and output_embeddings are the same (and not resize them twice) we could use the ._handle_name attribute of their weights (or something else if you have a better idea).\r\n\r\nI don't understand this sentence, do you have an example? What do we have to check if the input/output embeddings are different if we get them with two separate methods (namely get_input_embeddings and get_output_embeddings).",
"The new T5 and mT5 models have an output embedding layer that is sometimes tied to the input embeddings (so same weights like BERT) and sometimes different. When it's different, it is not resized.\r\n\r\n> I don't understand this sentence, do you have an example? What do we have to check if the input/output embeddings are different if we get them with two separate methods (namely get_input_embeddings and get_output_embeddings).\r\n\r\nThe output embeddings are, very often, the same as the input embeddings (BERT situation) so in most instances `get_output_embeddings` will return the same thing as `get_input_embeddings` (which is why we initially decided to remove `get_output_embeddings` when discussing together). However, in some cases, it returns something different (mT5 and T5 as mentioned above, or mobileBERT) which is (with avoiding a breaking change) the main argument to keep this `get_output_embeddings` method. However, when taking its result in `resize_embeddings`, we should check if we get a different result from `get_input_embeddings`. Is that clearer?",
"Crystal clear!!! Thanks a lot for the details! I will proceed to the changes once the sprint is finished 👍 ",
"@patrickvonplaten I have put back the `get_output_embeddings`, does it seems ok for you now, or did I forget something?",
"Did I miss anything else?",
"@LysandreJik any idea why all the tests are failing with a timeout?",
"Yes, I can see why. Seeing with how to fix it, you'll probably have to rebase.",
"@jplu, please kindly rebase your branch - this is yet another edge case I haven't expected - fixed in master. Thank you!",
"Thanks @stas00 and @LysandreJik for having fixed the issue.\r\n\r\n@LysandreJik @sgugger @patrickvonplaten is there anything else I have to do in this PR? or it looks ok for you?",
"@patrickvonplaten any other comments? Or you are fine with the current version?",
"Not sure why it says @mfuntowicz force-pushed on this PR, but sadly it seems like the history was messed up a bit. Maybe it can be resolved by just `git reset --hard` to the commit before Morgan's force-push? ",
"Oh no @mfuntowicz killed all my open PRs 😄 ok trying to fix this :)",
"Ok should be good now!",
"LGTM! @LysandreJik just missing your approval. The Flax tests do not pass and I don't know why :("
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Currently, when the embeddings are resized, the biases are not resized at the same time. In TF there is no explicit link between the decoder weights and the biases in a dense layer, unlike in PT. This PR fixes this issue by resizing the biases at the same time, even though I don't know if this is the best solution. @LysandreJik @sgugger what do you think? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8657/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8657",
"html_url": "https://github.com/huggingface/transformers/pull/8657",
"diff_url": "https://github.com/huggingface/transformers/pull/8657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8657.patch",
"merged_at": 1607918725000
} |
https://api.github.com/repos/huggingface/transformers/issues/8656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8656/comments | https://api.github.com/repos/huggingface/transformers/issues/8656/events | https://github.com/huggingface/transformers/issues/8656 | 746,552,206 | MDU6SXNzdWU3NDY1NTIyMDY= | 8,656 | Return output probabilities with Generate function | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Duplicate of https://github.com/huggingface/transformers/issues/7654 . But yes, it seems like many people are asking for this feature and it should be quite straight-forward to implement it... -> feel free to give a shot :-) ",
"Hi. When can we expect this to be released? 😊",
"Next week Tuesday or Wednesday :-) ",
"Hey! Can anyone point me to the API/code location of this feature? Sorry if I have missed something. :) Thank you!",
"@moqingyan https://huggingface.co/transformers/internal/generation_utils.html",
"> @moqingyan https://huggingface.co/transformers/internal/generation_utils.html\r\n\r\nThank you!\r\n\r\nMy problem was: I set the configuration `output_scores` to `True` in the `generate` function and failed obtain the scores in the returned results. After I struggled in the libraries for hours, I finally figured out I also need to set `return_dict_in_generate` to `True` to obtain the attention values from the `generate` function. :) \r\n\r\nI think this behavior is non-intuitive, as I have specified I need the scores in the output already. But anyway I figured this out. Hope this comment is helpful to anyone who runs into this issue. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,621 | 1,621 | NONE | null | # 🚀 Feature request
Output the token probabilities along with the tokens when generating sequence.
## Motivation
For understanding model confidence, this is quite useful.
Also, for abstractive QA with long contexts, one needs to use doc-strides to take into account the contexts & then choose the best answer according to the probability of the generated text.
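A minimal workaround sketch in the meantime (an illustration only: it assumes a causal LM such as `gpt2`, greedy decoding, and an arbitrary length budget, none of which are part of this request):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
token_probs = []
with torch.no_grad():
    for _ in range(5):  # illustrative generation budget
        logits = model(input_ids, return_dict=True).logits[:, -1, :]  # next-token logits
        probs = torch.softmax(logits, dim=-1)
        next_id = int(probs.argmax(dim=-1))            # greedy choice
        token_probs.append(probs[0, next_id].item())   # probability of the chosen token
        input_ids = torch.cat([input_ids, torch.tensor([[next_id]])], dim=-1)

print(tokenizer.decode(input_ids[0]), token_probs)
```

The same bookkeeping extends to sampling by recording the probability of whichever token is actually drawn at each step.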
## Your contribution
I can try submitting a PR for non-beam decoding, but guidance would be appreciated.
Also, are there any existing solutions to this issue? If so, what & where? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8656/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8656/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8655/comments | https://api.github.com/repos/huggingface/transformers/issues/8655/events | https://github.com/huggingface/transformers/pull/8655 | 746,429,794 | MDExOlB1bGxSZXF1ZXN0NTIzODI1MTQ4 | 8,655 | [model card] : fix Geotrend/bert-base-15lang-cased | {
"login": "amineabdaoui",
"id": 17952908,
"node_id": "MDQ6VXNlcjE3OTUyOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amineabdaoui",
"html_url": "https://github.com/amineabdaoui",
"followers_url": "https://api.github.com/users/amineabdaoui/followers",
"following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions",
"organizations_url": "https://api.github.com/users/amineabdaoui/orgs",
"repos_url": "https://api.github.com/users/amineabdaoui/repos",
"events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/amineabdaoui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"we use marked.js as a markdown parser so rendering should be pretty close to GitHub in general but there can always be some small inconsistencies"
] | 1,605 | 1,605 | 1,605 | NONE | null | @julien-c it's me again.
The table in [Geotrend/bert-base-15lang-cased](https://huggingface.co/Geotrend/bert-base-15lang-cased) is badly formatted even though it looks good on [GitHub](https://github.com/huggingface/transformers/blob/master/model_cards/Geotrend/bert-base-15lang-cased/README.md).
I guess I have to add a **double line break**.
Thanks !
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8655",
"html_url": "https://github.com/huggingface/transformers/pull/8655",
"diff_url": "https://github.com/huggingface/transformers/pull/8655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8655.patch",
"merged_at": 1605782462000
} |
https://api.github.com/repos/huggingface/transformers/issues/8654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8654/comments | https://api.github.com/repos/huggingface/transformers/issues/8654/events | https://github.com/huggingface/transformers/issues/8654 | 746,426,531 | MDU6SXNzdWU3NDY0MjY1MzE= | 8,654 | Error in NER examples, run.sh | {
"login": "pradeepkr12",
"id": 1430530,
"node_id": "MDQ6VXNlcjE0MzA1MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1430530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pradeepkr12",
"html_url": "https://github.com/pradeepkr12",
"followers_url": "https://api.github.com/users/pradeepkr12/followers",
"following_url": "https://api.github.com/users/pradeepkr12/following{/other_user}",
"gists_url": "https://api.github.com/users/pradeepkr12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pradeepkr12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pradeepkr12/subscriptions",
"organizations_url": "https://api.github.com/users/pradeepkr12/orgs",
"repos_url": "https://api.github.com/users/pradeepkr12/repos",
"events_url": "https://api.github.com/users/pradeepkr12/events{/privacy}",
"received_events_url": "https://api.github.com/users/pradeepkr12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Can confirm that this also appears on latest master (0a80959bddd5da08742d22dca07e0facf0b4cd11)",
"Related to: #8212",
"Yes. \r\nThanks, I managed to install py 3.8 in Colab and ran it successfully."
] | 1,605 | 1,605 | 1,605 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@stefan-it
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
examples/token-classification/run.sh
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
NER with conll2003 dataset
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. !sh examples/token-classification/run.sh
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Error traceback
Traceback (most recent call last):
File "run_ner.py", line 383, in <module>
main()
File "run_ner.py", line 285, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map
update_data=update_data,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 367, in dumps
dump(obj, file)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/usr/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1447, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1178, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1374, in save_type
obj.__bases__, _dict), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/usr/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
## Expected behavior
It should train and evaluate, and report accuracy details.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8654/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8654/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8653/comments | https://api.github.com/repos/huggingface/transformers/issues/8653/events | https://github.com/huggingface/transformers/pull/8653 | 746,410,150 | MDExOlB1bGxSZXF1ZXN0NTIzODA5NzAx | 8,653 | Fix missing return_dict in RAG example to use a custom knowledge source | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq \r\n\r\nWhat the version of transformers you have used in this PR? ",
"Hi !\r\nThe one on the master branch",
"> Hi !\r\n> The one on the master branch\r\n\r\nSo you have installed from sources, right **(Version: 4.0.0.dev0)**? I tried to execute **use_own_knowledge_dataset.py** with your [previous PR](https://github.com/huggingface/transformers/pull/8585). But I got the following error. Seems like **question_enc_outputs** is not a dict but just the tensor. \r\n\r\n`/transformers/src/transformers/models/rag/modeling_rag.py\", line 628, in forward\r\n question_enc_hidden_states = question_enc_outputs.hidden_states\r\nAttributeError: 'tuple' object has no attribute 'hidden_states'`\r\n\r\n\r\n",
"Thanks for letting me know ! I'll fix that as well :) ",
"Perfect. Btw it works perfectly for version 3.4.0.\n\nOn Thu, Nov 19, 2020, 23:20 Quentin Lhoest <[email protected]> wrote:\n\n> Thanks for letting me know ! I'll fix that as well :)\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8653#issuecomment-730274786>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGQVTDUZUCDJWDMROG3SQTWQBANCNFSM4T3EZTTA>\n> .\n>\n"
] | 1,605 | 1,605 | 1,605 | MEMBER | null | We made some changes to the output of models but didn't change the return_dict parameter of this RAG example script.
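For context, a minimal illustration of what the flag changes (a generic sketch with `bert-base-uncased`, not the RAG example itself):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("why does the sky appear blue?", return_tensors="pt")
with torch.no_grad():
    tuple_out = model(**inputs, return_dict=False)  # plain tuple: positional access only
    dict_out = model(**inputs, return_dict=True)    # ModelOutput: named attribute access

print(type(tuple_out), type(dict_out))
print(torch.equal(tuple_out[0], dict_out.last_hidden_state))  # same tensor either way
```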
It works as expected now | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8653",
"html_url": "https://github.com/huggingface/transformers/pull/8653",
"diff_url": "https://github.com/huggingface/transformers/pull/8653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8653.patch",
"merged_at": 1605795438000
} |
https://api.github.com/repos/huggingface/transformers/issues/8652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8652/comments | https://api.github.com/repos/huggingface/transformers/issues/8652/events | https://github.com/huggingface/transformers/issues/8652 | 746,398,603 | MDU6SXNzdWU3NDYzOTg2MDM= | 8,652 | WNLI benchmark results clarification | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Wondering the same! Is it a typo? Since they report the exact same number for tiny, mini, small & medium. However, one would not expect that to be higher than [bert-base-cased that recieved 45.07%](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | The BERT paper mentions that the accuracy for WNLI is 62.3% [BERT Repository](https://github.com/google-research/bert) but the model card on HuggingFace reports the WNLI accuracy as 45.07%. Is there any particular reason for the big gap between the 2 models? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8652/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8652/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8651/comments | https://api.github.com/repos/huggingface/transformers/issues/8651/events | https://github.com/huggingface/transformers/issues/8651 | 746,394,975 | MDU6SXNzdWU3NDYzOTQ5NzU= | 8,651 | RAG: OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer' | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you try installing transformers from master?",
"Thanks for working perfectly when loading from sources."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Hi I am trying to run [use_own_knowledge_dataset.py](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) with **Transformers Version: 3.5.1** (from your latest [PR](https://github.com/huggingface/transformers/pull/8585)). But it gives the following error.
```
OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'. Make sure that:
- 'facebook/rag-sequence-nq/question_encoder_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'facebook/rag-sequence-nq/question_encoder_tokenizer' is the correct path to a directory containing relevant tokenizer files
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8651/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8650/comments | https://api.github.com/repos/huggingface/transformers/issues/8650/events | https://github.com/huggingface/transformers/issues/8650 | 746,335,984 | MDU6SXNzdWU3NDYzMzU5ODQ= | 8,650 | Why use 'BertLayerNorm' instead of torch.nn.LayerNorm ? | {
"login": "daydayfun",
"id": 39835967,
"node_id": "MDQ6VXNlcjM5ODM1OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39835967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daydayfun",
"html_url": "https://github.com/daydayfun",
"followers_url": "https://api.github.com/users/daydayfun/followers",
"following_url": "https://api.github.com/users/daydayfun/following{/other_user}",
"gists_url": "https://api.github.com/users/daydayfun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daydayfun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daydayfun/subscriptions",
"organizations_url": "https://api.github.com/users/daydayfun/orgs",
"repos_url": "https://api.github.com/users/daydayfun/repos",
"events_url": "https://api.github.com/users/daydayfun/events{/privacy}",
"received_events_url": "https://api.github.com/users/daydayfun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"There once was a difference, but there is none anymore. I believe the `BertLayerNorm` was removed and is not available anymore in recent versions.",
"I have done some test, \r\nself.assertTensorsEqual(\r\nout_BertLayerNorm, out_mlu_nativeLayerNorm, 0, use_RAE=True)\r\nand the diff is 2.8356e-07"
] | 1,605 | 1,605 | 1,605 | NONE | null | # 🌟 New model addition
What's the difference between 'BertLayerNorm' and torch.nn.LayerNorm
## Model description
1.pytorch torch.nn.LayerNorm
https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html?highlight=layernorm#torch.nn.LayerNorm
2.modeling.py
class BertLayerNorm(Module):
def __init__(self, hidden_size, eps=1e-12):
super(BertLayerNorm, self).__init__()
self.shape = torch.Size((hidden_size,))
self.eps = eps
self.weight = nn.Parameter(torch.ones(hidden_size))
self.bias = nn.Parameter(torch.zeros(hidden_size))
self.apex_enabled = APEX_IS_AVAILABLE
@torch.jit.unused
def fused_layer_norm(self, x):
return FusedLayerNormAffineFunction.apply(
x, self.weight, self.bias, self.shape, self.eps)
def forward(self, x):
if self.apex_enabled and not torch.jit.is_scripting():
x = self.fused_layer_norm(x)
else:
u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
x = (x - u) / torch.sqrt(s + self.eps)
x = self.weight * x + self.bias
return x
<!-- Important information -->
It seems like torch.nn.LayerNorm implements the same function as the following ops in BertLayerNorm:
u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
x = (x - u) / torch.sqrt(s + self.eps)
x = self.weight * x + self.bias
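A quick numerical check of that claim (an illustrative sketch; a freshly constructed `nn.LayerNorm` has `weight=1` and `bias=0`, so it lines up with the manual ops above):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 4, 768)
ln = nn.LayerNorm(768, eps=1e-12)

# manual computation mirroring BertLayerNorm's non-fused branch
u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
manual = ln.weight * ((x - u) / torch.sqrt(s + ln.eps)) + ln.bias

print(torch.allclose(manual, ln(x), atol=1e-6))  # True up to tiny floating-point error
```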
Why don't we use torch.nn.LayerNorm?
Thanks a lot for answering my question
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8650/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8649/comments | https://api.github.com/repos/huggingface/transformers/issues/8649/events | https://github.com/huggingface/transformers/issues/8649 | 746,300,462 | MDU6SXNzdWU3NDYzMDA0NjI= | 8,649 | from_pretrained()'s load() blocks forever in subprocess | {
"login": "levon003",
"id": 33158587,
"node_id": "MDQ6VXNlcjMzMTU4NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/33158587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/levon003",
"html_url": "https://github.com/levon003",
"followers_url": "https://api.github.com/users/levon003/followers",
"following_url": "https://api.github.com/users/levon003/following{/other_user}",
"gists_url": "https://api.github.com/users/levon003/gists{/gist_id}",
"starred_url": "https://api.github.com/users/levon003/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/levon003/subscriptions",
"organizations_url": "https://api.github.com/users/levon003/orgs",
"repos_url": "https://api.github.com/users/levon003/repos",
"events_url": "https://api.github.com/users/levon003/events{/privacy}",
"received_events_url": "https://api.github.com/users/levon003/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you've identified the issue to be coming from `nn.Module._load_from_state_dict`, then I guess this is more of a PyTorch issue than a `transformers` one? Do you have an idea what might cause this hang with that method?",
"Well, I don't know enough about torch state_dict behavior to understand why `transformers` would be directly calling the underscored \"internal use\" method `_load_from_state_dict` in the first place, but it strikes me that transformers is making assumptions about the functioning of this internal method that may not hold in practice; I don't see anything obvious in `_load_from_state_dict` that would cause it to lock up under these (or any) conditions, but we may be violating a usage assumption (e.g. providing a bad pre-load hook). ",
"Oh, I see. Looking at the `torch.load_state_dict` however, it doesn't seem to be doing something very differently to what we do. Have you managed to load several models using `torch.load()` with the same multiprocessing approach you have used?",
"Well, a fair test would be to load the _same_ (roBERTa-base) model, but I'm not sure how to write the code to do that... that's why I'm using `transformers`! But it's easy to verify that there's no problem with multi-process loading of PyTorch models:\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nimport multiprocessing as mp\r\n\r\nUSE_STATE_DICT = True\r\n\r\nclass SimpleNet(nn.Module):\r\n def __init__(self):\r\n super(SimpleNet, self).__init__()\r\n self.fc1 = nn.Linear(768, 1)\r\n \r\n def forward(self, x):\r\n x = self.fc1(x)\r\n return x\r\n\r\n\r\ndef save_model():\r\n model = SimpleNet()\r\n torch.save(model, './full_model.pt')\r\n torch.save(model.state_dict(), './model_state_dict.pt')\r\n\r\ndef load_model_in_subprocess():\r\n print(\"Started subprocess.\")\r\n if USE_STATE_DICT:\r\n model = SimpleNet()\r\n model.load_state_dict(torch.load('./model_state_dict.pt'))\r\n else:\r\n model = torch.load('./full_model.pt')\r\n print(f\"Model loaded in subprocess: {model}\")\r\n\r\ndef main():\r\n save_model()\r\n print(\"Saved model.\")\r\n \r\n if USE_STATE_DICT:\r\n model = SimpleNet()\r\n model.load_state_dict(torch.load('./model_state_dict.pt'))\r\n else:\r\n model = torch.load('./full_model.pt')\r\n print(f\"Model loaded in main process: {model}\")\r\n\r\n p = mp.Process(target=load_model_in_subprocess, daemon=True)\r\n p.start()\r\n p.join()\r\n print(\"Main thread terminating.\")\r\n \r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nThis script terminates fine when loading from state dict or a pickled model file.",
"Adding some debug prints to `transformers` load in modeling_utils.py, I can confirm that it is the call to `nn.Module._load_from_state_dict` when:\r\n\r\n```\r\nprefix = roberta.embeddings.word_embeddings.\r\nlocal_metadata = {'version': 1}\r\nmissing_keys = ['roberta.embeddings.position_ids']\r\nunexpected_keys = []\r\nerror_msgs = []\r\nstrict = True\r\n```\r\n\r\nThe keys in the state_dict are:\r\n```\r\nroberta.embeddings.word_embeddings.weight\r\nroberta.embeddings.position_embeddings.weight\r\nroberta.embeddings.token_type_embeddings.weight\r\nroberta.embeddings.LayerNorm.weight\r\nroberta.embeddings.LayerNorm.bias\r\n<snipping all of the individual layer keys e.g. roberta.encoder.layer.0.attention.self.query.weight>\r\nroberta.pooler.dense.weight\r\nroberta.pooler.dense.bias\r\nlm_head.bias\r\nlm_head.dense.weight\r\nlm_head.dense.bias\r\nlm_head.layer_norm.weight\r\nlm_head.layer_norm.bias\r\nlm_head.decoder.weight\r\n```\r\n\r\nThe shape of the `roberta.embeddings.word_embeddings.weight` tensor is [50265,768].\r\n\r\n(note: same blocking behavior when loading bert-base-uncased)\r\n\r\n",
"Okay, I did more investigation, and the problem is a blocking call to `Tensor.copy_` that copies the Parameter in the state_dict into the Parameter in the Module (in this case, the `Embedding(50265, 768, padding_idx=1)` parameter in the roBERTa model). \r\n\r\nThe [documentation](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.copy_) indicates a non_blocking parameter that can be used when copying between CPU and GPU, but we are copying between CPU and CPU. I confirmed that non_blocking does nothing, and that the `device` of both Parameters is `cpu`. \r\n\r\nThat's where I'm going to stop pursuing this bug. I don't know the structure of the C++ code, but it seems likely that this is an issue with the PyTorch CPU copy implementation and the idiosyncrasies of the specific OS I'm using. If this problem can be reproduced on others systems it may be worth investigating further, but it does seem like the fault probably lies with `PyTorch` and not with `transformers`. Hopefully this affects only a small set of OSes.\r\n\r\n@LysandreJik, you may want to close this issue?",
"Thank you very much for your deep investigation of this issue. Unfortunately I don't see how we could change that on our front to make it work, so we'll close this for now.\r\n\r\nIf we get other reports of this we'll investigate further. ",
"Got the same issue in this environment : \r\n\r\n Platform: Linux clem-MacBookAir 5.13.0-40-generic #45~20.04.1-Ubuntu x86_64 x86_64 x86_64 GNU/Linux\r\n Python version: 3.9.7\r\n PyTorch version (GPU?): 1.11.0+cu102 (False)\r\n Tensorflow version (GPU?): not installed (NA)\r\n Using GPU in script?: No\r\n Using distributed or parallel set-up in script?: Yes",
"Experiencing the same issue as well with a `torchvision.models` which seems to be coming from `nn.Module._load_from_state_dict` running as subprocess on CPU, unsure why this has just started to happen.\r\n\r\nmoving the model to the GPU before loading works as a workaround\r\n```\r\nmodel.to(device)\r\nmodel.load_from_state_dict(ckpt)\r\n```\r\n\r\n"
] | 1,605 | 1,652 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-5.4.58 x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
Anyone familiar with the from_pretrained() code path. Perhaps @sgugger? Thank you!
## Information
[from_pretrained()'s load()](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L1004) blocks forever loading roberta-base, due specifically to the call to `nn.Module._load_from_state_dict` that would load the "embeddings.word_embeddings". Occurs when loading the model in both the first process and a subprocess started via `multiprocessing`.
I observe the same behavior when loading via keyword vs loading local files cached via `save_pretrained`.
Model I am using (Bert, XLNet ...): roberta-base
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts: see sample script below.
## To reproduce
Steps to reproduce the behavior:
```python
import torch
import transformers
import multiprocessing as mp
def load_model_in_subprocess():
print("Started subprocess.")
model2 = transformers.RobertaModel.from_pretrained('roberta-base')
print("Model loaded in subprocess.")
def main():
model1 = transformers.RobertaModel.from_pretrained('roberta-base')
print("Model loaded in main process.")
p = mp.Process(target=load_model_in_subprocess, daemon=True)
p.start()
p.join()
print("Main thread terminating.")
if __name__ == "__main__":
main()
```
Output:
```
Model loaded in main process.
Started subprocess.
<never terminates>
```
## Expected behavior
Model loads and is functional in both main process and subprocess.
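One mitigation that is commonly suggested for fork-related loading hangs (an assumption on my part, not a verified fix for this report) is to use the `spawn` start method so the child process starts a fresh interpreter instead of inheriting the parent's state:

```python
import multiprocessing as mp
import transformers

def load_model_in_subprocess():
    model = transformers.RobertaModel.from_pretrained('roberta-base')
    print("Model loaded in subprocess.")

if __name__ == "__main__":
    mp.set_start_method("spawn")  # avoid fork; the child re-imports everything from scratch
    p = mp.Process(target=load_model_in_subprocess, daemon=True)
    p.start()
    p.join()
```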
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8649/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8648/comments | https://api.github.com/repos/huggingface/transformers/issues/8648/events | https://github.com/huggingface/transformers/pull/8648 | 746,295,371 | MDExOlB1bGxSZXF1ZXN0NTIzNzE1NDU2 | 8,648 | Create README.md | {
"login": "bino282",
"id": 17800187,
"node_id": "MDQ6VXNlcjE3ODAwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bino282",
"html_url": "https://github.com/bino282",
"followers_url": "https://api.github.com/users/bino282/followers",
"following_url": "https://api.github.com/users/bino282/following{/other_user}",
"gists_url": "https://api.github.com/users/bino282/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bino282/subscriptions",
"organizations_url": "https://api.github.com/users/bino282/orgs",
"repos_url": "https://api.github.com/users/bino282/repos",
"events_url": "https://api.github.com/users/bino282/events{/privacy}",
"received_events_url": "https://api.github.com/users/bino282/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Closing this one as duplicate was already merged!\r\n\r\nFor context please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755"
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8648/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8648",
"html_url": "https://github.com/huggingface/transformers/pull/8648",
"diff_url": "https://github.com/huggingface/transformers/pull/8648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8648.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8647/comments | https://api.github.com/repos/huggingface/transformers/issues/8647/events | https://github.com/huggingface/transformers/issues/8647 | 746,218,743 | MDU6SXNzdWU3NDYyMTg3NDM= | 8,647 | How can get the input embeddings_output for BERT? | {
"login": "ppyu",
"id": 32732750,
"node_id": "MDQ6VXNlcjMyNzMyNzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/32732750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppyu",
"html_url": "https://github.com/ppyu",
"followers_url": "https://api.github.com/users/ppyu/followers",
"following_url": "https://api.github.com/users/ppyu/following{/other_user}",
"gists_url": "https://api.github.com/users/ppyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppyu/subscriptions",
"organizations_url": "https://api.github.com/users/ppyu/orgs",
"repos_url": "https://api.github.com/users/ppyu/repos",
"events_url": "https://api.github.com/users/ppyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,605 | 1,605 | 1,605 | NONE | null | How can I get the input `embeddings_output` for BERT? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8646/comments | https://api.github.com/repos/huggingface/transformers/issues/8646/events | https://github.com/huggingface/transformers/issues/8646 | 746,176,464 | MDU6SXNzdWU3NDYxNzY0NjQ= | 8,646 | CPM LM | {
"login": "AnShengqiang",
"id": 12854979,
"node_id": "MDQ6VXNlcjEyODU0OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/12854979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnShengqiang",
"html_url": "https://github.com/AnShengqiang",
"followers_url": "https://api.github.com/users/AnShengqiang/followers",
"following_url": "https://api.github.com/users/AnShengqiang/following{/other_user}",
"gists_url": "https://api.github.com/users/AnShengqiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnShengqiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnShengqiang/subscriptions",
"organizations_url": "https://api.github.com/users/AnShengqiang/orgs",
"repos_url": "https://api.github.com/users/AnShengqiang/repos",
"events_url": "https://api.github.com/users/AnShengqiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnShengqiang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@JetRunner might be interested in that!",
"Hi, I didn't notice this before I finished the translation to `Transformer` just now. May [this script](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb) help.\r\n\r\nBTW, I met some problems when uploading. #8662",
"> @JetRunner might be interested in that!\r\n\r\nYes I was working on it but it seems @mymusise has already worked it out!\r\n\r\n@mymusise I will assist you through the uploading process!",
"@mymusise I think the generated result from your repo is a little buggy here. Any idea why? \r\n```\r\n[{'generated_text': '你好 ▁ , ▁我 ▁是 ▁ 个 ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk>'}]\r\n```\r\nhttps://github.com/mymusise/CPM-TF2Transformer/blob/e5ea4799603f19ab7f92596f7ad7472198c505c6/transfor_CMP.ipynb#L881",
"|```\r\n|[{'generated_text': '你好 ▁ , ▁我 ▁是 ▁ 个 ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk>'}]\r\n|```\r\n\r\n@JetRunner Hi, because I use `BertTokenizer` here, didn't use the `bpe` method. \r\nAnd seems `GPT2Tokenizer` does not yet support other languages such as Chinese, the [byte_encoder](https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/gpt2/tokenization_gpt2.py#L246) here will encode other languages to unknown token. Any advice?",
"I see. It's not a big problem since we can now specify the tokenizer type in the configuration file. I can take care of those once you've uploaded the model file. Let's wait for @julien-c to solve the big file uploading problem first.",
"OK, thank you, guy.",
"Hi, the model has been uploaded, see: https://huggingface.co/mymusise/CPM-Third-Party",
"@mymusise Awesome news! Let me take care of the rest and I will keep you updated.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"😄 Is there any news? \r\nI have tested this before and the result is different from the official repo. \r\nHas this problem been solved? (I don't have a card now so I can't test it, sorry😢.)\r\nIf so, I will close this issue. @mymusise @JetRunner ",
"> I have tested this before and the result is different from the official repo.\r\n\r\nHi, there are a few problems with the old one. I've recreated the model card, please try the new one: [mymusise/CPM-GPT2](https://huggingface.co/mymusise/CPM-GPT2)\r\n\r\nand the FP16 version has been uploaded already: [mymusise/CPM-GPT2-FP16](https://huggingface.co/mymusise/CPM-GPT2-FP16)\r\n",
"Thanks, it works perfectly! 😄"
] | 1,605 | 1,614 | 1,614 | NONE | null | # 🌟 New model addition (2.6B params)
## Model description
CPM (Chinese Pre-Trained Language Models), which has 2.6B parameters, can be used for zero-shot, one-shot, and few-shot learning.
Code and model weights are available.
## Open source status
* [✅] the model implementation is available: [CPM-Generate pytorch](https://github.com/TsinghuaAI/CPM-Generate) [CPM-LM-TF2](https://github.com/qhduan/CPM-LM-TF2)
* [✅] the model weights are available: can be found [here](https://github.com/TsinghuaAI/CPM-Generate)
* [✅] who are the authors: (Research team of Beijing Zhiyuan Institute of artificial intelligence and Tsinghua University @ TsinghuaAI)
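For reference, a minimal, untested sketch of trying the converted checkpoint discussed in the comments above; the model id `mymusise/CPM-GPT2` comes from that thread, and it is only assumed here that the uploaded config resolves the right tokenizer so the pipeline can load it on its own.
```python
# Untested sketch; checkpoint id taken from the comment thread above, everything else
# (prompt, max_length, automatic tokenizer resolution) is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="mymusise/CPM-GPT2")
print(generator("你好,我是", max_length=30))
```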
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8645/comments | https://api.github.com/repos/huggingface/transformers/issues/8645/events | https://github.com/huggingface/transformers/pull/8645 | 746,170,769 | MDExOlB1bGxSZXF1ZXN0NTIzNjE0NzI5 | 8,645 | [core] implement support for run-time dependency version checking | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger, wrt your comment in your [announcement](https://discuss.huggingface.co/t/transformers-v4-0-0-announcement/1990/1)\r\n> => Resulting breaking change: some people will have to install sentencepiece explicitly while they didn’t have to before with the command pip install transformers[sentencepiece].\r\n\r\nAfter this PR, you can just `require_version(\"sentencepiece\", \"pip install transformers[sentencepiece]\")` in the code that needs it at run-time. You can expand the hint (second arg) as you wish to be self-explanatory.",
"@LysandreJik and @sgugger - it's ready to merge whenever you have a chance to review. Thank you.\r\n\r\nProbably it is best to merge post v4-release, in case I missed something.",
"Will do a final review tomorrow!"
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | As discussed at https://github.com/huggingface/transformers/pull/8073#issuecomment-729181330 this PR:
* [x] adds mechanics for run-time dependency version checks (module with fixed and unversioned too)
* [x] adds thorough tests (this is where it's the easiest to see how things work)
* [x] creates one source for all dependency versions in setup.py - need to rerun setup.py on its update to re-generate src/transformers/dependency_versions_table.py, which is then used by transformers
* [x] adds support and deploys python runtime version check
* [x] deploys runtime checks for versioned modules in setup.py's `install_requires` (i.e. must modules)
* [x] switches `examples/lightning_base.py` to a fatal-on-failure requirement check.
* [x] deploys the version lookup `setup.py`'s `extras` definitions and `install_requires`
* [x] adds new `Makefile` target `deps_table_update` that updates the dep table, and inserts it into `style/quality/fixup` targets so the sync shouldn't take too long if forgotten to be run explicitly
@sgugger, @LysandreJik, @patrickvonplaten, @thomwolf
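For illustration, a hypothetical sketch of what such a run-time check can look like; this is not the code added by this PR, and the helper's exact name, signature and error messages are assumptions (Python 3.8+ for `importlib.metadata`):
```python
# Hypothetical sketch only; not the implementation merged by this PR.
import importlib.metadata

from packaging import version


def require_version(requirement: str, hint: str = "") -> None:
    """Fail fast if an installed package does not satisfy e.g. "tokenizers>=0.9.4"."""
    pkg, _, wanted = requirement.partition(">=")
    try:
        got = importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        raise ImportError(f"{requirement} is required but {pkg} is not installed. {hint}")
    if wanted and version.parse(got) < version.parse(wanted):
        raise ImportError(f"{requirement} is required, but found {pkg}=={got}. {hint}")


require_version("sentencepiece", "pip install transformers[sentencepiece]")
```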
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8645/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8645",
"html_url": "https://github.com/huggingface/transformers/pull/8645",
"diff_url": "https://github.com/huggingface/transformers/pull/8645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8645.patch",
"merged_at": 1606242146000
} |
https://api.github.com/repos/huggingface/transformers/issues/8644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8644/comments | https://api.github.com/repos/huggingface/transformers/issues/8644/events | https://github.com/huggingface/transformers/pull/8644 | 746,156,845 | MDExOlB1bGxSZXF1ZXN0NTIzNjAzODg2 | 8,644 | Fix small typo | {
"login": "matthiaslmz",
"id": 19335932,
"node_id": "MDQ6VXNlcjE5MzM1OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/19335932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthiaslmz",
"html_url": "https://github.com/matthiaslmz",
"followers_url": "https://api.github.com/users/matthiaslmz/followers",
"following_url": "https://api.github.com/users/matthiaslmz/following{/other_user}",
"gists_url": "https://api.github.com/users/matthiaslmz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthiaslmz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthiaslmz/subscriptions",
"organizations_url": "https://api.github.com/users/matthiaslmz/orgs",
"repos_url": "https://api.github.com/users/matthiaslmz/repos",
"events_url": "https://api.github.com/users/matthiaslmz/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthiaslmz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten pls review & approve a this small fix of a typo in the XLNet section. Thank you! "
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Fixed a small typo on the XLNet and permutation language modelling section
@patrickvonplaten
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8644/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8644",
"html_url": "https://github.com/huggingface/transformers/pull/8644",
"diff_url": "https://github.com/huggingface/transformers/pull/8644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8644.patch",
"merged_at": 1605803052000
} |
https://api.github.com/repos/huggingface/transformers/issues/8643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8643/comments | https://api.github.com/repos/huggingface/transformers/issues/8643/events | https://github.com/huggingface/transformers/issues/8643 | 746,147,759 | MDU6SXNzdWU3NDYxNDc3NTk= | 8,643 | Model embedding size and tokenizer size mismatch; resizing embedding will cause CUDA assert error | {
"login": "huu4ontocord",
"id": 8900094,
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huu4ontocord",
"html_url": "https://github.com/huu4ontocord",
"followers_url": "https://api.github.com/users/huu4ontocord/followers",
"following_url": "https://api.github.com/users/huu4ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/huu4ontocord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huu4ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huu4ontocord/subscriptions",
"organizations_url": "https://api.github.com/users/huu4ontocord/orgs",
"repos_url": "https://api.github.com/users/huu4ontocord/repos",
"events_url": "https://api.github.com/users/huu4ontocord/events{/privacy}",
"received_events_url": "https://api.github.com/users/huu4ontocord/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @ontocord, I cannot reproduce your error on master...\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-base\")\r\nprint (len(tokenizer))\r\nmodel = AutoModel.from_pretrained(\"t5-base\")\r\nprint (model.shared)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.to('cuda')\r\n```\r\nworks fine for me.",
"I am able to correctly shorten the embedding matrix",
"@patrickvonplaten Thank you. It's also working now in my code too with latest version of transformer. Thanks for looking into this!"
] | 1,605 | 1,608 | 1,608 | NONE | null | ## Environment info
Google colab
### Who can help
T5: @patrickvonplaten
## Information
I'm noticing something strange with T5: the model embedding size and the tokenizer size do not match.
When I try to resize the model to have a smaller embedding, this crashes CUDA. These are probably two bugs - one for the size mismatch, and one for the crash caused by shortening the embedding.
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("t5-base")
print (len(tokenizer))
model = AutoModel.from_pretrained("t5-base")
print (model.shared)
model.resize_token_embeddings(len(tokenizer))
model.to('cuda')
```
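As an added sanity check (not part of the original run), the size gap itself can be printed before touching the GPU; on CPU the resize is expected to succeed.
```python
# Added check, not from the original report: compare tokenizer size vs. embedding rows.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModel.from_pretrained("t5-base")

print(len(tokenizer))                               # 32100
print(model.get_input_embeddings().num_embeddings)  # 32128 (checkpoint ships extra rows)
model.resize_token_embeddings(len(tokenizer))
print(model.get_input_embeddings().num_embeddings)  # 32100
```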
## Expected behavior
Expected behaviour is regular loading of the model into cuda.
What I got instead was:
32100
Some weights of T5Model were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Embedding(32128, 768)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-145ba8d0b52c> in <module>()
5 print (model.shared)
6 model.resize_token_embeddings(len(tokenizer))
----> 7 model.to('cuda')
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in convert(t)
608 if convert_to_format is not None and t.dim() == 4:
609 return t.to(device, dtype if t.is_floating_point() else None, non_blocking, memory_format=convert_to_format)
--> 610 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
611
612 return self._apply(convert)
RuntimeError: CUDA error: device-side assert triggered | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8642/comments | https://api.github.com/repos/huggingface/transformers/issues/8642/events | https://github.com/huggingface/transformers/issues/8642 | 746,136,789 | MDU6SXNzdWU3NDYxMzY3ODk= | 8,642 | Setting Evaluation Strategy in the TrainingArgs does not print validation metrics | {
"login": "TarasPriadka",
"id": 14134797,
"node_id": "MDQ6VXNlcjE0MTM0Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/14134797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TarasPriadka",
"html_url": "https://github.com/TarasPriadka",
"followers_url": "https://api.github.com/users/TarasPriadka/followers",
"following_url": "https://api.github.com/users/TarasPriadka/following{/other_user}",
"gists_url": "https://api.github.com/users/TarasPriadka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TarasPriadka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TarasPriadka/subscriptions",
"organizations_url": "https://api.github.com/users/TarasPriadka/orgs",
"repos_url": "https://api.github.com/users/TarasPriadka/repos",
"events_url": "https://api.github.com/users/TarasPriadka/events{/privacy}",
"received_events_url": "https://api.github.com/users/TarasPriadka/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The steps you indicate to reproduce are incomplete, there is little we can do without knowing which script you're running and having access to the full code. For instance\r\n```\r\npython examples/text-classification/run_glue.py \\\r\n --model_name_or_path bert-base-cased \\\r\n --task_name mrpc \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size 32 \\\r\n --learning_rate 4e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir ~/tmp/mnli/ \\\r\n --overwrite_output_dir \\\r\n --save_total_limit 5 \\\r\n --evaluation_strategy steps \\\r\n --eval_steps 128 \\\r\n --logging_steps 128\r\n```\r\ndoes print the metrics every 128 steps.\r\n\r\nMy guess is this is all because you're not creating a `training_args` using the `TrainingArguments` init, thus `self.training_args.evaluation_strategy` is improperly set. Try using \r\n```\r\nself.training_args.evaluation_strategy = EvaluationStrategy.STEPS\r\n```\r\n(but you really should be using the `TrainingArguments` init that has more checks and properly sets those arguments).",
"@sgugger After trying to set the strategy in the constructor, it works as intended! Thank you for a quick solve. I was setting some of the `Arg` object's field without the constructor, so I was getting the unexpected behaviour.",
"So be sure to use that then. The init wraps the string in the enum for you, that's why you don't need to do it when using it.\r\nClosing this issue since you're saying it's solved, don't hesitate to reopen if needed."
] | 1,605 | 1,605 | 1,605 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-5.4.0-1029-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Set training args to
```python
self.training_args.do_eval = True
self.training_args.evaluate_during_training = True
self.training_args.evaluation_strategy = "steps"
self.training_args.eval_steps=128
self.training_args.logging_steps=128
```
Then pass to Trainer
```
trainer = Trainer(
model=model,
args=self.training_args,
train_dataset=training_set,
eval_dataset=eval_set,
compute_metrics=self.compute_metrics,
)
```
## Expected behavior
Validation metrics are printed out on every 128th step. Right now, only logging steps appear in logs on the console. I
looked through forums and others don't seem to have this issue. Any help on resolving this would be hugely appreciated
since I can't train without validation metrics.
It looked like evaluate_during_training isn't required, but it won't work with or without it set.
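For reference, a minimal sketch of the constructor-based setup suggested in the comments above; the output directory is a placeholder, and the resulting object is then passed to `Trainer` as `args=training_args` exactly like in the snippet above.
```python
# Sketch of the suggested fix: let TrainingArguments.__init__ validate and wrap the
# values instead of assigning attributes after construction.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",        # placeholder
    do_eval=True,
    evaluation_strategy="steps",   # wrapped into the EvaluationStrategy enum by __init__
    eval_steps=128,
    logging_steps=128,
)
```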
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8642/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8641/comments | https://api.github.com/repos/huggingface/transformers/issues/8641/events | https://github.com/huggingface/transformers/issues/8641 | 746,112,110 | MDU6SXNzdWU3NDYxMTIxMTA= | 8,641 | Bi-Directional Reformer text multi class classification | {
"login": "Zozos972",
"id": 53522341,
"node_id": "MDQ6VXNlcjUzNTIyMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/53522341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zozos972",
"html_url": "https://github.com/Zozos972",
"followers_url": "https://api.github.com/users/Zozos972/followers",
"following_url": "https://api.github.com/users/Zozos972/following{/other_user}",
"gists_url": "https://api.github.com/users/Zozos972/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zozos972/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zozos972/subscriptions",
"organizations_url": "https://api.github.com/users/Zozos972/orgs",
"repos_url": "https://api.github.com/users/Zozos972/repos",
"events_url": "https://api.github.com/users/Zozos972/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zozos972/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you taken a look at the [RobertaForSequenceClassification](https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification) documentation page?\r\n\r\nYou might also be interested in the [sequence classification task](https://huggingface.co/transformers/task_summary.html#sequence-classification) documentation page.\r\nWe aim to have the exact same API for all models, so while this example showcases BERT with the auto models, you can use a RoBERTa architecture instead.\r\n\r\n\r\nYou can see the [ReformerForSequenceClassification](https://huggingface.co/transformers/model_doc/reformer.html#reformerforsequenceclassification) documentation page if you're particularly interested in Reformer.\r\n",
"Excellent! Thanks,\r\nZ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
I am fairly new to transformers, but I have good experience in ML. I have tried to start by building a RoBERTa multi-class classification model, but the docs and examples are not clear.
Can you point me to the set of literature that would get me going with RoBERTa?
Afterwards, I would love to support the effort of building the Bi-Directional Reformer text multi-class classification model. Or, if you send me the docs, I can jump directly to the Reformer model.
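For reference, a minimal sketch of the kind of multi-class setup the `RobertaForSequenceClassification` docs cover; the checkpoint, the number of labels (4) and the example text are placeholders, and `ReformerForSequenceClassification` follows the same pattern.
```python
# Illustrative sketch; checkpoint, num_labels=4 and the input text are placeholders.
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=4)

inputs = tokenizer("An example sentence to classify", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([2]), return_dict=True)
loss, logits = outputs.loss, outputs.logits
```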
Many thanks,
Z | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8641/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8640/comments | https://api.github.com/repos/huggingface/transformers/issues/8640/events | https://github.com/huggingface/transformers/pull/8640 | 746,106,495 | MDExOlB1bGxSZXF1ZXN0NTIzNTYyNTE0 | 8,640 | Bump notebook from 6.1.4 to 6.1.5 in /examples/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,651 | 1,614 | CONTRIBUTOR | null | Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/configuring-github-dependabot-security-updates)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8640",
"html_url": "https://github.com/huggingface/transformers/pull/8640",
"diff_url": "https://github.com/huggingface/transformers/pull/8640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8640.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8639/comments | https://api.github.com/repos/huggingface/transformers/issues/8639/events | https://github.com/huggingface/transformers/pull/8639 | 746,092,901 | MDExOlB1bGxSZXF1ZXN0NTIzNTUwMDky | 8,639 | grammar | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the pull request template.
<!-- Remove if not applicable -->
## Before submitting
- [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8639",
"html_url": "https://github.com/huggingface/transformers/pull/8639",
"diff_url": "https://github.com/huggingface/transformers/pull/8639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8639.patch",
"merged_at": 1605740665000
} |
https://api.github.com/repos/huggingface/transformers/issues/8638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8638/comments | https://api.github.com/repos/huggingface/transformers/issues/8638/events | https://github.com/huggingface/transformers/issues/8638 | 746,092,238 | MDU6SXNzdWU3NDYwOTIyMzg= | 8,638 | AttributeError: module 'typing' has no attribute '_ClassVar' | {
"login": "HatterTheMad",
"id": 41959296,
"node_id": "MDQ6VXNlcjQxOTU5Mjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/41959296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HatterTheMad",
"html_url": "https://github.com/HatterTheMad",
"followers_url": "https://api.github.com/users/HatterTheMad/followers",
"following_url": "https://api.github.com/users/HatterTheMad/following{/other_user}",
"gists_url": "https://api.github.com/users/HatterTheMad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HatterTheMad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HatterTheMad/subscriptions",
"organizations_url": "https://api.github.com/users/HatterTheMad/orgs",
"repos_url": "https://api.github.com/users/HatterTheMad/repos",
"events_url": "https://api.github.com/users/HatterTheMad/events{/privacy}",
"received_events_url": "https://api.github.com/users/HatterTheMad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is weird! Looking at the stacktrace, I have a few questions:\r\n\r\n- Are you running a file in `access/main.py`?\r\n- If yes, is there a `dataclasses.py` file in that `access` folder?\r\n\r\nIt seems far-fetched but if that's the case, it might be possible that this file is interfering with the `dataclasses` module.",
"It seems related to an incompatibility of python > 3.6 with package dataclasses, as explained here:\r\n\r\n https://github.com/google/flax/pull/270",
"Yes, but we only install `dataclasses` on Python versions that are inferior to 3.7: https://github.com/huggingface/transformers/blob/master/setup.py#L137",
"I encountered the same problem with the following setup:\r\n\r\n- transformers version: 3.5.1\r\n- Platform: Linux\r\n- Python version: 3.8\r\n- PyTorch version (GPU?): 1.7.0+cpu\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nLocal execution works fine, but when running the code on Google App Engine (Standard Environment), it fails with error `AttributeError: module 'typing' has no attribute '_ClassVar'`. There is no file called `dataclasses.py` anywhere in the project,\r\n\r\nStacktrace:\r\n```\r\nFile \"/srv/application/stance_pred/bert_inference.py\", line 8, in <module> from transformers import DistilBertForSequenceClassification, DistilBertTokenizer, DistilBertConfig\r\nFile \"/layers/google.python.pip/pip/transformers/__init__.py\", line 22, in <module> from .integrations import ( # isort:skip\r\nFile \"/layers/google.python.pip/pip/transformers/integrations.py\", line 82, in <module> from .trainer_callback import TrainerCallback # noqa: E402\r\nFile \"/layers/google.python.pip/pip/transformers/trainer_callback.py\", line 27, in <module> from .training_args import TrainingArguments\r\nFile \"/layers/google.python.pip/pip/transformers/training_args.py\", line 36, in <module> class TrainingArguments:\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 958, in dataclass return wrap(_cls)\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 950, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen)\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 800, in _process_class cls_fields = [_get_field(cls, name, type)\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 800, in <listcomp> cls_fields = [_get_field(cls, name, type)\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 659, in _get_field if (_is_classvar(a_type, typing)\r\nFile \"/layers/google.python.pip/pip/dataclasses.py\", line 550, in _is_classvar return type(a_type) is typing._ClassVar AttributeError: module 'typing' has no attribute '_ClassVar'\r\n```",
"Any solution?",
"Could one of you post the result of `pip list` in the environment where that is failing? Or even better paste the result of `pip freeze`, alongside a few lines of code that reproduce the issue.\r\n\r\nThank you!",
"I solved this problem by removing `dataclasses*`",
"I have solved this problem by downgrading to Python version: 3.6\r\n\r\nThanks @attardi ",
"note that `fairseq` 0.10.1 requires `dataclasses` even for py>3.7 where it's built-in and HF Trainer breaks when `dataclasses` are installed for these versions. So if some project pulls in `fairseq` which will force the install of `dataclasses` HF Trainer will break. Probably need to ask `fairseq` to fix their dependencies.\r\n\r\nUntil then @thesby's solution is the easiest one.\r\n\r\n```\r\npip uninstall dataclasses -y\r\n```",
"I don't know why, but for me changing the version of Python worked (from 3.9.12 to 3.9.5.)",
"> note that `fairseq` 0.10.1 requires `dataclasses` even for py>3.7 where it's built-in and HF Trainer breaks when `dataclasses` are installed for these versions. So if some project pulls in `fairseq` which will force the install of `dataclasses` HF Trainer will break. Probably need to ask `fairseq` to fix their dependencies.\r\n> \r\n> Until then @thesby's solution is the easiest one.\r\n> \r\n> ```\r\n> pip uninstall dataclasses -y\r\n> ```\r\n\r\nIn my case `pip` command itself was broken so I had to do by hand:\r\n\r\n```\r\nrm -rf lib/dataclasses-0.6.dist-info\r\nrm lib/dataclasses.py\r\n```\r\n\r\nin the location where `dataclasses` has been installed!",
"Dependency to `dataclasses` in setup.py should be python 3.6 only:\r\n\r\n```diff\r\n-'dataclasses'\r\n+'dataclasses; python_version < \"3.7\"'\r\n```\r\n\r\nor use `if sys.version_info >= (3, 7):` check to dynamically configure `_deps` in https://github.com/huggingface/transformers/blob/main/setup.py#L98.",
"I met the same error without having dataclasses installed in my environment (python 3.9). I installed dataclasses and uninstalled it again - from this point on the error disappeared. "
] | 1,605 | 1,695 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7.0+cpu
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help @sgugger .... @LysandreJik ... mb?
## Information
Model I am using: Distilbert
The problem arises when using:
Just this:
``` python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
```
The tasks I am working on is:
* [ ] my own task or dataset:
I am using default Distilbert for my flask API
## To reproduce
Steps to reproduce the behavior:
That's a big part of the question. It works just fine on my local machine, but gives this error when run on my AWS server.
``` python
from flask import Flask, request, jsonify
from flask import Flask
from flask_restful import Api, Resource
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from transformers.pipelines import pipeline
tokenizer = AutoTokenizer.from_pretrained('./Dis_Save/')
model = AutoModelForQuestionAnswering.from_pretrained('./Dis_Save/')
nlp_qa = pipeline('question-answering', tokenizer=tokenizer,model=model)
app = Flask(__name__)
@app.route('/api/QandA', methods=['GET', 'POST'])
def QandA():
content = request.json
print(content['userMessages'])
X = nlp_qa(context=content['userMessages'], question=content['question'])
return(jsonify({"answer":X["answer"], "score":X["score"]}))
if __name__ == "__main__":
app.run(debug=True)
```
This is all the code that I have. Here is the full error that I get:
```
File "access/main.py", line 4, in <module>
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
File "/home/ubuntu/access/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/home/ubuntu/access/transformers/integrations.py", line 82, in <module>
from .trainer_callback import TrainerCallback # noqa: E402
File "/home/ubuntu/access/transformers/trainer_callback.py", line 27, in <module>
from .training_args import TrainingArguments
File "/home/ubuntu/access/transformers/training_args.py", line 36, in <module>
class TrainingArguments:
File "/home/ubuntu/access/dataclasses.py", line 958, in dataclass
return wrap(_cls)
File "/home/ubuntu/access/dataclasses.py", line 950, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen)
File "/home/ubuntu/access/dataclasses.py", line 800, in _process_class
cls_fields = [_get_field(cls, name, type)
File "/home/ubuntu/access/dataclasses.py", line 800, in <listcomp>
cls_fields = [_get_field(cls, name, type)
File "/home/ubuntu/access/dataclasses.py", line 659, in _get_field
if (_is_classvar(a_type, typing)
File "/home/ubuntu/access/dataclasses.py", line 550, in _is_classvar
return type(a_type) is typing._ClassVar
AttributeError: module 'typing' has no attribute '_ClassVar'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8637/comments | https://api.github.com/repos/huggingface/transformers/issues/8637/events | https://github.com/huggingface/transformers/pull/8637 | 746,090,073 | MDExOlB1bGxSZXF1ZXN0NTIzNTQ3NDgz | 8,637 | Add FastFormers to the example directory | {
"login": "ykim362",
"id": 22177353,
"node_id": "MDQ6VXNlcjIyMTc3MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22177353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykim362",
"html_url": "https://github.com/ykim362",
"followers_url": "https://api.github.com/users/ykim362/followers",
"following_url": "https://api.github.com/users/ykim362/following{/other_user}",
"gists_url": "https://api.github.com/users/ykim362/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykim362/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykim362/subscriptions",
"organizations_url": "https://api.github.com/users/ykim362/orgs",
"repos_url": "https://api.github.com/users/ykim362/repos",
"events_url": "https://api.github.com/users/ykim362/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykim362/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Thanks, @JetRunner ! I will clean up the code and fix the CI checks.",
"Thanks for the opinions, @LysandreJik!\r\n\r\n1. The onnx related files are necessary before it's merges into master branch of onnxruntime\r\n2. I agree with you regarding the binary files. I can put the binary files in a different place and put a script to download them, here.\r\n3. That sounds good. I will look at the dataset library to see how to utilize it.",
"That's great, thanks a lot @ykim362! I think we can do with the ONNX files while your PR is in waiting over at onnxruntime.",
"Thanks for the review @patrickvonplaten ! I can make changes most of them as you recommended.\r\n\r\nRegarding `attention_head_size`, this is necessary for head pruned transformers. I can think of two ways to keep backward compatibility.\r\n1. Create a new model class by subclassing `RoBERTa`. FastFormers supports BERT and RoBERTa, so that would work for both.\r\n2. In the current BERT model, add a default behavior (same as current logic) when `attention_head_size` doesn't exist. Then, it could be used only when `attention_head_size` parameter exists in the config file.\r\n\r\nOr, I am open to any suggestions. :)",
"What is the status with this project? Anything I can help with?",
"> What is the status with this project? Anything I can help with?\r\n\r\nI think it's mostly about this: https://github.com/huggingface/transformers/pull/8637/files?file-filters%5B%5D=.png&file-filters%5B%5D=.py&file-filters%5B%5D=.whl#r545051113 -> in a first PR we should not touch this logic IMO",
"I am sorry, but I have been fully loaded with some other stuffs. I won't be able to make a progress. I'd like to close this to avoid any confusion.",
"Thank you for the clarification @ykim362, I hope we may still collaborate in the future!",
"Thanks, @LysandreJik ! Likewise for the future collaboration! :)"
] | 1,605 | 1,632 | 1,632 | NONE | null | # What does this PR do?
Add FastFormers into the example directory.
https://github.com/huggingface/transformers/issues/8083
https://arxiv.org/abs/2010.13382
https://github.com/microsoft/fastformers
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@JetRunner @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8637/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8637",
"html_url": "https://github.com/huggingface/transformers/pull/8637",
"diff_url": "https://github.com/huggingface/transformers/pull/8637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8637.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8636/comments | https://api.github.com/repos/huggingface/transformers/issues/8636/events | https://github.com/huggingface/transformers/pull/8636 | 746,082,695 | MDExOlB1bGxSZXF1ZXN0NTIzNTQwNzI5 | 8,636 | Updated the Extractive Question Answering code snippets | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging this! I think it would be better to show how to use the attributes, so something like:\r\n```\r\noutputs = model(**inputs)\r\nanswer_start_scores = outputs.start_logits\r\nanswer_end_scores = outputs.end_logits\r\n```",
"Yes, you are right. @sgugger "
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
The Extractive Question Answering code snippets do not work anymore since the models return task-specific output objects. This commit fixes the PyTorch and TensorFlow examples by adding `.values()` to the model call.
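For context, a sketch of the attribute-style access suggested in the review comments above; the SQuAD-finetuned checkpoint and the toy question/context are only illustrative.
```python
# Illustrative only; mirrors the reviewer's suggestion of reading output attributes.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

inputs = tokenizer("Who wrote it?", "It was written by Jane.", return_tensors="pt")
outputs = model(**inputs, return_dict=True)
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```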
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8636",
"html_url": "https://github.com/huggingface/transformers/pull/8636",
"diff_url": "https://github.com/huggingface/transformers/pull/8636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8636.patch",
"merged_at": 1605743808000
} |
https://api.github.com/repos/huggingface/transformers/issues/8635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8635/comments | https://api.github.com/repos/huggingface/transformers/issues/8635/events | https://github.com/huggingface/transformers/pull/8635 | 746,079,222 | MDExOlB1bGxSZXF1ZXN0NTIzNTM3NTUy | 8,635 | Small formatting fix | {
"login": "timpal0l",
"id": 6556710,
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timpal0l",
"html_url": "https://github.com/timpal0l",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Just adding the bash formatting for the markdown in the run_mlm_wwm.py snippet
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
--> documentation: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8635/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8635",
"html_url": "https://github.com/huggingface/transformers/pull/8635",
"diff_url": "https://github.com/huggingface/transformers/pull/8635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8635.patch",
"merged_at": 1605742523000
} |
https://api.github.com/repos/huggingface/transformers/issues/8634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8634/comments | https://api.github.com/repos/huggingface/transformers/issues/8634/events | https://github.com/huggingface/transformers/pull/8634 | 746,054,962 | MDExOlB1bGxSZXF1ZXN0NTIzNTE1ODYx | 8,634 | Fix a bunch of slow tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good for me - thanks a lot for taking care of it! \r\n\r\nIt would probably all save us a lot of time to find once and for all a good solution for the `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` functions. I've spent way too much time trying to fix those for TFT5 as well and without finding a good solution. If you have a good idea of how to deal with this functionality/test in the future let me know @LysandreJik :-) \r\n\r\n@sgugger - not sure where the `MODIFY` statements are coming from...I think we can delete it along with `return_dict=True` now",
"@patrickvonplaten I tried but then test failed ;-)",
"> @patrickvonplaten I tried but then test failed ;-)\r\n\r\nHmm maybe @lhoestq has an idea",
"The test_modeling_dpr was added recently in #8203 \r\nMaybe @ratthachat knows why the `# MODIFY` are there ?\r\nWe should indeed remove them\r\n\r\nAlso I'm ok with adding token_type_ids since it's a common additional input to models based on bert",
"Hi guys, first of all I apologize if there's a problem at the `MODIFY` tag which is about `return_dict` argument.\r\n\r\nI translated `test_modeling_tf_dpr` from the Pytorch's one. \r\nIf I remember correctly, I found out that there's some tests in `test_modeling_tf_common.py` \r\nneed `return_dict=False` argument. \r\n(and when I looked at the tests, I judged that all tests just need to ensure the correct values of output,\r\nnot about `return_dict` argument.) \r\nThat's why I changed the config to `return_dict=False` as default, and left the `MODIFY` comments \r\njust to note that this part was modified from the Pytorch's one. \r\n(Again, I thought the main tests are on outputs' values)\r\n\r\nIt's my first time to write this kind of test file here, so I apologize again if I made something wrong!",
"> Hi guys, first of all I apologize if there's a problem at the `MODIFY` tag which is about `return_dict` argument.\r\n> \r\n> I translated `test_modeling_tf_dpr` from the Pytorch's one.\r\n> If I remember correctly, I found out that there's some tests in `test_modeling_tf_common.py`\r\n> need `return_dict=False` argument.\r\n> (and when I looked at the tests, I judged that all tests just need to ensure the correct values of output,\r\n> not about `return_dict` argument.)\r\n> That's why I changed the config to `return_dict=False` as default, and left the `MODIFY` comments\r\n> just to note that this part was modified from the Pytorch's one.\r\n> (Again, I thought the main tests are on outputs' values)\r\n> \r\n> It's my first time to write this kind of test file here, so I apologize again if I made something wrong!\r\n\r\nAbsolutely no problem! I should have been more careful when reviewing your PR -> don't worry at all :-) \r\nWe also have some difficulties with those `test_compile_tf_model` tests in TF, so I only understand it too well why you added those `return_dict=False/True` statements ;-) \r\n\r\nIf you run into similar problems with TF compilation/ TF graph tests when integrating TFRAG, you can just point it out to us. It's more important to have TFRag fully work in eager mode in the beginning and then we are more then happy to help you out if you encounter problems with graph mode / compilation",
"Thanks again for your kind help @patrickvonplaten !!\r\nYes, as you predicted, there are similar (many more) hacks I did to make TFRag works at the moment.\r\n\r\nWhen submitting PR I will make sure to list everything to you guys :)",
"Thanks for your reviews/comments/fixes!"
] | 1,605 | 1,605 | 1,605 | MEMBER | null | This PR fixes a bunch of slow tests.
DPR had a few issues, which this PR fixes. The `DPRReader` object was not using `token_type_ids`, and for some unknown reason the interaction with the underlying `TFBertMainLayer`, which requires them, crashed when using the oh-so-terrifying `tf.saved_model.save`.
I chose to add the `token_type_ids` to that model, as imo an additional feature is not a bad idea, even if it wasn't in the original model. I can imagine a bunch of reasons why users might want to have `token_type_ids` in that model even though it doesn't exist now.
All in all, after an hour of debugging I feel that this is the only way to have the slow `tf.saved_model.save` test passing on TFDPR. @patrickvonplaten @lhoestq please tell me what you think. | {
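For reference, here is a minimal sketch of the kind of export this is about. It is an illustrative sketch rather than the actual slow test, and it assumes the `TFDPRReader` / `DPRReaderTokenizer` class names and the `facebook/dpr-reader-single-nq-base` checkpoint:

```python
import tensorflow as tf
from transformers import DPRReaderTokenizer, TFDPRReader

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = TFDPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")

# One forward pass so every layer (including the underlying BERT main layer) is built
inputs = tokenizer(
    questions="What is love?",
    titles="Haddaway",
    texts="'What Is Love' is a song recorded by the artist Haddaway.",
    return_tensors="tf",
)
model(inputs)

# This is the call that used to crash while token_type_ids were missing
tf.saved_model.save(model, "/tmp/tf_dpr_reader_saved_model")
```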
"url": "https://api.github.com/repos/huggingface/transformers/issues/8634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8634",
"html_url": "https://github.com/huggingface/transformers/pull/8634",
"diff_url": "https://github.com/huggingface/transformers/pull/8634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8634.patch",
"merged_at": 1605800502000
} |
https://api.github.com/repos/huggingface/transformers/issues/8633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8633/comments | https://api.github.com/repos/huggingface/transformers/issues/8633/events | https://github.com/huggingface/transformers/pull/8633 | 745,989,522 | MDExOlB1bGxSZXF1ZXN0NTIzNDU5MTYz | 8,633 | Better filtering of the model outputs in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,605 | COLLABORATOR | null | # What does this PR do?
As discovered since merging #8530, sometimes (e.g. when using nvidia apex with the O2 optimization) the new model outputs lose their type and become regular dictionaries. This means we can't index into them with integers and some rework in the internals of `Trainer` has become necessary.
This PR:
- fixes the training by indexing into the outputs by string if they are a dict, and by int otherwise, when grabbing the loss
- fixes the evaluation by indexing into the outputs by string if they are a dict, and by int otherwise, when grabbing the loss
but it also takes advantage of the new dict outputs to better filter the outputs at inference. We had several issues recently when using models outputting past states (such as Reformer, XLNet, GPT-2) during evaluation in `Trainer`. This PR introduces a new API that looks at a possible key in the config of the model to get some attributes to ignore in the outputs during evaluation (those outputs are then discarded from the predictions returned by `Trainer.predict` and from the predictions passed along to metric computation in `Trainer.evaluate`). Since a user might have some use cases where they want to ignore more keys or output those keys, a new argument is added to both `Trainer.predict` and `Trainer.evaluate` to fully control the keys ignored in those dictionaries.
If the model outputs a tuple, this is all ignored.
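As an illustration of the first two points, here is a small runnable sketch of the dict-or-tuple loss handling. The GPT-2 checkpoint is only an example picked because it outputs past states, and the name of the new `Trainer` argument mentioned below is an assumption rather than a quote from the diff:

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("A short example sentence.", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])

# Works whether the outputs kept their ModelOutput type, were downgraded to a
# plain dict (as apex O2 does), or are a plain tuple.
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
```

On the evaluation side, the new argument (assuming it ends up being called something like `ignore_keys`) would then be passed as `trainer.evaluate(ignore_keys=[...])` or `trainer.predict(test_dataset, ignore_keys=[...])` to drop things like past states from the gathered predictions.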
Fixes #8523 among others
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8633/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8633",
"html_url": "https://github.com/huggingface/transformers/pull/8633",
"diff_url": "https://github.com/huggingface/transformers/pull/8633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8633.patch",
"merged_at": 1605800595000
} |
https://api.github.com/repos/huggingface/transformers/issues/8632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8632/comments | https://api.github.com/repos/huggingface/transformers/issues/8632/events | https://github.com/huggingface/transformers/issues/8632 | 745,983,629 | MDU6SXNzdWU3NDU5ODM2Mjk= | 8,632 | [s2s] distillation.py fails with apex | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | CONTRIBUTOR | null | Splitting off from https://github.com/huggingface/transformers/pull/8631,
`finetune.py` works with apex, but `distillation.py` doesn't (no idea whether it ever did):
```
$ python distillation.py --teacher facebook/bart-large-xsum --data_dir xsum --tokenizer_name facebook/bart-large-xsum --student_decoder_layers 6 --student_encoder_layers 12 --freeze_encoder --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --fp16 --val_check_interval 0.1 --n_val 1 --eval_beams 1 --length_penalty=0.5 --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 --sortish_sampler --num_train_epochs=6 --warmup_steps 1 --output_dir distilbart_xsum_12_6 --amp_backend=apex --n_train 1 --gpus 1
[...]
2020-11-18 12:25:48.713431: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
using module SummarizationDistiller
/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Checkpoint directory /mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/distilbart_xsum_12_6 exists and is not empty. With save_top_k=1, all files in this directory will be deleted when a checkpoint is saved!
warnings.warn(*args, **kwargs)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1]
Using APEX 16bit precision.
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "distillation.py", line 308, in <module>
distill_main(args)
File "distillation.py", line 299, in distill_main
return ft_main(args, model=model)
File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 409, in main
trainer: pl.Trainer = generic_train(
File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/lightning_base.py", line 398, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 444, in fit
results = self.accelerator_backend.train()
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 63, in train
results = self.train_or_test()
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 466, in train
self.run_sanity_check(self.get_model())
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 658, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 578, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 87, in validation_step
output = self.__validation_step(args)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 95, in __validation_step
output = self.trainer.model.validation_step(*args)
File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 182, in validation_step
return self._generative_step(batch)
File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 226, in _generative_step
loss_tensors = self._step(batch)
File "distillation.py", line 193, in _step
teacher_outputs = self.teacher(
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 1022, in forward
outputs = self.model(
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 905, in forward
decoder_outputs = self.decoder(
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 593, in forward
x, layer_self_attn, layer_past, layer_cross_attn = decoder_layer(
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 453, in forward
x, cross_attn_weights = self.encoder_attn(
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 695, in forward
k = self.k_proj(key)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 91, in forward
return F.linear(input, self.weight, self.bias)
File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/functional.py", line 1676, in linear
output = input.matmul(weight.t())
RuntimeError: expected scalar type Float but found Half
```
@patil-suraj, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8631/comments | https://api.github.com/repos/huggingface/transformers/issues/8631/events | https://github.com/huggingface/transformers/pull/8631 | 745,982,093 | MDExOlB1bGxSZXF1ZXN0NTIzNDUzMDA0 | 8,631 | [s2s] distillation apex breaks return_dict obj | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | This is a continuation of https://github.com/huggingface/transformers/pull/8612 for `distillation.py` - this PR is switching from `.property` to `["property"]`.
Unfortunately, the script itself doesn't seem to work under apex even after the fix - perhaps it never was.
But it's probably still OK to merge, since it no longer fails with the #8530-related symptoms and is in sync with `finetune.py` now.
I filed a separate issue about it: https://github.com/huggingface/transformers/issues/8632
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8631/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8631",
"html_url": "https://github.com/huggingface/transformers/pull/8631",
"diff_url": "https://github.com/huggingface/transformers/pull/8631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8631.patch",
"merged_at": 1605732689000
} |
https://api.github.com/repos/huggingface/transformers/issues/8630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8630/comments | https://api.github.com/repos/huggingface/transformers/issues/8630/events | https://github.com/huggingface/transformers/pull/8630 | 745,980,812 | MDExOlB1bGxSZXF1ZXN0NTIzNDUxOTY0 | 8,630 | Create README.md | {
"login": "moniquebm",
"id": 60358442,
"node_id": "MDQ6VXNlcjYwMzU4NDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moniquebm",
"html_url": "https://github.com/moniquebm",
"followers_url": "https://api.github.com/users/moniquebm/followers",
"following_url": "https://api.github.com/users/moniquebm/following{/other_user}",
"gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions",
"organizations_url": "https://api.github.com/users/moniquebm/orgs",
"repos_url": "https://api.github.com/users/moniquebm/repos",
"events_url": "https://api.github.com/users/moniquebm/events{/privacy}",
"received_events_url": "https://api.github.com/users/moniquebm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8630/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8630",
"html_url": "https://github.com/huggingface/transformers/pull/8630",
"diff_url": "https://github.com/huggingface/transformers/pull/8630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8630.patch",
"merged_at": 1606124891000
} |
https://api.github.com/repos/huggingface/transformers/issues/8629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8629/comments | https://api.github.com/repos/huggingface/transformers/issues/8629/events | https://github.com/huggingface/transformers/pull/8629 | 745,938,248 | MDExOlB1bGxSZXF1ZXN0NTIzNDE2MTcy | 8,629 | Fix mark-up (missing opening code-tag) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this has already been fixed by https://github.com/huggingface/transformers/pull/8635, which was opened a bit after yours ... sorry about that! Next time don't hesitate to tag @sgugger directly when doing documentation changes so he's aware of such PRs!"
] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | Small mark up fix | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8629/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8629",
"html_url": "https://github.com/huggingface/transformers/pull/8629",
"diff_url": "https://github.com/huggingface/transformers/pull/8629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8629.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8628/comments | https://api.github.com/repos/huggingface/transformers/issues/8628/events | https://github.com/huggingface/transformers/issues/8628 | 745,903,029 | MDU6SXNzdWU3NDU5MDMwMjk= | 8,628 | CUDA error when training roBERTa from scratch with data parallel. | {
"login": "balazik",
"id": 35840115,
"node_id": "MDQ6VXNlcjM1ODQwMTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/35840115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balazik",
"html_url": "https://github.com/balazik",
"followers_url": "https://api.github.com/users/balazik/followers",
"following_url": "https://api.github.com/users/balazik/following{/other_user}",
"gists_url": "https://api.github.com/users/balazik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balazik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balazik/subscriptions",
"organizations_url": "https://api.github.com/users/balazik/orgs",
"repos_url": "https://api.github.com/users/balazik/repos",
"events_url": "https://api.github.com/users/balazik/events{/privacy}",
"received_events_url": "https://api.github.com/users/balazik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The flag `whole_word_mask` cannot work wirth RoBERTa as it's only compatible with the BERT tokenizer and RoBERTa uses a different tokenizer. I'm surprised it was working on one GPU.\r\n\r\nIn any case, the `run_language_modeling.py` script is not maintained anymore, it has been replaced by new versions (`run_clm`, `run_mlm`, `run_plm`) that you can find in the `language-modeling` folder. Those new scripts are tested on a multi-GPU setup.",
"@sgugger Thanks for quick reply.\r\n\r\nAs I mentioned we tried other examples sa well.\r\n#### EsperBERTo with Trainer class: \r\nhttps://github.com/huggingface/blog/blob/master/how-to-train.md.\r\n\r\nPlease don't mind the whole_word_mask flag because we also tried the model with & without it. Every time as soon as we use multiple GPUs we get above mentioned error:\r\n```console\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n```\r\n\r\n#### Here we installed the newest transformers master with new version of language-modeling\\run_mlm.py\r\nRun parameters exact from README.md:\r\n```shell\r\npython run_mlm.py \\\r\n --model_name_or_path roberta-base \\\r\n --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --output_dir /tmp/test-mlm\r\n```\r\nAnd again we get the same error log (I shortened the error THCUNN part):\r\n```console\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nTraceback (most recent call last):\r\n File \"run_mlm.py\", line 392, in <module>\r\n main()\r\n File \"run_mlm.py\", line 362, in main\r\n trainer.train(model_path=model_path)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 747, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 1075, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 1099, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py\", line 162, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py\", line 174, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n res = gather_map(outputs)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py\", line 61, in gather_map\r\n return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n File \"<string>\", line 7, in __init__\r\n File \"/home/aime/.local/lib/python3.8/site-packages/transformers/file_utils.py\", line 1305, in __post_init__\r\n for element in iterator:\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py\", line 61, in <genexpr>\r\n return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py\", line 55, in gather_map\r\n return Gather.apply(target_device, dim, *outputs)\r\n 
File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/_functions.py\", line 71, in forward\r\n return comm.gather(inputs, ctx.dim, ctx.target_device)\r\n File \"/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/comm.py\", line 230, in gather\r\n return torch._C._gather(tensors, dim, destination)\r\nRuntimeError: CUDA error: device-side assert triggered\r\n 0%|▍ | 1/450 [00:09<1:07:51, 9.07s/it]\r\n```\r\n\r\n#### We even tried a custom simplified training code inspired by your docs.\r\nhttps://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow\r\n\r\n```python\r\nfrom transformers import LineByLineTextDataset\r\nfrom transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments, RobertaConfig\r\nfrom transformers import RobertaForMaskedLM\r\nfrom transformers import AdamW\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=r'./data/oscar.eo.txt',\r\n block_size=512,\r\n)\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=r'./EsperBERTo',\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=4,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n prediction_loss_only=True\r\n)\r\n\r\nprint(training_args.n_gpu)\r\n\r\nconfig = RobertaConfig(\r\n vocab_size=52_000,\r\n max_position_embeddings=514,\r\n num_attention_heads=12,\r\n num_hidden_layers=6,\r\n type_vocab_size=1,\r\n)\r\n\r\nargs = training_args\r\ndevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\r\n\r\nmodel = RobertaForMaskedLM(config=config)\r\nmodel = torch.nn.DataParallel(model)\r\nmodel = model.to(device)\r\n\r\nmodel.train()\r\ntrain_loader = DataLoader(dataset, batch_size=4, shuffle=True)\r\noptim = AdamW(model.parameters(), lr=5e-5)\r\n\r\nfor epoch in range(3):\r\n for batch in train_loader:\r\n optim.zero_grad()\r\n input_ids = batch['input_ids'].to(device)\r\n outputs = model(input_ids)\r\n loss = outputs[0]\r\n \r\n if args.n_gpu > 1:\r\n loss = loss.mean() # mean() to average on multi-gpu parallel training\r\n \r\n loss.backward()\r\n optim.step()\r\n\r\nmodel.eval()\r\n```\r\nBut we always get the same error.",
"I don't have the error on my side with two GPUs and the same command, so I think the bug comes from something in your enviromnent. The fact the simple training loop also fails encourages me in the same direction.\r\nIf you try to use just two GPUs with `CUDA_VISIBLE_DEVICES`, does the problem persist? Maybe one of your GPUs is in a bad state?",
"After some laborious debugging we figured out that the problem was indeed in our HW configuration.\r\n\r\nFor others if your machine has an AMD EPYC 7402 you will probably needed to **disable IOMMU** (AMD I/O Virtualization Technology) in BIOS. After disabling all examples work.\r\n\r\nI apologise for the inconvenience."
] | 1,605 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-5.4.0-52-generic-x86_64-Ubuntu 18.04.5 LTS
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0+cu110
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): roberta-base
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run transformers/examples/language-modeling/run_language_modeling.py as an MLM task with the WikiText-2 dataset (as mentioned in the official README.md):
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
export TRAIN_FILE=data/wiki.train.raw
export TEST_FILE=data/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm \
--whole_word_mask
```
When we enable all four GPUs (4x NVIDIA A100, with `export CUDA_VISIBLE_DEVICES=0,1,2,3`), training throws `RuntimeError: CUDA error: device-side assert triggered`.
If we keep only one GPU enabled (with `export CUDA_VISIBLE_DEVICES=0`), training works flawlessly.
What we tried:
- run run_language_modeling.py with WikiText-2 dataset
- run run_language_modeling.py with custom sentenced text dataset
- official example for EsperBERTo in jupyter
All of the above-mentioned attempts failed with the same error as soon as we enabled multiple GPUs!
Full error for run_language_modeling.py:
```console
./run_mlm_wiki.sh
11/18/2020 15:29:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False
11/18/2020 15:29:20 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='output', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Nov18_15-29-20_a4000-20an1', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='output', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_auto.py:845: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
warnings.warn(
Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/aime/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:1541: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
warnings.warn(
/home/aime/.local/lib/python3.8/site-packages/transformers/data/datasets/language_modeling.py:40: FutureWarning: This dataset will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
warnings.warn(
11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204416 acquired on data/cached_lm_RobertaTokenizerFast_510_wiki.train.raw.lock
11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204416 released on data/cached_lm_RobertaTokenizerFast_510_wiki.train.raw.lock
11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204848 acquired on data/cached_lm_RobertaTokenizerFast_510_wiki.test.raw.lock
11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204848 released on data/cached_lm_RobertaTokenizerFast_510_wiki.test.raw.lock
/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py:277: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. Setting `args.prediction_loss_only=True
warnings.warn(
0%| | 0/447 [00:00<?, ?it/s]/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
0%|▍ | 1/447 [00:07<58:40, 7.89s/it]Traceback (most recent call last):
File "run_language_modeling.py", line 351, in <module>
main()
File "run_language_modeling.py", line 315, in main
trainer.train(model_path=model_path)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1112, in training_step
loss = self.compute_loss(model, inputs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1136, in compute_loss
outputs = model(**inputs)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/aime/.local/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 894, in forward
outputs = self.roberta(
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 686, in forward
encoder_outputs = self.encoder(
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 421, in forward
layer_outputs = layer_module(
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 341, in forward
self_attention_outputs = self.attention(
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 273, in forward
self_outputs = self.self(
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 203, in forward
attention_probs = nn.Softmax(dim=-1)(attention_scores)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1198, in forward
return F.softmax(input, self.dim, _stacklevel=5)
File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1512, in softmax
ret = input.softmax(dim)
RuntimeError: CUDA error: device-side assert triggered
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
...
```
## Expected behavior
Training should start in parallel on all four Nvidia A100 GPUs without errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8628/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8627/comments | https://api.github.com/repos/huggingface/transformers/issues/8627/events | https://github.com/huggingface/transformers/pull/8627 | 745,872,999 | MDExOlB1bGxSZXF1ZXN0NTIzMzYyMTQ0 | 8,627 | Diverse beam search | {
"login": "ayushtiku5",
"id": 40797286,
"node_id": "MDQ6VXNlcjQwNzk3Mjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/40797286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushtiku5",
"html_url": "https://github.com/ayushtiku5",
"followers_url": "https://api.github.com/users/ayushtiku5/followers",
"following_url": "https://api.github.com/users/ayushtiku5/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushtiku5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushtiku5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushtiku5/subscriptions",
"organizations_url": "https://api.github.com/users/ayushtiku5/orgs",
"repos_url": "https://api.github.com/users/ayushtiku5/repos",
"events_url": "https://api.github.com/users/ayushtiku5/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushtiku5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten I am implementing diverse beam search. Please do suggest code design for this. 😃 ",
"> @patrickvonplaten I am implementing diverse beam search. Please do suggest code design for this.\r\n\r\nAwesome that you work on this!\r\n\r\nI think this looks like the right approach! However, I'd also recommend creating a new beam_scorer to be sure to not break backwards compatilibily. We can see at a later stage if we can try to merge some code together with the current beam search code :-) \r\n\r\nAlso, can you add a link to the paper in this PR ? this would be great :-) ",
"@patrickvonplaten please review. I have made the required changes :)",
"@patrickvonplaten just a gentle reminder to review the PR. Thanks!",
"> @patrickvonplaten just a gentle reminder to review the PR. Thanks!\r\n\r\nSorry, I'll review the PR this week! Also wondering how this PR relates to this one: #8840",
"@patrickvonplaten I think #8840 ensures that first token of every predicted sequence is different. This PR ensures diversity between group of beams at every time step of sequence generation. I think this will be more generic. Also we can change extent of diversity using `diversity_penalty` parameter.",
"@patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`. \r\n\r\nI was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates:\r\n`next_token_scores, next_tokens = torch.topk(\r\n next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True\r\n )`\r\n\r\nBut for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think?",
"> @patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`.\r\n> \r\n> I was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates:\r\n> `next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True )`\r\n> \r\n> But for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think?\r\n\r\nHey @ayushtiku5,\r\n\r\nThat's a good point! I do think though that we should leave the `beam_scores` as there are in the end as well. My main arguments are:\r\n\r\n1) It helps to have more diversity in the output. If we only use the diversity penalty for choosing the next beam_token, but not add it to the `_beam_scores`, the beam_scores will be very high for beams of similar tokens, which I think is what we want to prevent here. I think `beam_scores` should be penalized for every token in the corresponding `beam_idx` that is also present in another `beam_idx` of the same `beam_group`. It's also more consistent and logical IMO: We should update the `beam_score` with the `probability` that the current beam_id was selected. \r\n\r\n2) It would be very ugly to implement and I'd like to avoid it...\r\n\r\nIs that fine for you?",
"> > @patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`.\r\n> > I was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates:\r\n> > `next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True )`\r\n> > But for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think?\r\n> \r\n> Hey @ayushtiku5,\r\n> \r\n> That's a good point! I do think though that we should leave the `beam_scores` as there are in the end as well. My main arguments are:\r\n> \r\n> 1. It helps to have more diversity in the output. If we only use the diversity penalty for choosing the next beam_token, but not add it to the `_beam_scores`, the beam_scores will be very high for beams of similar tokens, which I think is what we want to prevent here. I think `beam_scores` should be penalized for every token in the corresponding `beam_idx` that is also present in another `beam_idx` of the same `beam_group`. It's also more consistent and logical IMO: We should update the `beam_score` with the `probability` that the current beam_id was selected.\r\n> 2. It would be very ugly to implement and I'd like to avoid it...\r\n> \r\n> Is that fine for you?\r\n\r\n@patrickvonplaten yeah sure, I am fine with this.",
"@ayushtiku5 - hope it's ok that I fiddled quite a bit with your PR. The functionality is kept 1:1 the same (I added an integration test in the very beginning to be sure of that), but the design is slightly different with the main goal to keep the method as general as possible. \r\n\r\nIMO, the PR is now good to merge :-) Could you take a final look at whether the new names and design is ok for you? \r\n\r\nAfterward, we can think about a nice code snippet / use case to advertise the big new feature of `transformers` :-) \r\nAwesome job!",
"@ayushtiku5 do you think the following code snippet could be a nice use case of diverse beam search?\r\n\r\n```python\r\nfrom transformers import pipeline\r\nsummarizer = pipeline(\"summarization\", model=\"sshleifer/distilbart-xsum-12-6\")\r\n\r\nARTICLE = \"\"\"Part of the Broad Road was closed to traffic on Sunday at about 18:00 GMT.\r\nThe three adults and three children have been taken to Altnagelvin Hospital with non\r\nlife-threatening injuries. The Fire Service, Northern Ireland Ambulance Service\r\nand police attended the crash. The Broad Road has since been reopened.\"\"\"\r\n\r\n# normal beam search\r\nsummarizer(ARTICLE, num_return_sequences=2)\r\n# => [' Five people, including three children, have been taken to hospital following a two-vehicle crash in Londonderry.',\r\n# ' Five people, including three children, have been taken to hospital after a two-vehicle crash in Londonderry.']\r\n\r\n# diverse beam search\r\nsummarizer(ARTICLE, num_return_sequences=2, num_beam_groups=6, diversity_penalty=10.0)\r\n# => ['Three men are in hospital after a car and a lorry crashed in Londonderry.',\r\n# 'Six pedestrians were injured when a car and two vehicles crashed in County Antrim.']\r\n```",
"> @ayushtiku5 - hope it's ok that I fiddled quite a bit with your PR. The functionality is kept 1:1 the same (I added an integration test in the very beginning to be sure of that), but the design is slightly different with the main goal to keep the method as general as possible.\r\n> \r\n> IMO, the PR is now good to merge :-) Could you take a final look at whether the new names and design is ok for you?\r\n> \r\n> Afterward, we can think about a nice code snippet / use case to advertise the big new feature of `transformers` :-)\r\n> Awesome job!\r\n\r\nHey @patrickvonplaten ,\r\n\r\nJust one thing. In the `BeamScorer`'s `finalize()` method, we are directly selecting top `num_beams` beams from the `final_beam_scores`. This assumes that the beam scores in `final_beam_scores` will be sorted in decreasing order for a particular `batch_idx`. However, this will not be the case for our diverse beam search. `final_beam_scores` will be sorted for the beams inside a particular group, but not necessarily for all the beams for a particular `batch_idx`. So, I think we will have to sort the `final_beam_scores` for every `batch_idx`. I did this previously [here](https://github.com/huggingface/transformers/pull/8627/commits/14d5b6ca6e527eac2cdb9e9400d4c00f6d7add01#diff-098eb3834a12a0788445325f6795950fc5d59ec8fc8d34fef115ae5e379e18f2R292)\r\n\r\nThe rest looks good to me. Thanks for refactoring!\r\n\r\n[UPDATE]: added this in [this](https://github.com/huggingface/transformers/pull/8627/commits/c99eb5a8dc57a7b0d33a8ac06d8c6a32a7812ad4) commit",
"Hey @ayushtiku5,\r\n\r\nsorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line: https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330 you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not. \r\n\r\nWhat do you think? IMO, we can revert the last commit.",
"> Hey @ayushtiku5,\r\n> \r\n> sorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line:\r\n> \r\n> https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330\r\n> \r\n> you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not.\r\n> What do you think? IMO, we can revert the last commit.\r\n\r\nYeah sorry! I completely missed it. Reverted the commit.",
"> > Hey @ayushtiku5,\r\n> > sorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line:\r\n> > https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330\r\n> > \r\n> > you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not.\r\n> > What do you think? IMO, we can revert the last commit.\r\n> \r\n> Yeah sorry! I completely missed it. Reverted the commit.\r\n\r\nNo worries :-) The comment wasn't the best either - I updated it. Think it's a bit clearer now.",
"@ayushtiku5 - super sorry, we messed up the previous branch yesterday. I opened a new PR with the same authorship -> so it should be good to merge :-) "
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Implementation of diverse beam search decoding as described in the paper: https://arxiv.org/pdf/1610.02424.pdf
diversity function reference taken from: https://github.com/ashwinkalyan/dbs
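As a rough illustration of what the diversity term does (names below are illustrative, not the exact ones used in this PR): for each beam group after the first, the candidate token scores are reduced in proportion to how often the earlier groups already picked those tokens at the current decoding step. A minimal sketch, assuming a per-batch-entry view:

```python
import torch

def hamming_diversity_penalty(group_scores, prev_group_tokens, diversity_penalty, vocab_size):
    """Sketch of the Hamming diversity term for one batch entry.

    group_scores: (num_sub_beams, vocab_size) log-probabilities for the current group.
    prev_group_tokens: 1D tensor of token ids already chosen by earlier groups
    at this decoding step.
    """
    # How often each vocabulary id was picked by the earlier groups.
    frequency = torch.bincount(prev_group_tokens, minlength=vocab_size).to(group_scores.device)
    # Penalize tokens proportionally to that frequency.
    return group_scores - diversity_penalty * frequency


# Illustrative usage with a toy vocabulary of 10 tokens.
scores = torch.log_softmax(torch.randn(2, 10), dim=-1)
prev = torch.tensor([3, 3, 7])
penalized = hamming_diversity_penalty(scores, prev, diversity_penalty=1.5, vocab_size=10)
```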
## Implementation details
Consider a T5 summarization task.
`article="Justin Timberlake and Jessica Biel, welcome to parenthood.
The celebrity couple announced the arrival of their son, Silas Randall Timberlake, in statements to People.
"Silas was the middle name of Timberlake's maternal grandfather Bill Bomar, who died in 2012, while Randall is the musician's own middle name, as well as his father's first," People reports.
The couple announced the pregnancy in January, with an Instagram post. It is the first baby for both."`
Generation using normal beam search can be done as:
`model.generate(
input_ids=input_ids,
num_beams=2,
num_return_sequences=2
)`
This generates:
`['the couple announced the pregnancy in January. it is the first baby for both.', 'the couple announced the pregnancy in January. it is the first baby for both of them ']`
Generation using diverse beam search can be done as:
`model.generate(
input_ids=input_ids,
num_beams=2,
num_return_sequences=2,
beam_groups=2,
diversity_penalty=1.5
)`
This generates:
`['the couple announced the pregnancy in January. it is the first baby for both.', 'Justin Timberlake and Jessica Biel have welcomed their son, Silas Randall ']`
This means that the 2 beams will be divided into 2 groups of 1 beam each, ensuring diversity between the groups. NOTE: If `beam_groups=1`, the result is the same as normal beam search, since all beams belong to the same group. A higher `diversity_penalty` enforces more diversity between the groups of beams. When generating with diverse beam search, we need to ensure that `num_beams>=beam_groups` and that `num_beams` is divisible by `beam_groups`.
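For completeness, a self-contained sketch of the workflow above. The `beam_groups` and `diversity_penalty` arguments are the ones proposed in this PR, so this only runs on a branch that includes the change; the model name and decoding settings are just illustrative:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

article = (
    "Justin Timberlake and Jessica Biel, welcome to parenthood. "
    "The celebrity couple announced the arrival of their son, Silas Randall Timberlake, "
    "in statements to People. The couple announced the pregnancy in January, "
    "with an Instagram post. It is the first baby for both."
)
input_ids = tokenizer("summarize: " + article, return_tensors="pt").input_ids

# Plain beam search: the returned sequences tend to be near-duplicates.
plain = model.generate(input_ids=input_ids, num_beams=2, num_return_sequences=2)

# Diverse beam search: 2 beams split into 2 groups of 1 beam each,
# with a penalty that discourages groups from repeating each other's tokens.
diverse = model.generate(
    input_ids=input_ids,
    num_beams=2,
    num_return_sequences=2,
    beam_groups=2,
    diversity_penalty=1.5,
)

print(tokenizer.batch_decode(plain, skip_special_tokens=True))
print(tokenizer.batch_decode(diverse, skip_special_tokens=True))
```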
## Who can review?
@patrickvonplaten, @TevenLeScao
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8627/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8627/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8627",
"html_url": "https://github.com/huggingface/transformers/pull/8627",
"diff_url": "https://github.com/huggingface/transformers/pull/8627.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8627.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8626/comments | https://api.github.com/repos/huggingface/transformers/issues/8626/events | https://github.com/huggingface/transformers/issues/8626 | 745,861,455 | MDU6SXNzdWU3NDU4NjE0NTU= | 8,626 | run_pl_glue.py (almost equivalent performance with non-english bert models) | {
"login": "timpal0l",
"id": 6556710,
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timpal0l",
"html_url": "https://github.com/timpal0l",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I can't say with certainty, but I actually think it's entirely feasible that this is legitimate result. Here's a [recent ACL paper](https://www.aclweb.org/anthology/2020.acl-main.421/) showing that a monolingual model can be fine-tuned on another language with competitive performance. The authors do learn a new embedding layer for the new target language in an intermediate pre-training step, so it's not entirely the same, but I wouldn't find this result too surprising. It's also likely that these non-English models had exposure to some English that wasn't scrubbed from their pre-training corpora, in which case the model might already have decent embeddings for tokens sourced from English text just from pre-training.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.0.0.dev0`
- Platform: `Ubuntu 20.04.1 LTS`
- Python version: `3.8.5`
- PyTorch version (GPU?): `1.7.0` (GPU - yes)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes (GeForce GTX Titan X)
- Using distributed or parallel set-up in script?: distributed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> perhaps @sgugger
## Information
I tested running the GLUE benchmark with a few non-English models: Arabic, Swedish, and Chinese.
Models I am using: `asafaya/bert-base-arabic`, `KB/bert-base-swedish-cased`, `bert-base-chinese`.
I receive almost identical results to those in [Run PyTorch version](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version); the scores differ by only a few percentage points per task, and some are even slightly better than with the default `bert-base-cased`.
I am not sure this is a bug, but it seems a bit strange that I get very similar results using embeddings from languages that are really far from English, such as Arabic and Chinese.
The problem arises when using:
* [X] the official example scripts: [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: GLUE, (sts-b in this example)
* [ ] my own task or dataset: (give details below)
## To reproduce
I get almost identical results when running a non-English BERT on the GLUE benchmark, in this case on `stsb` using `bert-base-chinese`, `asafaya/bert-base-arabic`, and `KB/bert-base-swedish-cased`.
```
export TASK_NAME=stsb
python run_glue.py \
--model_name_or_path bert-base-chinese \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
Chinese:
```
11/18/2020 17:10:42 - INFO - __main__ - ***** Eval results stsb *****
11/18/2020 17:10:42 - INFO - __main__ - eval_loss = 0.8410218954086304
11/18/2020 17:10:42 - INFO - __main__ - eval_pearson = 0.7922208042884891
11/18/2020 17:10:42 - INFO - __main__ - eval_spearmanr = 0.7956508384154777
11/18/2020 17:10:42 - INFO - __main__ - eval_combined_score = 0.7939358213519834
11/18/2020 17:10:42 - INFO - __main__ - epoch = 3.0
```
Arabic:
```
11/18/2020 17:14:04 - INFO - __main__ - ***** Eval results stsb *****
11/18/2020 17:14:04 - INFO - __main__ - eval_loss = 0.8082903027534485
11/18/2020 17:14:04 - INFO - __main__ - eval_pearson = 0.8357733212850804
11/18/2020 17:14:04 - INFO - __main__ - eval_spearmanr = 0.8386964712863125
11/18/2020 17:14:04 - INFO - __main__ - eval_combined_score = 0.8372348962856965
11/18/2020 17:14:04 - INFO - __main__ - epoch = 3.0
```
Swedish:
```
11/18/2020 17:32:26 - INFO - __main__ - ***** Eval results stsb *****
11/18/2020 17:32:26 - INFO - __main__ - eval_loss = 0.7071832418441772
11/18/2020 17:32:26 - INFO - __main__ - eval_pearson = 0.8379047445076137
11/18/2020 17:32:26 - INFO - __main__ - eval_spearmanr = 0.8350383734219187
11/18/2020 17:32:26 - INFO - __main__ - eval_combined_score = 0.8364715589647662
11/18/2020 17:32:26 - INFO - __main__ - epoch = 3.0
```
Is this expected behaviour? In other words, can the re-adaptation of the embedding matrices work with non-English vocabs such as Chinese and Arabic because they perhaps contain some Latin characters?
With the English model `bert-base-cased` we get a Pearson score of `83.95`, and with the Arabic model `asafaya/bert-base-arabic` a Pearson score of `83.57`.
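As a quick sanity check (not something the script does, just a sketch), one could look at how these non-English tokenizers actually segment English GLUE sentences, e.g. the share of `[UNK]` tokens and the subword splits. The example sentence is made up:

```python
from transformers import AutoTokenizer

# Example STS-B style sentence; any English sentence from the task would do.
sentence = "A man is playing a guitar on stage."

for name in ["bert-base-cased", "bert-base-chinese", "asafaya/bert-base-arabic", "KB/bert-base-swedish-cased"]:
    tok = AutoTokenizer.from_pretrained(name)
    tokens = tok.tokenize(sentence)
    # Fraction of tokens the model cannot represent at all.
    unk_ratio = tokens.count(tok.unk_token) / max(len(tokens), 1)
    print(f"{name}: {len(tokens)} tokens, unk_ratio={unk_ratio:.2f}")
    print(tokens)
```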
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Thanks!
## Expected behavior
Not sure..
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8626/timeline | completed | null | null |