url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11844/comments | https://api.github.com/repos/huggingface/transformers/issues/11844/events | https://github.com/huggingface/transformers/pull/11844 | 899,616,309 | MDExOlB1bGxSZXF1ZXN0NjUxMjkzNTg4 | 11,844 | Fix flos single node | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure why the CI is failing, is it related to the PR? Doesn't look so to me but I may be missing something",
"No, the CI is failing all the time those days, don't worry about it."
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | This PR fixes a bug typo where in single-node settings, flos in the Trainer would stay constant, and also updates the flo count in the trainer state at every log occasion (instead of every model-saving occasion) so that users that wish to use a flo-logging callback can access it more frequently. I feel like the first bug should have been caught by a test (at the moment few other people use trainer flos, since it's mostly for large-scale training and researchers that need or want to report flos, and neither group uses the Trainer a lot, so it's mostly the HF bigscience efforts), and I should make one.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11844",
"html_url": "https://github.com/huggingface/transformers/pull/11844",
"diff_url": "https://github.com/huggingface/transformers/pull/11844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11844.patch",
"merged_at": 1621880152000
} |
https://api.github.com/repos/huggingface/transformers/issues/11843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11843/comments | https://api.github.com/repos/huggingface/transformers/issues/11843/events | https://github.com/huggingface/transformers/issues/11843 | 899,585,015 | MDU6SXNzdWU4OTk1ODUwMTU= | 11,843 | Issues loading finetuned BERT | {
"login": "lorinaandr",
"id": 48472861,
"node_id": "MDQ6VXNlcjQ4NDcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/48472861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorinaandr",
"html_url": "https://github.com/lorinaandr",
"followers_url": "https://api.github.com/users/lorinaandr/followers",
"following_url": "https://api.github.com/users/lorinaandr/following{/other_user}",
"gists_url": "https://api.github.com/users/lorinaandr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorinaandr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorinaandr/subscriptions",
"organizations_url": "https://api.github.com/users/lorinaandr/orgs",
"repos_url": "https://api.github.com/users/lorinaandr/repos",
"events_url": "https://api.github.com/users/lorinaandr/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorinaandr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's advised to save models using the [`.save_pretrained()` method](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained). You can then read it back in using the `.from_pretrained()` method. \r\n\r\nNote that you should specify the name of a directory, not the name of a PyTorch checkpoint.",
"I already tried to use` .save_pretrained()` but I have defined a class name BertClassifier where i defined an LSTM layer and a linear layer to be added to preatrained BertModel.from_pretrained('bert-base-multilingual-uncased') and when i tried to use save_pretrained() i received an error saying that save_pretrained is not defined in BertClassifier class. \r\n**Update** i have updated the question with how my model is computed. ",
"Oh ok, so your `BertClassifier` is just an nn.Module? I now see why your model reloading is not working. As you can see here:\r\n\r\n```\r\nRuntimeError: Error(s) in loading state_dict for BertModel:\r\n\r\nMissing key(s) in state_dict: \"embeddings.position_ids\", \"embeddings.word_embeddings.weight\", \"embeddings.position_embeddings.weight\", \"embeddings.token_type_embeddings.weight\", \"embeddings.LayerNorm.weight\", \"embeddings.LayerNorm.bias\", \"encoder.layer.0.attention.self.query.weight\", \"encoder.layer.0.attention.self.query.bias\", \"encoder.layer.0.attention.self.key.weight\", \"encoder.layer.0.attention.self.key.bias\", \"encoder.layer.0.attention.self.value.weight\", \"encoder.layer.0.attention.self.value.bias\", \"encoder.layer.0.attention.output.dense.weight\", \"encoder.layer.0.attention.output.dense.bias\", \"encoder.layer.0.attention.output.LayerNorm.weight\", \"encoder.layer.0.attention.output.LayerNorm.bias\", \"encoder.layer.0.intermediate.dense.weight\", \"encoder.layer.0.intermediate.dense.bias\", \"encoder.layer.0.output.dense.weight\", \"encoder.layer.0.output.dense.bias\", \"encoder.layer.0.output.LayerNorm.weight\", \"encoder.layer.0.output.LayerNorm.bias\", \"encoder.layer.1.attention.self.query.weight\", \"encoder.layer.1.attention.self.query.bias\", \"encoder.layer.1.attention.self.key.weight\", \"encoder.layer.1.attention.self.key.bias\", \"encoder.layer.1.attention.self.value.weight\", \"encoder.layer.1.attention.self.value.bias\", \"encoder.layer.1.attention.output.dense.weight\", \"encoder.layer.1.attention.output.dense.bias\", \"encoder.layer.1.attention.output.LayerNorm.weight\", \"encoder.layer.1.attention.output.LayerNorm.bias\", \"encoder.layer.1.intermediate.dense.weight\", \"encoder.layer.1.intermediate.dense.bias\", \"encoder.layer.1.output.dense.weight\", \"encoder.layer.1.output.dense.bias\", \"encoder.layer.1.output.LayerNorm.weight\", \"encoder.layer.1.output.LayerNorm.bias\", \"encoder.layer.2.attention.self.query.weight\", \"encoder.layer.2.attention.self.query.bias\", \"encoder.layer.2.attention.self.key.weight\", \"encoder.layer.2.attention.self.key.bias\", \"encoder.layer.2.attention.self.value.weight\", \"encoder.layer.2.attention.self.value.bias\", \"encoder.layer.2.attention.output.dense.weight\", \"encoder.layer.2.attention.output.dense.bias\", \"encoder.layer.2.attention.output.LayerNorm.weight\", \"encoder.layer.2.attention.output.LayerNorm.bias\", \"encoder.layer.2.intermediate.dense.weight\", \"encoder.layer.2.intermediate.dense.bias\", \"encoder.layer.2.output.dense.weight\", \"encoder.layer.2.output.dense.bias\", \"encoder.layer.2.output.LayerNorm.weight\", \"encoder.layer.2.output.LayerNorm.bias\", \"encoder.layer.3.attention.self.query.weight\", \"encoder.layer.3.attention.self.query.bias\", \"encoder.layer.3.attention.self.key.weight\", \"encoder.layer.3.attention.self.key.bias\", \"encoder.layer.3.attention.self.value.weight\", \"encoder.layer.3.attention.self.value.bias\", \"encoder.layer.3.attention.output.dense.weight\", \"encoder.layer.3.attention.output.dense.bias\", \"encoder.layer.3.attention.output.LayerNorm.weight\", \"encoder.layer.3.attention.output.LayerNorm.bias\", \"encoder.layer.3.intermediate.dense.weight\", \"encoder.layer.3.intermediate.dense.bias\", \"encoder.layer.3.output.dense.weight\", \"encoder.layer.3.output.dense.bias\", \"encoder.layer.3.output.LayerNorm.weight\", \"encoder.layer.3.output.LayerNorm.bias\", \"encoder.layer.4.attention.self.query.weight\", \"encoder.layer.4.attention.self.query.bias\", \"encoder.layer.4.attention.self.key.weight\", \"encoder.layer.4.attention.self.key.bias\", \"encoder.layer.4.attention.self.value.weight\", \"encoder.layer.4.attention.self.value.bias\", \"encoder.layer.4.attention.output.dense.weight\", \"encoder.layer.4.attention.output.dense.bias\", \"encoder.layer.4.attention.output.LayerNorm.weight\", \"encoder.layer.4.attention.output.LayerNorm.bias\", \"encoder.layer.4.intermediate.dense.weight\", \"encoder.layer.4.intermediate.dense.bias\", \"encoder.layer.4.output.dense.weight\", \"encoder.layer.4.output.dense.bias\", \"encoder.layer.4.output.LayerNorm.weight\", \"encoder.layer.4.output.LayerNorm.bias\", \"encoder.layer.5.attention.self.query.weight\", \"encoder.layer.5.attention.self.query.bias\", \"encoder.layer.5.attention.self.key.weight\", \"encoder.layer.5.attention.self.key.bias\", \"encoder.layer.5.attention.self.value.weight\", \"encoder.layer.5.attention.self.value.bias\", \"encoder.layer.5.attention.output.dense.weight\", \"encoder.layer.5.attention.output.dense.bias\", \"encoder.layer.5.attention.output.LayerNorm.weight\", \"encoder.layer.5.attention.output.LayerNorm.bias\", \"encoder.layer.5.intermediate.dense.weight\", \"encoder.layer.5.intermediate.dense.bias\", \"encoder.layer.5.output.dense.weight\", \"encoder.layer.5.output.dense.bias\", \"encoder.layer.5.output.LayerNorm.weight\", \"encoder.layer.5.output.LayerNorm.bias\", \"encoder.layer.6.attention.self.query.weight\", \"encoder.layer.6.attention.self.query.bias\", \"encoder.layer.6.attention.self.key.weight\", \"encoder.layer.6.attention.self.key.bias\", \"encoder.layer.6.attention.self.value.weight\", \"encoder.layer.6.attention.self.value.bias\", \"encoder.layer.6.attention.output.dense.weight\", \"encoder.layer.6.attention.output.dense.bias\", \"encoder.layer.6.attention.output.LayerNorm.weight\", \"encoder.layer.6.attention.output.LayerNorm.bias\", \"encoder.layer.6.intermediate.dense.weight\", \"encoder.layer.6.intermediate.dense.bias\", \"encoder.layer.6.output.dense.weight\", \"encoder.layer.6.output.dense.bias\", \"encoder.layer.6.output.LayerNorm.weight\", \"encoder.layer.6.output.LayerNorm.bias\", \"encoder.layer.7.attention.self.query.weight\", \"encoder.layer.7.attention.self.query.bias\", \"encoder.layer.7.attention.self.key.weight\", \"encoder.layer.7.attention.self.key.bias\", \"encoder.layer.7.attention.self.value.weight\", \"encoder.layer.7.attention.self.value.bias\", \"encoder.layer.7.attention.output.dense.weight\", \"encoder.layer.7.attention.output.dense.bias\", \"encoder.layer.7.attention.output.LayerNorm.weight\", \"encoder.layer.7.attention.output.LayerNorm.bias\", \"encoder.layer.7.intermediate.dense.weight\", \"encoder.layer.7.intermediate.dense.bias\", \"encoder.layer.7.output.dense.weight\", \"encoder.layer.7.output.dense.bias\", \"encoder.layer.7.output.LayerNorm.weight\", \"encoder.layer.7.output.LayerNorm.bias\", \"encoder.layer.8.attention.self.query.weight\", \"encoder.layer.8.attention.self.query.bias\", \"encoder.layer.8.attention.self.key.weight\", \"encoder.layer.8.attention.self.key.bias\", \"encoder.layer.8.attention.self.value.weight\", \"encoder.layer.8.attention.self.value.bias\", \"encoder.layer.8.attention.output.dense.weight\", \"encoder.layer.8.attention.output.dense.bias\", \"encoder.layer.8.attention.output.LayerNorm.weight\", \"encoder.layer.8.attention.output.LayerNorm.bias\", \"encoder.layer.8.intermediate.dense.weight\", \"encoder.layer.8.intermediate.dense.bias\", \"encoder.layer.8.output.dense.weight\", \"encoder.layer.8.output.dense.bias\", \"encoder.layer.8.output.LayerNorm.weight\", \"encoder.layer.8.output.LayerNorm.bias\", \"encoder.layer.9.attention.self.query.weight\", \"encoder.layer.9.attention.self.query.bias\", \"encoder.layer.9.attention.self.key.weight\", \"encoder.layer.9.attention.self.key.bias\", \"encoder.layer.9.attention.self.value.weight\", \"encoder.layer.9.attention.self.value.bias\", \"encoder.layer.9.attention.output.dense.weight\", \"encoder.layer.9.attention.output.dense.bias\", \"encoder.layer.9.attention.output.LayerNorm.weight\", \"encoder.layer.9.attention.output.LayerNorm.bias\", \"encoder.layer.9.intermediate.dense.weight\", \"encoder.layer.9.intermediate.dense.bias\", \"encoder.layer.9.output.dense.weight\", \"encoder.layer.9.output.dense.bias\", \"encoder.layer.9.output.LayerNorm.weight\", \"encoder.layer.9.output.LayerNorm.bias\", \"encoder.layer.10.attention.self.query.weight\", \"encoder.layer.10.attention.self.query.bias\", \"encoder.layer.10.attention.self.key.weight\", \"encoder.layer.10.attention.self.key.bias\", \"encoder.layer.10.attention.self.value.weight\", \"encoder.layer.10.attention.self.value.bias\", \"encoder.layer.10.attention.output.dense.weight\", \"encoder.layer.10.attention.output.dense.bias\", \"encoder.layer.10.attention.output.LayerNorm.weight\", \"encoder.layer.10.attention.output.LayerNorm.bias\", \"encoder.layer.10.intermediate.dense.weight\", \"encoder.layer.10.intermediate.dense.bias\", \"encoder.layer.10.output.dense.weight\", \"encoder.layer.10.output.dense.bias\", \"encoder.layer.10.output.LayerNorm.weight\", \"encoder.layer.10.output.LayerNorm.bias\", \"encoder.layer.11.attention.self.query.weight\", \"encoder.layer.11.attention.self.query.bias\", \"encoder.layer.11.attention.self.key.weight\", \"encoder.layer.11.attention.self.key.bias\", \"encoder.layer.11.attention.self.value.weight\", \"encoder.layer.11.attention.self.value.bias\", \"encoder.layer.11.attention.output.dense.weight\", \"encoder.layer.11.attention.output.dense.bias\", \"encoder.layer.11.attention.output.LayerNorm.weight\", \"encoder.layer.11.attention.output.LayerNorm.bias\", \"encoder.layer.11.intermediate.dense.weight\", \"encoder.layer.11.intermediate.dense.bias\", \"encoder.layer.11.output.dense.weight\", \"encoder.layer.11.output.dense.bias\", \"encoder.layer.11.output.LayerNorm.weight\", \"encoder.layer.11.output.LayerNorm.bias\", \"pooler.dense.weight\", \"pooler.dense.bias\". \r\n\tUnexpected key(s) in state_dict: \"bert.embeddings.position_ids\", \"bert.embeddings.word_embeddings.weight\", \"bert.embeddings.position_embeddings.weight\", \"bert.embeddings.token_type_embeddings.weight\", \"bert.embeddings.LayerNorm.weight\", \"bert.embeddings.LayerNorm.bias\", \"bert.encoder.layer.0.attention.self.query.weight\", \"bert.encoder.layer.0.attention.self.query.bias\", \"bert.encoder.layer.0.attention.self.key.weight\", \"bert.encoder.layer.0.attention.self.key.bias\", \"bert.encoder.layer.0.attention.self.value.weight\", \"bert.encoder.layer.0.attention.self.value.bias\", \"bert.encoder.layer.0.attention.output.dense.weight\", \"bert.encoder.layer.0.attention.output.dense.bias\", \"bert.encoder.layer.0.attention.output.LayerNorm.weight\", \"bert.encoder.layer.0.attention.output.LayerNorm.bias\", \"bert.encoder.layer.0.intermediate.dense.weight\", \"bert.encoder.layer.0.intermediate.dense.bias\", \"bert.encoder.layer.0.output.dense.weight\", \"bert.encoder.layer.0.output.dense.bias\", \"bert.encoder.layer.0.output.LayerNorm.weight\", \"bert.encoder.layer.0.output.LayerNorm.bias\", \"bert.encoder.layer.1.attention.self.query.weight\", \"bert.encoder.layer.1.attention.self.query.bias\", \"bert.encoder.layer.1.attention.self.key.weight\", \"bert.encoder.layer.1.attention.self.key.bias\", \"bert.encoder.layer.1.attention.self.value.weight\", \"bert.encoder.layer.1.attention.self.value.bias\", \"bert.encoder.layer.1.attention.output.dense.weight\", \"bert.encoder.layer.1.attention.output.dense.bias\", \"bert.encoder.layer.1.attention.output.LayerNorm.weight\", \"bert.encoder.layer.1.attention.output.LayerNorm.bias\", \"bert.encoder.layer.1.intermediate.dense.weight\", \"bert.encoder.layer.1.intermediate.dense.bias\", \"bert.encoder.layer.1.output.dense.weight\", \"bert.encoder.layer.1.output.dense.bias\", \"bert.encoder.layer.1.output.LayerNorm.weight\", \"bert.encoder.layer.1.output.LayerNorm.bias\", \"bert.encoder.layer\r\n```\r\n\r\nEvery parameter name that you saved has a \"bert\" prefix to it, because when you defined your `BertClassifier`, you probably defined the `BertModel` inside it using `self.bert = BertModel.from_pretrained(\"...\")`. \r\n\r\nSo of course, you can't load it back into a `BertModel`, without first removing the \"bert\" prefix from all parameter names. Do you understand? You should however be able to directly load the weights into a `BertClassifier`.",
"Oh ok, so basically i have to remove the bert. prefix? And how would i have to do this ?",
"Is there a reason you don't want to load your weights into a `BertClassifier`, but only the `BertModel`? Because this:\r\n\r\n```\r\nmodel = BertClassifier(freeze_bert=False)\r\nmodel.load_state_dict(torch.load('finetuned_model.pt')))\r\n```\r\nshould work. In case you only want to have a `BertModel`, then you'll need to remove the \"bert\" prefix from the parameter names. This can be done as follows:\r\n\r\n```\r\nfrom transformers import BertModel, BertConfig\r\n\r\nmodel = BertClassifier(freeze_bert=False)\r\nmodel.load_state_dict(torch.load('finetuned_model.pt'))\r\n\r\nnew_state_dict = dict()\r\nfor name, param in model.state_dict().items():\r\n name = name[4:]\r\n new_state_dict[name] = param\r\n\r\nconfig = BertConfig.from_pretrained('bert-base-multilingual-uncased', num_labels=2)\r\nmodel = BertModel.from_pretrained('bert-base-multilingual-uncased')\r\n\r\nfor name, param in model.state_dict().items():\r\n model.state_dict()[name].copy_(new_state_dict[name])\r\n```\r\n\r\n\r\n",
"So the thing is, i trained and defined the BertClassifier in a .py file and in another .py file i want to use the fine-tuned model on user input data. If I do 'model = BertClassifier(freeze_bert=False)' i will have to import ` from bert import BertClassifier` and when i run the code, it starts the training of the model again... \r\n**Update:** So if i use the above code and after that torch.save the model.state_dict() and will it be the same finetuned model? will i have the same accuracy? ",
"If i just take the definition of BertClassifier class in the .py file where I want to test the model on user input and i do the following :\r\n```\r\nmodel = BertClassifier(freeze_bert=False)\r\nmodel.load_state_dict(torch.load('finetuned_model.pt'))\r\n```\r\nWill that be a workaround? As i told before, if I just import the BertClassifier from the other .py file where i trained it, on this file it will start the training again). Doing the above i see that doesn't start the training and `print(model.state_dict().keys())` returns `\"bert.embeddings.position_ids\", \"bert.embeddings.word_embeddings.weight\"` which should work fine if I'm not wrong. ",
"It's weird that when you import the model, it starts the training again. You only need to import the definition of the model, not the training related code.\r\n\r\n```\r\nmodel = BertClassifier(freeze_bert=False)\r\nmodel.load_state_dict(torch.load('finetuned_model.pt'))\r\n```\r\n\r\nThis should work indeed. ",
"Many thanks for the help!"
] | 1,621 | 1,621 | 1,621 | NONE | null | Hello, I’m having issues loading a finetuned BERT model for binary classification. I have this class for the BERT model:
```
class BertClassifier(nn.Module):
def __init__(self, freeze_bert=False):
super(BertClassifier, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased')
self.lstm = nn.LSTM(768, 50, batch_first=True, bidirectional=True)
self.linear = nn.Linear(50*2 , 2)
if freeze_bert:
for param in self.bert.parameters():
param.requires_grad = False
def forward(self, input_ids, attention_mask):
outputs = self.bert(input_ids=input_ids,attention_mask=attention_mask)
sequence_output = outputs[0]
sequence_output, _ = self.lstm(sequence_output)
linear_output = self.linear(sequence_output[:, -1])
return linear_output
```
The model is `bert_classifier = BertClassifier(freeze_bert=False)`
I save the model by the below line:
`torch.save(bert_classifier.state_dict(), 'finetuned_model.pt')`
Then in another .py file i want to load the model and i have the below code:
```
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True)
model.load_state_dict(torch.load('finetuned_model.pt'))
```
When I run the code, i receive the below error on this line `model.load_state_dict(torch.load('finetuned_model.pt'))` :
```
RuntimeError: Error(s) in loading state_dict for BertModel:
Missing key(s) in state_dict: "embeddings.position_ids", "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight", "encoder.layer.0.attention.self.query.bias", "encoder.layer.0.attention.self.key.weight", "encoder.layer.0.attention.self.key.bias", "encoder.layer.0.attention.self.value.weight", "encoder.layer.0.attention.self.value.bias", "encoder.layer.0.attention.output.dense.weight", "encoder.layer.0.attention.output.dense.bias", "encoder.layer.0.attention.output.LayerNorm.weight", "encoder.layer.0.attention.output.LayerNorm.bias", "encoder.layer.0.intermediate.dense.weight", "encoder.layer.0.intermediate.dense.bias", "encoder.layer.0.output.dense.weight", "encoder.layer.0.output.dense.bias", "encoder.layer.0.output.LayerNorm.weight", "encoder.layer.0.output.LayerNorm.bias", "encoder.layer.1.attention.self.query.weight", "encoder.layer.1.attention.self.query.bias", "encoder.layer.1.attention.self.key.weight", "encoder.layer.1.attention.self.key.bias", "encoder.layer.1.attention.self.value.weight", "encoder.layer.1.attention.self.value.bias", "encoder.layer.1.attention.output.dense.weight", "encoder.layer.1.attention.output.dense.bias", "encoder.layer.1.attention.output.LayerNorm.weight", "encoder.layer.1.attention.output.LayerNorm.bias", "encoder.layer.1.intermediate.dense.weight", "encoder.layer.1.intermediate.dense.bias", "encoder.layer.1.output.dense.weight", "encoder.layer.1.output.dense.bias", "encoder.layer.1.output.LayerNorm.weight", "encoder.layer.1.output.LayerNorm.bias", "encoder.layer.2.attention.self.query.weight", "encoder.layer.2.attention.self.query.bias", "encoder.layer.2.attention.self.key.weight", "encoder.layer.2.attention.self.key.bias", "encoder.layer.2.attention.self.value.weight", "encoder.layer.2.attention.self.value.bias", "encoder.layer.2.attention.output.dense.weight", "encoder.layer.2.attention.output.dense.bias", "encoder.layer.2.attention.output.LayerNorm.weight", "encoder.layer.2.attention.output.LayerNorm.bias", "encoder.layer.2.intermediate.dense.weight", "encoder.layer.2.intermediate.dense.bias", "encoder.layer.2.output.dense.weight", "encoder.layer.2.output.dense.bias", "encoder.layer.2.output.LayerNorm.weight", "encoder.layer.2.output.LayerNorm.bias", "encoder.layer.3.attention.self.query.weight", "encoder.layer.3.attention.self.query.bias", "encoder.layer.3.attention.self.key.weight", "encoder.layer.3.attention.self.key.bias", "encoder.layer.3.attention.self.value.weight", "encoder.layer.3.attention.self.value.bias", "encoder.layer.3.attention.output.dense.weight", "encoder.layer.3.attention.output.dense.bias", "encoder.layer.3.attention.output.LayerNorm.weight", "encoder.layer.3.attention.output.LayerNorm.bias", "encoder.layer.3.intermediate.dense.weight", "encoder.layer.3.intermediate.dense.bias", "encoder.layer.3.output.dense.weight", "encoder.layer.3.output.dense.bias", "encoder.layer.3.output.LayerNorm.weight", "encoder.layer.3.output.LayerNorm.bias", "encoder.layer.4.attention.self.query.weight", "encoder.layer.4.attention.self.query.bias", "encoder.layer.4.attention.self.key.weight", "encoder.layer.4.attention.self.key.bias", "encoder.layer.4.attention.self.value.weight", "encoder.layer.4.attention.self.value.bias", "encoder.layer.4.attention.output.dense.weight", "encoder.layer.4.attention.output.dense.bias", "encoder.layer.4.attention.output.LayerNorm.weight", "encoder.layer.4.attention.output.LayerNorm.bias", "encoder.layer.4.intermediate.dense.weight", "encoder.layer.4.intermediate.dense.bias", "encoder.layer.4.output.dense.weight", "encoder.layer.4.output.dense.bias", "encoder.layer.4.output.LayerNorm.weight", "encoder.layer.4.output.LayerNorm.bias", "encoder.layer.5.attention.self.query.weight", "encoder.layer.5.attention.self.query.bias", "encoder.layer.5.attention.self.key.weight", "encoder.layer.5.attention.self.key.bias", "encoder.layer.5.attention.self.value.weight", "encoder.layer.5.attention.self.value.bias", "encoder.layer.5.attention.output.dense.weight", "encoder.layer.5.attention.output.dense.bias", "encoder.layer.5.attention.output.LayerNorm.weight", "encoder.layer.5.attention.output.LayerNorm.bias", "encoder.layer.5.intermediate.dense.weight", "encoder.layer.5.intermediate.dense.bias", "encoder.layer.5.output.dense.weight", "encoder.layer.5.output.dense.bias", "encoder.layer.5.output.LayerNorm.weight", "encoder.layer.5.output.LayerNorm.bias", "encoder.layer.6.attention.self.query.weight", "encoder.layer.6.attention.self.query.bias", "encoder.layer.6.attention.self.key.weight", "encoder.layer.6.attention.self.key.bias", "encoder.layer.6.attention.self.value.weight", "encoder.layer.6.attention.self.value.bias", "encoder.layer.6.attention.output.dense.weight", "encoder.layer.6.attention.output.dense.bias", "encoder.layer.6.attention.output.LayerNorm.weight", "encoder.layer.6.attention.output.LayerNorm.bias", "encoder.layer.6.intermediate.dense.weight", "encoder.layer.6.intermediate.dense.bias", "encoder.layer.6.output.dense.weight", "encoder.layer.6.output.dense.bias", "encoder.layer.6.output.LayerNorm.weight", "encoder.layer.6.output.LayerNorm.bias", "encoder.layer.7.attention.self.query.weight", "encoder.layer.7.attention.self.query.bias", "encoder.layer.7.attention.self.key.weight", "encoder.layer.7.attention.self.key.bias", "encoder.layer.7.attention.self.value.weight", "encoder.layer.7.attention.self.value.bias", "encoder.layer.7.attention.output.dense.weight", "encoder.layer.7.attention.output.dense.bias", "encoder.layer.7.attention.output.LayerNorm.weight", "encoder.layer.7.attention.output.LayerNorm.bias", "encoder.layer.7.intermediate.dense.weight", "encoder.layer.7.intermediate.dense.bias", "encoder.layer.7.output.dense.weight", "encoder.layer.7.output.dense.bias", "encoder.layer.7.output.LayerNorm.weight", "encoder.layer.7.output.LayerNorm.bias", "encoder.layer.8.attention.self.query.weight", "encoder.layer.8.attention.self.query.bias", "encoder.layer.8.attention.self.key.weight", "encoder.layer.8.attention.self.key.bias", "encoder.layer.8.attention.self.value.weight", "encoder.layer.8.attention.self.value.bias", "encoder.layer.8.attention.output.dense.weight", "encoder.layer.8.attention.output.dense.bias", "encoder.layer.8.attention.output.LayerNorm.weight", "encoder.layer.8.attention.output.LayerNorm.bias", "encoder.layer.8.intermediate.dense.weight", "encoder.layer.8.intermediate.dense.bias", "encoder.layer.8.output.dense.weight", "encoder.layer.8.output.dense.bias", "encoder.layer.8.output.LayerNorm.weight", "encoder.layer.8.output.LayerNorm.bias", "encoder.layer.9.attention.self.query.weight", "encoder.layer.9.attention.self.query.bias", "encoder.layer.9.attention.self.key.weight", "encoder.layer.9.attention.self.key.bias", "encoder.layer.9.attention.self.value.weight", "encoder.layer.9.attention.self.value.bias", "encoder.layer.9.attention.output.dense.weight", "encoder.layer.9.attention.output.dense.bias", "encoder.layer.9.attention.output.LayerNorm.weight", "encoder.layer.9.attention.output.LayerNorm.bias", "encoder.layer.9.intermediate.dense.weight", "encoder.layer.9.intermediate.dense.bias", "encoder.layer.9.output.dense.weight", "encoder.layer.9.output.dense.bias", "encoder.layer.9.output.LayerNorm.weight", "encoder.layer.9.output.LayerNorm.bias", "encoder.layer.10.attention.self.query.weight", "encoder.layer.10.attention.self.query.bias", "encoder.layer.10.attention.self.key.weight", "encoder.layer.10.attention.self.key.bias", "encoder.layer.10.attention.self.value.weight", "encoder.layer.10.attention.self.value.bias", "encoder.layer.10.attention.output.dense.weight", "encoder.layer.10.attention.output.dense.bias", "encoder.layer.10.attention.output.LayerNorm.weight", "encoder.layer.10.attention.output.LayerNorm.bias", "encoder.layer.10.intermediate.dense.weight", "encoder.layer.10.intermediate.dense.bias", "encoder.layer.10.output.dense.weight", "encoder.layer.10.output.dense.bias", "encoder.layer.10.output.LayerNorm.weight", "encoder.layer.10.output.LayerNorm.bias", "encoder.layer.11.attention.self.query.weight", "encoder.layer.11.attention.self.query.bias", "encoder.layer.11.attention.self.key.weight", "encoder.layer.11.attention.self.key.bias", "encoder.layer.11.attention.self.value.weight", "encoder.layer.11.attention.self.value.bias", "encoder.layer.11.attention.output.dense.weight", "encoder.layer.11.attention.output.dense.bias", "encoder.layer.11.attention.output.LayerNorm.weight", "encoder.layer.11.attention.output.LayerNorm.bias", "encoder.layer.11.intermediate.dense.weight", "encoder.layer.11.intermediate.dense.bias", "encoder.layer.11.output.dense.weight", "encoder.layer.11.output.dense.bias", "encoder.layer.11.output.LayerNorm.weight", "encoder.layer.11.output.LayerNorm.bias", "pooler.dense.weight", "pooler.dense.bias".
Unexpected key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer
```
I’ve tried to modify the save part to model.save_pretrained('finetuned_model.pt') but i received an error saying that save_pretrained function doesn’t exist in the model that i defined.
I also tried to save it with `torch.save(bert_classifier.state_dict(), 'finetuned_model.pt') `and load it with` model = BertModel.from_pretrained('finetuned_model.pt') `but i receive the error : `UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte`
And I also tried to save it like this `torch.save(bert_classifier.state_dict(), 'model/finetuned_model.bin')` and load it like this:
```
config = BertConfig.from_pretrained('bert-base-multilingual-uncased', num_labels=2)
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
model.load_state_dict(torch.load("model/finetuned_model.bin"))
```
and received the same big error as above. Any idea how can this be fixed so i can save and load the model successfully ? Any help will be much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11843/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11842/comments | https://api.github.com/repos/huggingface/transformers/issues/11842/events | https://github.com/huggingface/transformers/pull/11842 | 899,530,303 | MDExOlB1bGxSZXF1ZXN0NjUxMjE5NTg4 | 11,842 | Fix bug in Masked Language Modeling example scripts (#11840)) | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `line_by_line=False` argument is not supposed to be used for BERT-like pretraining objectives, it it there to do GPT-like pretraining. Maybe it does not make sense to have it in `run_mlm` at all.\r\nIn any case this fix will not necessarily work for all models supported by the script, as the special tokens may be slightly different than what is hard-coded.",
"#10737 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
when `data_args.line_by_line == False`, the script firstly converts given examples into input_ids, token_type_ids, attention_mask and special_tokens_mask including cls_token, sep_token. Then it concatenates all tokenized outputs and generate chunks of max_seq_length. However, it will generate unintended training examples such as [871, 512, 2492, 1111, 947, 533] not [2 (cls_token), 512, 2492, 1111, 947, 3 (sep_token)]. I fix this problem.
Fixes #11840
@sgugger, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11842",
"html_url": "https://github.com/huggingface/transformers/pull/11842",
"diff_url": "https://github.com/huggingface/transformers/pull/11842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11842.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11841/comments | https://api.github.com/repos/huggingface/transformers/issues/11841/events | https://github.com/huggingface/transformers/issues/11841 | 899,522,561 | MDU6SXNzdWU4OTk1MjI1NjE= | 11,841 | Generate Function call throughs error when "inputs_embeds" argument passed | {
"login": "abhikasd6523",
"id": 24733033,
"node_id": "MDQ6VXNlcjI0NzMzMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/24733033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhikasd6523",
"html_url": "https://github.com/abhikasd6523",
"followers_url": "https://api.github.com/users/abhikasd6523/followers",
"following_url": "https://api.github.com/users/abhikasd6523/following{/other_user}",
"gists_url": "https://api.github.com/users/abhikasd6523/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhikasd6523/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhikasd6523/subscriptions",
"organizations_url": "https://api.github.com/users/abhikasd6523/orgs",
"repos_url": "https://api.github.com/users/abhikasd6523/repos",
"events_url": "https://api.github.com/users/abhikasd6523/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhikasd6523/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @abhikasd6523,\r\n\r\nI don't think that `generate()` currently supports `inputs_embeds` correctly. It would require quite some changes in `generate()` to make it work I'm afraid. Can you give me some more background on your use-case for passing `inputs_embeds` instead of `input_ids`. If it's a general enough use-case, I think we could try to make the required changes to `generate()`",
"Thanks a lot for replying.\r\nI am trying to connect a custom encoder to the GPT2 model and would want to pass the vectored last layer values of the Encoder as input embeds. The goal is to generate a random sentence with this type of connection and architecture.",
"I see - did you try to directly use the `sample(...)` method? Think that this could work",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,626 | 1,626 | NONE | null | When using `inputs_embeds` as the argument instead of 'input_ids' while trying to generate text with GPT2 model, an error pops up about `input_ids`.
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config
import transformers
import torch
import torch.nn as nn
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
some_random_texts = "This is a nice place to eat"
tokenized_text = tokenizer.encode(some_random_texts, return_tensors='pt')
tokenized_text_embeds = model.transformer.wte(tokenized_text)
output = model.generate(inputs_embeds=tokenized_text_embeds, max_length=50)
```
The error generated is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-643fe4803ba4> in <module>
12 tokenized_text_embeds = model.transformer.wte(tokenized_text)
13
---> 14 output = model.generate(inputs_embeds=tokenized_text_embeds, max_length=50)
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
891 # init `attention_mask` depending on `pad_token_id`
892 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
--> 893 input_ids, pad_token_id, eos_token_id
894 )
895
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in _prepare_attention_mask_for_generation(self, input_ids, pad_token_id, eos_token_id)
401 if is_pad_token_in_inputs_ids and is_pad_token_not_equal_to_eos_token_id:
402 return input_ids.ne(pad_token_id).long()
--> 403 return input_ids.new_ones(input_ids.shape, dtype=torch.long)
404
405 def _prepare_encoder_decoder_kwargs_for_generation(
AttributeError: 'NoneType' object has no attribute 'new_ones'
```
While solving this issue, I find that 'attention_mask' argument needs to be included too, to avoid this `self._prepare_attention_mask_for_generation` function being called altogether, another error pops ups.
The changes made are as follows:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config
import transformers
import torch
import torch.nn as nn
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
some_random_texts = "This is a nice place to eat"
tokenized_text = tokenizer.encode(some_random_texts, return_tensors='pt')
tokenized_text_embeds = model.transformer.wte(tokenized_text)
att_mask = torch.ones(tokenized_text.shape[1])
output = model.generate(inputs_embeds=tokenized_text_embeds, attention_mask=att_mask, max_length=50)
```
and the error that pops up now is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-17cdf18aa52b> in <module>
14 att_mask = torch.ones(tokenized_text.shape[1])
15
---> 16 output = model.generate(inputs_embeds=tokenized_text_embeds, attention_mask=att_mask, max_length=50)
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
917 raise ValueError("Make sure that `model_kwargs` include `encoder_outputs` of type `ModelOutput`.")
918
--> 919 if input_ids.shape[-1] >= max_length:
920 input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
921 logger.warning(
AttributeError: 'NoneType' object has no attribute 'shape'
```
Am I doing something wrong here or is there is a bug in the code of `generation_utils.GenerationMixin.generate()`?
Version control:
transformers: 4.6.1
torch: 1.7.1
python: 3.7.4
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11840/comments | https://api.github.com/repos/huggingface/transformers/issues/11840/events | https://github.com/huggingface/transformers/issues/11840 | 899,495,487 | MDU6SXNzdWU4OTk0OTU0ODc= | 11,840 | Bug in MLM example scripts | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | when `data_args.line_by_line == False`, the script firstly converts given examples into input_ids, token_type_ids, attention_mask and special_tokens_mask **including cls_token, sep_token**. Then it concatenates all tokenized outputs and generate chunks of max_seq_length. However, it will generate unintended training examples such as [871, 512, 2492, 1111, 947, 533] not [2 (cls_token), 512, 2492, 1111, 947, 3 (sep_token)].
```python
if data_args.line_by_line:
# When using line_by_line, we just tokenize each nonempty line.
padding = "max_length" if data_args.pad_to_max_length else False
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
return tokenizer(
examples["text"],
padding=padding,
truncation=True,
max_length=max_seq_length,
# We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
# receives the `special_tokens_mask`.
return_special_tokens_mask=True,
)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
)
else:
# Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts.
# We use `return_special_tokens_mask=True` because DataCollatorForLanguageModeling (see below) is more
# efficient when it receives the `special_tokens_mask`.
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of
# max_seq_length.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a
# remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=not data_args.overwrite_cache,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11839/comments | https://api.github.com/repos/huggingface/transformers/issues/11839/events | https://github.com/huggingface/transformers/pull/11839 | 899,489,700 | MDExOlB1bGxSZXF1ZXN0NjUxMTg0NTM4 | 11,839 | [Flax] Fix PyTorch import error | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Running `run_mlm_flax.py` should not have to rely on a PyTorch import. Thanks for spotting this error @marcvanzee !
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11839",
"html_url": "https://github.com/huggingface/transformers/pull/11839",
"diff_url": "https://github.com/huggingface/transformers/pull/11839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11839.patch",
"merged_at": 1621849271000
} |
https://api.github.com/repos/huggingface/transformers/issues/11838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11838/comments | https://api.github.com/repos/huggingface/transformers/issues/11838/events | https://github.com/huggingface/transformers/issues/11838 | 899,458,036 | MDU6SXNzdWU4OTk0NTgwMzY= | 11,838 | Is 10% in annotation different from 0.5 in code? | {
"login": "aixuedegege",
"id": 19356707,
"node_id": "MDQ6VXNlcjE5MzU2NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19356707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aixuedegege",
"html_url": "https://github.com/aixuedegege",
"followers_url": "https://api.github.com/users/aixuedegege/followers",
"following_url": "https://api.github.com/users/aixuedegege/following{/other_user}",
"gists_url": "https://api.github.com/users/aixuedegege/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aixuedegege/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aixuedegege/subscriptions",
"organizations_url": "https://api.github.com/users/aixuedegege/orgs",
"repos_url": "https://api.github.com/users/aixuedegege/repos",
"events_url": "https://api.github.com/users/aixuedegege/events{/privacy}",
"received_events_url": "https://api.github.com/users/aixuedegege/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | ERROR: type should be string, got "https://github.com/huggingface/transformers/blob/0cbddfb190ab9b05b6575fbf818aae17bad4d24a/src/transformers/data/data_collator.py#L387\r\n\r\n```python\r\n # 10% of the time, we replace masked input tokens with random word\r\n indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced\r\n random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)\r\n inputs[indices_random] = random_words[indices_random]\r\n```\r\n\r\n" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11837/comments | https://api.github.com/repos/huggingface/transformers/issues/11837/events | https://github.com/huggingface/transformers/issues/11837 | 899,317,558 | MDU6SXNzdWU4OTkzMTc1NTg= | 11,837 | Module torch has no attribute minimum for modeling_big_bird.py | {
"login": "robinsongh381",
"id": 42966248,
"node_id": "MDQ6VXNlcjQyOTY2MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42966248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robinsongh381",
"html_url": "https://github.com/robinsongh381",
"followers_url": "https://api.github.com/users/robinsongh381/followers",
"following_url": "https://api.github.com/users/robinsongh381/following{/other_user}",
"gists_url": "https://api.github.com/users/robinsongh381/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robinsongh381/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robinsongh381/subscriptions",
"organizations_url": "https://api.github.com/users/robinsongh381/orgs",
"repos_url": "https://api.github.com/users/robinsongh381/repos",
"events_url": "https://api.github.com/users/robinsongh381/events{/privacy}",
"received_events_url": "https://api.github.com/users/robinsongh381/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`torch.minimum` was only added in August 2020 to PyTorch, so `torch.minimum` is probably only part of torch 1.7+. To work for previous versions, it should indeed be replaced by `torch.min`.\r\n\r\nThe README of this repository states that: \"This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.\" \r\n\r\ncc @vasudevgupta7",
"This bug is making it impossible to use BigBird in combination with AWS HuggingFace setup as that one is currently restricted to Pytorch 1.6. (https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) \r\n\r\nAre there any plans to fix the modeling_big_bird.py so that it is backward compatible or agree with AWS on support of never version of Pytorch for HuggingFace containers?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,626 | 1,626 | NONE | null | Hello
I came across `module 'torch' has no attribute minimum` from the following two lines
1.https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/big_bird/modeling_big_bird.py#L662
2.https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/big_bird/modeling_big_bird.py#L796
I think `torch.minimum` should be replaced with `torch.min` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11837/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11837/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11836/comments | https://api.github.com/repos/huggingface/transformers/issues/11836/events | https://github.com/huggingface/transformers/issues/11836 | 899,314,132 | MDU6SXNzdWU4OTkzMTQxMzI= | 11,836 | Not able to fine tune language model | {
"login": "ghoshmithun",
"id": 32670037,
"node_id": "MDQ6VXNlcjMyNjcwMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/32670037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghoshmithun",
"html_url": "https://github.com/ghoshmithun",
"followers_url": "https://api.github.com/users/ghoshmithun/followers",
"following_url": "https://api.github.com/users/ghoshmithun/following{/other_user}",
"gists_url": "https://api.github.com/users/ghoshmithun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghoshmithun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghoshmithun/subscriptions",
"organizations_url": "https://api.github.com/users/ghoshmithun/orgs",
"repos_url": "https://api.github.com/users/ghoshmithun/repos",
"events_url": "https://api.github.com/users/ghoshmithun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghoshmithun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @ghoshmithun, \r\nWhen you are using the `examples/` / \"use sagemaker\" with custom data on s3 `huggingface_estimator.fit({'train':'s3://train-data-gpt/'})`. \r\nYou need to provide the hyperparameter `train_file` with the path to your file from s3. In your case, this would be `/opt/ml/input/data/train/my_train_file.csv`. \r\n[reference to `train_file` parameter defined in the `run_mlm.py`](https://github.com/huggingface/transformers/blob/6da129cb3152d93c425aab08a92d68c99e09d252/examples/pytorch/language-modeling/run_mlm.py#L114) \r\n[documentation for language-modelling](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#language-model-training)\r\n\r\nP.S. Not sure if `masked language modeling` is the preferred task for `GPT-NEO`. I think it is`causal language modeling` as for `GPT-2`\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | I am trying to fine tune a language model using sagemaker huggingface API
I am using the code
```
import sagemaker
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_name_or_path':'EleutherAI/gpt-neo-1.3B',
'output_dir':'/opt/ml/model' ,
'data_dir': container_data_train,
'output_dir': container_model_dir
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.4.2/examples/language-modeling
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'}
container_data_train = '/opt/ml/input/data/training'
container_data_test = '/opt/ml/input/data/testing'
container_model_dir = '/opt/ml/model'
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_mlm.py',
source_dir='./examples/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.4.2',
pytorch_version='1.6.0',
py_version='py36',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit({'train':'s3://train-data-gpt/'})
```
The input data is there. It is a .txt file with english sentences in each line. However after any number of try I am getting error as
**File "run_mlm.py", line 170, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.**
However the training job was launched as here is the log:
```
Training Env:
{
"additional_framework_parameters": {},
"channel_input_dirs": {
"train": "/opt/ml/input/data/train"
},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"output_dir": "/opt/ml/model",
"model_name_or_path": "EleutherAI/gpt-neo-1.3B",
"data_dir": "/opt/ml/input/data/training"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"train": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "huggingface-pytorch-training-2021-05-24-06-11-02-967",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz",
"module_name": "run_mlm",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "run_mlm.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"data_dir":"/opt/ml/input/data/training","model_name_or_path":"EleutherAI/gpt-neo-1.3B","output_dir":"/opt/ml/model"}
SM_USER_ENTRY_POINT=run_mlm.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=["train"]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=run_mlm
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"train":"/opt/ml/input/data/train"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"data_dir":"/opt/ml/input/data/training","model_name_or_path":"EleutherAI/gpt-neo-1.3B","output_dir":"/opt/ml/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-pytorch-training-2021-05-24-06-11-02-967","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz","module_name":"run_mlm","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_mlm.py"}
SM_USER_ARGS=["--data_dir","/opt/ml/input/data/training","--model_name_or_path","EleutherAI/gpt-neo-1.3B","--output_dir","/opt/ml/model"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_CHANNEL_TRAIN=/opt/ml/input/data/train
SM_HP_OUTPUT_DIR=/opt/ml/model
SM_HP_MODEL_NAME_OR_PATH=EleutherAI/gpt-neo-1.3B
SM_HP_DATA_DIR=/opt/ml/input/data/training
PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
Invoking script with the following command:
/opt/conda/bin/python3.6 run_mlm.py --data_dir /opt/ml/input/data/training --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir /opt/ml/model
```
@philschmid @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11836/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11835/comments | https://api.github.com/repos/huggingface/transformers/issues/11835/events | https://github.com/huggingface/transformers/pull/11835 | 899,290,972 | MDExOlB1bGxSZXF1ZXN0NjUxMDA4NDc0 | 11,835 | Tiny fix in README.md of run_flax_mlm | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11835/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11835",
"html_url": "https://github.com/huggingface/transformers/pull/11835",
"diff_url": "https://github.com/huggingface/transformers/pull/11835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11835.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11834/comments | https://api.github.com/repos/huggingface/transformers/issues/11834/events | https://github.com/huggingface/transformers/issues/11834 | 899,186,063 | MDU6SXNzdWU4OTkxODYwNjM= | 11,834 | convert_pytorch_checkpoint_to_tf2.py AttributeError: embeddings.word_embeddings.weight not found in PyTorch model | {
"login": "ffaisal93",
"id": 22006050,
"node_id": "MDQ6VXNlcjIyMDA2MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/22006050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ffaisal93",
"html_url": "https://github.com/ffaisal93",
"followers_url": "https://api.github.com/users/ffaisal93/followers",
"following_url": "https://api.github.com/users/ffaisal93/following{/other_user}",
"gists_url": "https://api.github.com/users/ffaisal93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ffaisal93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ffaisal93/subscriptions",
"organizations_url": "https://api.github.com/users/ffaisal93/orgs",
"repos_url": "https://api.github.com/users/ffaisal93/repos",
"events_url": "https://api.github.com/users/ffaisal93/events{/privacy}",
"received_events_url": "https://api.github.com/users/ffaisal93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you trying to convert it a HuggingFace TensorFlow object? If so can you do the following?\r\n\r\n```\r\nfrom transformers import TFBertForPreTraining\r\n\r\nmodel = TFBertForPreTraining.from_pretrained(path_to_checkpoint, from_pt=True)\r\n```\r\n\r\nHow did you fine-tune your model? How did you save it? Did you train it using HuggingFace transformers? Can you load it back in a PyTorch object or is it failing too?",
"@LysandreJik thanks for the reply.\r\nthis was pretrained to do multilingual alignment and trained using pytorch-pretrained-bert. Example code snippet:\r\n```\r\ndef get_bert(bert_model, bert_do_lower_case): \r\n from pytorch_pretrained_bert import BertTokenizer, BertModel \r\n tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case = bert_do_lower_case) bert = \r\n BertModel.from_pretrained(bert_model) return tokenizer, bert\r\nclass WordLevelBert(nn.Module): \r\n\"\"\" Runs BERT on sentences but only keeps the last subword embedding for each word. \"\"\" \r\n def __init__(self, model, do_lower_case): \r\n super().__init__() \r\n self.bert_tokenizer, self.bert = get_bert(model, do_lower_case) \r\n self.dim = self.bert.pooler.dense.in_features \r\n self.max_len = self.bert.embeddings.position_embeddings.num_embeddings \r\n if use_cuda: \r\n self.cuda() \r\n def forward(self, sentences, include_clssep = True): \r\n batch_size = 128 \r\n ann_full = None \r\n for i in range(0, len(sentences), batch_size): \r\n ann = self.annotate(sentences[i:i+batch_size], include_clssep = include_clssep)\r\n .....\r\n``` \r\n\r\nand I saved it in the following manner after training:\r\n```\r\ntorch.save({'state_dict': model.state_dict(), 'trainer' : trainer.state_dict(),}, 'best_network.pt')\r\n```\r\n\r\nUpdate:\r\nI could get rid of the error by making start_prefix_to_remove=\"\" and by making pt_state_dict=pt_state_dict['state_dict'] in the file: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_pytorch_utils.py\r\n\r\nBut now I get this new error:\r\n```\r\n~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)\r\n 92 \r\n 93 return load_pytorch_weights_in_tf2_model(\r\n---> 94 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys\r\n 95 )\r\n 96 \r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)\r\n 169 continue\r\n 170 \r\n--> 171 raise AttributeError(\"{} not found in PyTorch model\".format(name))\r\n 172 \r\n 173 array = pt_state_dict[name].numpy()\r\n\r\nAttributeError: cls.seq_relationship.weight not found in PyTorch model\r\n\r\n```\r\n\r\nthe fine tuned state dict can be loaded fine by:\r\n```\r\nfrom pytorch_pretrained_bert import BertTokenizer, BertModel\r\nbert = BertModel.from_pretrained('bert-base-multilingual-cased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case = False)\r\nbert.load_state_dict(torch.load('best_network.pt')['state_dict'])\r\n```\r\nAnd I got similar type of error while finetuning and saving a tutorial notebook this one https://www.kaggle.com/eggwhites2705/transformers-multi-label-classification\r\n\r\n----------------------------------------------------------------------------------------------\r\n\r\nI use this code now and got a tf model:\r\n```\r\nfrom transformers import TFBertModel\r\nmodel = TFBertModel.from_pretrained(\"./demo_model\", from_pt=True)\r\nmodel.save(\"./demo_tf\")\r\n```\r\n\r\nI got a .pb model and variable files like .data and .index but no .meta file. My aim is to use this .data and .index file insted of the original bert initial checkpoint in the tydiqa code: https://github.com/google-research-datasets/tydiqa/tree/master/baseline\r\n\r\n\r\n",
"Thank you for clarifying! In this case, if you have a PyTorch model that correctly loads (we recommend always using `from_pretrained`/`save_pretrained` rather than `.save()` and `torch.load`) that you want to convert to the \"original\" TensorFlow, then you should be able to use this script: \r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py\r\n\r\nIt will convert your model to a TF1 model usable in Google's original repository.",
"@LysandreJik thanks a lot. This solved my issue. Can you give me an idea about one thing: when I saved the tensorflow checkpoint with TFBertModel.from_pretrained(\"./demo_model\", from_pt=True).save(..) the saved checkpoint was smaller in size like 411 mb. Today I got the right sized checkpoint with the script you suggested (711 mb)....why the size was smaller in previous approach?",
"I'm not entirely sure, but I guess it would make sense for the sizes to be different as the saving format is different between TFBertModel (TF2) and the TF1 saved checkpoint. Maybe our TF expert @Rocketknight1 has more insights :)",
"No idea, unfortunately! I don't **think** the format changed that massively between TF1 and TF2.\r\n\r\nOne things that strikes me is that 711mb for a bert-base model is quite large: With 110M parameters, we should expect it to take up about 110*4 = 440mb of space uncompressed, because each parameter is a 32-bit (4-byte) float. That said, if it works, don't question it™",
"THANKS @LysandreJik AND @Rocketknight1 . This is helpful. Yes it took 440 earlier but when I used the tf1 conversion script it took 711 mb. I will finetune tydiqa on both of these models and let's see, if there is any difference any performence. "
] | 1,621 | 1,626 | 1,626 | NONE | null | I am trying to convert a finetuned bert model to tensorflow. The model was finetuned using pytorch-pretrained-bert on bert-base-multilingual-cased. But I am getting the following error while trying to convert using the tuned checkpoint.
code:
```
from transformers import convert_pytorch_checkpoint_to_tf2
convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf("bert", "best_network.pt",
"bert-base-multilingual-cased",
"bert_aligned.ckpt")
```
error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-e0bc4c4758a1> in <module>
1 convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf("bert", "best_network.pt",
2 "bert-base-multilingual-cased",
----> 3 "bert_aligned.ckpt")
~/opt/anaconda3/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py in convert_pt_checkpoint_to_tf(model_type, pytorch_checkpoint_path, config_file, tf_dump_path, compare_with_pt_model, use_cached_models)
271 pytorch_checkpoint_path = cached_path(pytorch_checkpoint_url, force_download=not use_cached_models)
272 # Load PyTorch checkpoint in tf2 model:
--> 273 tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)
274
275 if compare_with_pt_model:
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
91
92 return load_pytorch_weights_in_tf2_model(
---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
94 )
95
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
166 continue
167
--> 168 raise AttributeError("{} not found in PyTorch model".format(name))
169
170 array = pt_state_dict[name].numpy()
AttributeError: embeddings.word_embeddings.weight not found in PyTorch model
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11833/comments | https://api.github.com/repos/huggingface/transformers/issues/11833/events | https://github.com/huggingface/transformers/issues/11833 | 899,127,189 | MDU6SXNzdWU4OTkxMjcxODk= | 11,833 | [BUG] Trainer predict bug under DDP model. | {
"login": "hijkzzz",
"id": 19810594,
"node_id": "MDQ6VXNlcjE5ODEwNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19810594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hijkzzz",
"html_url": "https://github.com/hijkzzz",
"followers_url": "https://api.github.com/users/hijkzzz/followers",
"following_url": "https://api.github.com/users/hijkzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/hijkzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hijkzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hijkzzz/subscriptions",
"organizations_url": "https://api.github.com/users/hijkzzz/orgs",
"repos_url": "https://api.github.com/users/hijkzzz/repos",
"events_url": "https://api.github.com/users/hijkzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hijkzzz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Without seeing the whole stack trace, your version of Transformers used (please follow the issue template!) or the code you are using to build your dataset, there is little we can do to help.",
"> Without seeing the whole stack trace, your version of Transformers used (please follow the issue template!) or the code you are using to build your dataset, there is little we can do to help.\r\n\r\nHi, the code is in https://colab.research.google.com/drive/1wmdjmU54iVSGXeJXzLO706uRzg2hlVZb?usp=sharing\r\n\r\nAt present, **I change the evaluate batch size to 1, and the prediction is successful. But it' s very slow.**\r\n\r\nNote that I trained the model offline, not in the colab. I think maybe `Transformers` should provide a api to specify parallel traning model (the defaut is nn.DataParallel, however ... the bugs).\r\n\r\n```\r\n# load the best model, batch_size = 1 (for DDP bug, batch_size=8 get a error)\r\ntraining_args2 = TrainingArguments(\r\n output_dir='./results', # output directory\r\n per_device_eval_batch_size=1, # batch size for evaluation\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=100,\r\n)\r\n\r\ntrainer2 = Trainer(\r\n model=trainer.model, # the instantiated Transformers model to be trained\r\n args=training_args2, # training arguments, defined above\r\n compute_metrics=compute_metrics\r\n)\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | ### Background
The model is trained with DDP.
The error denotes that the batch is smaller than the model required.
However, the test file cannot use `droplast`.
How I can predict the test file with DDP or remove DDP?
### Code
```
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=2, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=1e-2, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
evaluation_strategy='epoch',
gradient_accumulation_steps=4,
metric_for_best_model="f1",
fp16=True
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
compute_metrics=compute_metrics
)
trainer.train()
test_dataset = WNUTDataset(test_encodings)
predictions, labels, _ = trainer.predict(test_dataset)
predictions = np.argmax(predictions, axis=2)
```
### Bugs
```
~/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py in forward(ctx, target_device, dim, *inputs)
66 ctx.unsqueezed_scalar = False
67 ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
---> 68 return comm.gather(inputs, ctx.dim, ctx.target_device)
69
70 @staticmethod
~/anaconda3/lib/python3.7/site-packages/torch/cuda/comm.py in gather(tensors, dim, destination)
164 concatenating ``tensors`` along ``dim``.
165 """
--> 166 return torch._C._gather(tensors, dim, destination)
RuntimeError: Gather got an input of invalid size: got [1024, 7, 768], but expected [1024, 8, 768]
```
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11832/comments | https://api.github.com/repos/huggingface/transformers/issues/11832/events | https://github.com/huggingface/transformers/issues/11832 | 899,047,240 | MDU6SXNzdWU4OTkwNDcyNDA= | 11,832 | Seq2seq-based model running slowly on TPU | {
"login": "heraclex12",
"id": 13283488,
"node_id": "MDQ6VXNlcjEzMjgzNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13283488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heraclex12",
"html_url": "https://github.com/heraclex12",
"followers_url": "https://api.github.com/users/heraclex12/followers",
"following_url": "https://api.github.com/users/heraclex12/following{/other_user}",
"gists_url": "https://api.github.com/users/heraclex12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heraclex12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heraclex12/subscriptions",
"organizations_url": "https://api.github.com/users/heraclex12/orgs",
"repos_url": "https://api.github.com/users/heraclex12/repos",
"events_url": "https://api.github.com/users/heraclex12/events{/privacy}",
"received_events_url": "https://api.github.com/users/heraclex12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure if we ever tested whether those seq2seq models run correctly on TPU (@patil-suraj). It might be the case that a lot of the computations are dynamic and therefore are constantly re-compiled.",
"I realize that I haven't passed --pad_to_max_length to the script, which leads to our model running slowly. So I will close this issue. Thank you for your support and sorry about that.",
"I have tested T5 and Marian on colab TPU and they work well\r\n\r\n@heraclex12 you are right, on TPU we should always pass `--pad_to_max_length` to avoid XLA re-compilation, and ideally `max_length` should be multiple of 8."
] | 1,621 | 1,622 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v4.5.1 and newest version
- Platform: Colab
- Python version: 3.8
- PyTorch version (GPU?): 1.8.1
- TPU v2-8
### Who can help
@patil-suraj @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik @sgugger
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I tried to finetune mBART, T5, and MarianMTModel but all of them are running slowly on Colab TPUv2. I think the cause may come from Seq2seqTrainer since it works very well when I finetuned BERT on TPUv2 for MNLI Text classification.
The problem arises when using:
* The official example scripts: I use exactly the example parameter in README to train Translation model.
```
python xla_spawn.py --num_cores 8 \
seq2seq/run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir
```
I provide my [Colab notebook](https://colab.research.google.com/drive/1Y8kSbuZJ8ChIjgAf67F1cSciVaHfGTyq?usp=sharing)
The tasks I am working on is:
* an official GLUE/SQUaD task: WMT16
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Currently, with each step, the training duration increases many times. I want the duration is more stable and faster than GPU.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11831/comments | https://api.github.com/repos/huggingface/transformers/issues/11831/events | https://github.com/huggingface/transformers/issues/11831 | 898,941,236 | MDU6SXNzdWU4OTg5NDEyMzY= | 11,831 | [docs] XLnet reference link bug in description of past_index Parameter of TrainingArguments | {
"login": "Muktan",
"id": 31338369,
"node_id": "MDQ6VXNlcjMxMzM4MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/31338369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muktan",
"html_url": "https://github.com/Muktan",
"followers_url": "https://api.github.com/users/Muktan/followers",
"following_url": "https://api.github.com/users/Muktan/following{/other_user}",
"gists_url": "https://api.github.com/users/Muktan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muktan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muktan/subscriptions",
"organizations_url": "https://api.github.com/users/Muktan/orgs",
"repos_url": "https://api.github.com/users/Muktan/repos",
"events_url": "https://api.github.com/users/Muktan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muktan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging! Should be fixed by the PR linked above!"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | XLnet reference link bug in description of past_index Parameter of TrainingArguments
link to the doc: https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
**current description:** Some models like TransformerXL or **:doc`XLNet <../model_doc/xlnet>`** can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
**expected description:** Some models like TransformerXL or **XLNet** can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
## Environment info
Not required
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11831/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11830/comments | https://api.github.com/repos/huggingface/transformers/issues/11830/events | https://github.com/huggingface/transformers/issues/11830 | 898,935,181 | MDU6SXNzdWU4OTg5MzUxODE= | 11,830 | Delete key or set to `None` in __getstate__ impl. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmm - thinking about it - the value is also set to None by default in the constructor. I will close this..."
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | Hi,
there are some places that implement `__getstate__` because an object has a reference to an other object that is not pickable.
`__getstate__` then "deletes" the reference by setting it to `None`. Just a few examples:
https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/m2m_100/tokenization_m2m_100.py#L272
https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/marian/tokenization_marian.py#L299
IMO it would be better to delete the keys instead of setting them to `None`. Like this: `del state["sp_model"]`
What do you think @sgugger @LysandreJik ? I can provide a PR if wanted. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11830/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11829/comments | https://api.github.com/repos/huggingface/transformers/issues/11829/events | https://github.com/huggingface/transformers/issues/11829 | 898,918,722 | MDU6SXNzdWU4OTg5MTg3MjI= | 11,829 | [AutomaticSpeechRecognitionPipeline] CUDA support | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure, feel free to open a PR! Thanks @francescorubbo "
] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-4.15.0-106-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- pipelines: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Wav2Vec2
The problem arises when using:
* [ X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [ X ] Automatic Speech Recognition
## To reproduce
Steps to reproduce the behavior:
1. Instantiate AutomaticSpeechRecognitionPipeline with device set to GPU
2. Run pipeline inference on example audio input
```
import transformers
model = transformers.Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
pl = transformers.AutomaticSpeechRecognitionPipeline(feature_extractor=feature_extractor, model=model, tokenizer=tokenizer, framework='pt',device=0)
pl('waveform.wav')
```
The snippet above results in the following error:
`RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same`
## Expected behavior
Inputs should be converted to CUDA tensor.
I believe this is happening because the feature extractor doesn't preserve the device.
I'm able to solve the issue if I add
`processed = self.ensure_tensor_on_device(**processed)`
after https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L136-L138
If this solution is acceptable, I'm happy to open a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11828/comments | https://api.github.com/repos/huggingface/transformers/issues/11828/events | https://github.com/huggingface/transformers/issues/11828 | 898,869,591 | MDU6SXNzdWU4OTg4Njk1OTE= | 11,828 | possible bug in `TokenizerFast` when setting `return_offset_mapping=True` | {
"login": "YiweiJiang2015",
"id": 36023486,
"node_id": "MDQ6VXNlcjM2MDIzNDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36023486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YiweiJiang2015",
"html_url": "https://github.com/YiweiJiang2015",
"followers_url": "https://api.github.com/users/YiweiJiang2015/followers",
"following_url": "https://api.github.com/users/YiweiJiang2015/following{/other_user}",
"gists_url": "https://api.github.com/users/YiweiJiang2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YiweiJiang2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YiweiJiang2015/subscriptions",
"organizations_url": "https://api.github.com/users/YiweiJiang2015/orgs",
"repos_url": "https://api.github.com/users/YiweiJiang2015/repos",
"events_url": "https://api.github.com/users/YiweiJiang2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/YiweiJiang2015/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, the model does not accept the `offset_mapping`, and does not need them for anything; so when using the standard BERT model, make sure you don't feed this value to the model.\r\n\r\nIf you're making a custom BERT model that accepts offset mappings, then you should also update the signature to handle them!",
"Ok. Thanks for reminding!"
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1, 4.6.1
- `tokenizers` version 0.10.2
- Platform: Linux/Ubuntu 18.04
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @n1t0, @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I am building a model (using Bert) which needs the `offsets` between tokens and original words. However, if I set `return_offset_mapping=True` in `BertTokenizerFast`, the returned encodings are not accepted by the model. Is this a bug or is it intended behavior?
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
The following code snippet reproduces the problem for me:
```python
from transformers import BertTokenizerFast, BertModel
if __name__ == '__main__':
test_string = 'text with percentage%'
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
encodings = tokenizer(test_string, return_offsets_mapping=True, return_tensors='pt')
print(encodings.keys())
model = BertModel.from_pretrained('bert-base-uncased')
out = model(**encodings)
```
I got the following error trace showing that `BertModel.forward()` does not accept `offset_mapping` which is included in the dict of `encodings`:
```
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping'])
Traceback (most recent call last):
File "~/trans_test.py", line 9, in <module>
out = model(**tokens)
File "~/miniconda3/envs/tf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'offset_mapping'
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
At least, `Model.forward()` is supposed to have a key-arg placeholder for `offset_mapping`. For now, a workaround solution is to pop out `offset_mapping` from `encodings` before feeding the model. :(
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11828/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11827/comments | https://api.github.com/repos/huggingface/transformers/issues/11827/events | https://github.com/huggingface/transformers/issues/11827 | 898,843,738 | MDU6SXNzdWU4OTg4NDM3Mzg= | 11,827 | My modified `run_glue.py` works well with v4.1.1 but not good with v4.6.0 | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should upgrade to 4.6.1, I think this is related to a bug fixed by #11785. Let us know if this doesn't solve your problem!",
"@sgugger \r\nThank you for telling me the information!\r\nI upgraded to 4.6.1 and tried running the script again, and now got the expected (or even better) result! \r\n(I expected to reproduce the result of 4.1.1 because I used the same hyperparameters, but the result was better than that.)\r\n\r\nThank you again!\r\n"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
`The environment in which the script doesn't work well`
- `transformers` version: 4.6.0
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
`The environment in which the script works well`
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- bert: @LysandreJik
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): bert-base-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here are some modified parts from the official `run_glue.py` example.
The version of the `run_glue.py` was for v4.1.1.
https://github.com/huggingface/transformers/blob/v4.1.1/examples/text-classification/run_glue.py
``` python
# Preprocessing the datasets
if data_args.task_name is not None:
sentence1_key, sentence2_key = task_to_keys[data_args.task_name]
else:
# Again, we try to have some nice defaults but don't hesitate to tweak to your use case.
non_label_column_names = [name for name in datasets["train"].column_names if name != "label"]
if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names:
sentence1_key, sentence2_key = "sentence1", "sentence2"
else:
if len(non_label_column_names) >= 2:
sentence1_key, sentence2_key = non_label_column_names[:2]
if sentence2_key == "id" or sentence2_key == "idx":
sentence2_key = None
else:
sentence1_key, sentence2_key = non_label_column_names[0], None
print(f"sentence1_key {sentence1_key}")
print(f"sentence2_key {sentence2_key}")
```
``` python
train_dataset = datasets["train"]
eval_dataset = datasets["validation_matched" if data_args.task_name == "mnli" else "validation"]
# if data_args.task_name is not None:
# test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# Get the metric function
if data_args.task_name is not None:
metric = load_metric("glue", data_args.task_name)
# TODO: When datasets metrics include regular accuracy, make an else here and remove special branch from
# compute_metrics
```
``` python
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
if data_args.task_name is not None:
result = metric.compute(predictions=preds, references=p.label_ids)
if len(result) > 1:
result["combined_score"] = np.mean(list(result.values())).item()
return result
elif is_regression:
# return {"mse": ((preds - p.label_ids) ** 2).mean().item()}
# use the same metric as stsb (for pearsonr, spearmanr)
metric = load_metric("glue", "stsb")
result = metric.compute(predictions=preds, references=p.label_ids)
return result
else:
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
```
I tried to update my modified script referring to the latest version of `run_glue.py,` but it didn't solve the problem.
Steps to reproduce the behavior:
``` sh
(transformers4.1.1) $ CUDA_VISIBLE_DEVICES=0 python run_emobank_4.1.1.py \
--model_name_or_path bert-base-cased \
--train_file /path/to/train.csv \
--validation_file /path/to/validation.csv \
--test_file /path/to/test.csv \
--do_train \
--do_eval \
--do_predict \
--max_seq_length 64 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 10.0 \
--load_best_model_at_end \
--evaluation_strategy epoch \
--metric_for_best_model eval_pearson \
--output_dir /path/to/result/v4.1.1 \
--overwrite_output_dir
```
If I run the script with transformers 4.1.1, it works well.
If 4.6.0, it runs without any error, but the result is not good.
`4.1.1 validation result`
```
eval_loss = 0.10065479576587677
eval_pearson = 0.6559863196369287
eval_spearmanr = 0.6244913632922552
epoch = 10.0
```
`4.6.0 validation result`
```
eval_loss = 0.16216666996479034
eval_pearson = 0.1785468733027603
eval_spearmanr = 0.18945952641568345
eval_runtime = 3.1986
eval_samples_per_second = 120.992
epoch = 10.0
```
## Expected behavior
Are there any tips to update my own script, which is written for v4.1.1, to be applied to v4.6.0+?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11826/comments | https://api.github.com/repos/huggingface/transformers/issues/11826/events | https://github.com/huggingface/transformers/pull/11826 | 898,753,876 | MDExOlB1bGxSZXF1ZXN0NjUwNTkwOTM4 | 11,826 | feat: add contributor over time graph to README | {
"login": "guoqqqi",
"id": 72343596,
"node_id": "MDQ6VXNlcjcyMzQzNTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/72343596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoqqqi",
"html_url": "https://github.com/guoqqqi",
"followers_url": "https://api.github.com/users/guoqqqi/followers",
"following_url": "https://api.github.com/users/guoqqqi/following{/other_user}",
"gists_url": "https://api.github.com/users/guoqqqi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoqqqi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoqqqi/subscriptions",
"organizations_url": "https://api.github.com/users/guoqqqi/orgs",
"repos_url": "https://api.github.com/users/guoqqqi/repos",
"events_url": "https://api.github.com/users/guoqqqi/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoqqqi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | Hi, community!
To better present how our community grows, we develop a tool to show contributors growing history on [https://github.com/api7/contributor-graph](https://github.com/api7/contributor-graph). Since we found it helpful, we think maybe if it could help some other community.
## WHAT IT IS
Basically, it just shows the contributors growth over time, just like the stargazers over time on the README. It would be the same with stars that we would update the graph each day, so the link would always present the real-time data. There is some other stuff to play around with if you would like to give it a try~

## HOW IT WORKS
We use Github API to get all commits, try to find the “Github way” to filter commits so the result data would be similar to Github, and then get the first commit time of each user.
Don't hesitate to tell us if there is a better place to present this graph other than this, or there are some other worries or other features you would like to have~🍻
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11826/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11826/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11826",
"html_url": "https://github.com/huggingface/transformers/pull/11826",
"diff_url": "https://github.com/huggingface/transformers/pull/11826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11826.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11825/comments | https://api.github.com/repos/huggingface/transformers/issues/11825/events | https://github.com/huggingface/transformers/pull/11825 | 898,606,327 | MDExOlB1bGxSZXF1ZXN0NjUwNDY1NTE5 | 11,825 | Faster list concat for trainer_pt_utils.get_length_grouped_indices() | {
"login": "ctheodoris",
"id": 6326111,
"node_id": "MDQ6VXNlcjYzMjYxMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6326111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctheodoris",
"html_url": "https://github.com/ctheodoris",
"followers_url": "https://api.github.com/users/ctheodoris/followers",
"following_url": "https://api.github.com/users/ctheodoris/following{/other_user}",
"gists_url": "https://api.github.com/users/ctheodoris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctheodoris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctheodoris/subscriptions",
"organizations_url": "https://api.github.com/users/ctheodoris/orgs",
"repos_url": "https://api.github.com/users/ctheodoris/repos",
"events_url": "https://api.github.com/users/ctheodoris/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctheodoris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No problem, thank you for all your wonderful work!"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
substitutes faster list concatenation for get_length_grouped_indices() in LengthGroupedSampler and DistributedLengthGroupedSampler as prior sum(megabatches, []) is prohibitively slow for large number of megabatches (in test case takes hours for ~270k megabatches with 100 items each).
Fixes #11795
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11825/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11825",
"html_url": "https://github.com/huggingface/transformers/pull/11825",
"diff_url": "https://github.com/huggingface/transformers/pull/11825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11825.patch",
"merged_at": 1621693640000
} |
https://api.github.com/repos/huggingface/transformers/issues/11824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11824/comments | https://api.github.com/repos/huggingface/transformers/issues/11824/events | https://github.com/huggingface/transformers/pull/11824 | 898,600,709 | MDExOlB1bGxSZXF1ZXN0NjUwNDYwMTgz | 11,824 | Add flax text class colab | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds official link to notebook
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11824/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11824",
"html_url": "https://github.com/huggingface/transformers/pull/11824",
"diff_url": "https://github.com/huggingface/transformers/pull/11824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11824.patch",
"merged_at": 1621635118000
} |
https://api.github.com/repos/huggingface/transformers/issues/11823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11823/comments | https://api.github.com/repos/huggingface/transformers/issues/11823/events | https://github.com/huggingface/transformers/issues/11823 | 898,302,624 | MDU6SXNzdWU4OTgzMDI2MjQ= | 11,823 | Hugging Face model Bio_ClinicalBERT producing 404 error | {
"login": "NicoleJaneway",
"id": 44853527,
"node_id": "MDQ6VXNlcjQ0ODUzNTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/44853527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NicoleJaneway",
"html_url": "https://github.com/NicoleJaneway",
"followers_url": "https://api.github.com/users/NicoleJaneway/followers",
"following_url": "https://api.github.com/users/NicoleJaneway/following{/other_user}",
"gists_url": "https://api.github.com/users/NicoleJaneway/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NicoleJaneway/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicoleJaneway/subscriptions",
"organizations_url": "https://api.github.com/users/NicoleJaneway/orgs",
"repos_url": "https://api.github.com/users/NicoleJaneway/repos",
"events_url": "https://api.github.com/users/NicoleJaneway/events{/privacy}",
"received_events_url": "https://api.github.com/users/NicoleJaneway/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @NicoleJaneway ,\r\n\r\nI think this issue is similar to the following one in `ktrain` repo:\r\n\r\nhttps://github.com/amaiya/ktrain/issues/367\r\n\r\n\"Problem\" is, that there's no TensorFlow compatible model found on the hub (more precisely the `tf_model.h5` one). One good \"workaround\" would be if the model owner (pinging @EmilyAlsentzer here) would upload such a model to avoid these message :hugs: ",
"Thanks, @stefan-it! Unfortunately, with the 404 error, my app is no longer working. I posted a new [ktrain issue](https://github.com/amaiya/ktrain/issues/369) about it. In my experience, the creator has been amazingly responsive, so let's see what comes of the question.",
"Hello @NicoleJaneway, looking at the repository and its commit history, I don't think there ever was a `.h5` file uploaded.\r\n\r\nCould you share the code you're using when using local files so that we can see what's going on? If using local files, `transformers` should look locally before looking on the server, so you shouldn't get a 404 error",
"Hey @LysandreJik, thanks for trying to help - I don't have this project up on a public github yet. I'll let you know when I do."
] | 1,621 | 1,621 | 1,621 | NONE | null | I'm building a Named Entity Recognition (NER) model using the Hugging Face implementation of emilyalsentzer/Bio_ClinicalBERT. Up to today, I've had no issues with the model. Today it's not working as expected.
Question 1 - today, trying to train using:
MODEL_NAME = 'emilyalsentzer/Bio_ClinicalBERT'
model = text.sequence_tagger('bilstm-bert', preproc, bert_model=MODEL_NAME)
results in this error: 404 Client Error: Not Found for url: https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT/resolve/main/tf_model.h5
Does Hugging Face offer any kind of health check to ascertain the status of their models?
Question 2 - working with files (model.h5, model.json, and preproc.sav) I'd saved from earlier training iterations, I'm getting the same 404 error shown above. I don't understand wherein these files the call to Hugging Face is occurring. It doesn't seem to be in the .json, and the .h5 and .sav file formats are hard to inspect. Read more about what these files are: https://medium.com/analytics-vidhya/how-to-deploy-your-neural-network-model-using-ktrain-ae255b134c77
Back in February, I'd used these exact model.h5, model.json, and preproc.sav files to run the NER app using Streamlit, no problem. Not sure if this is temporary issue with Bio_ClinicalBERT or if I need to retool my original approach due to potentially permanent problems with this transformer model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11822/comments | https://api.github.com/repos/huggingface/transformers/issues/11822/events | https://github.com/huggingface/transformers/issues/11822 | 898,249,684 | MDU6SXNzdWU4OTgyNDk2ODQ= | 11,822 | Training Transformer XL from scratch | {
"login": "vishrawas",
"id": 13724037,
"node_id": "MDQ6VXNlcjEzNzI0MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/13724037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishrawas",
"html_url": "https://github.com/vishrawas",
"followers_url": "https://api.github.com/users/vishrawas/followers",
"following_url": "https://api.github.com/users/vishrawas/following{/other_user}",
"gists_url": "https://api.github.com/users/vishrawas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishrawas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishrawas/subscriptions",
"organizations_url": "https://api.github.com/users/vishrawas/orgs",
"repos_url": "https://api.github.com/users/vishrawas/repos",
"events_url": "https://api.github.com/users/vishrawas/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishrawas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I believe you should be using `TransfoXLLMHeadModel` instead, as right now you're using the Transfo XL model without it's LM head.\r\n\r\nThe TransfoXL model is one of our older models which doesn't fit one-to-one with other models, unfortunately. I invite you to take a look at the signature here: https://huggingface.co/transformers/model_doc/transformerxl.html#transformers.TransfoXLLMHeadModel.forward\r\n\r\nIt doesn't accept the `attention_mask` parameter, so you would need to tell the tokenizer it doesn't need to output those. The easiest way you can achieve that is by changing the following line:\r\n\r\n```diff\r\n- tokenizer = PreTrainedTokenizerFast(tokenizer_file=\"espertransXL.json\")\r\n+ tokenizer = PreTrainedTokenizerFast(tokenizer_file=\"espertransXL.json\", model_input_names=[\"input_ids\"])\r\n```",
"@LysandreJik \r\nThank you for the reply. I made those changes and while that error is resolved, I am getting the error `KeyError: 'loss'` \r\n On searching the internet, it seems that this error comes when `labels` are not defined, but I believe I have defined it. \r\nI have created this public notebook for transformerXL https://colab.research.google.com/drive/1vMVoPhtkHFC_-0X-hgwHvH03ynGT0j5i?usp=sharing . Can you please check and advise.\r\n\r\nI would be happy to publish this as a tutorial/example once it is working as I see this question on training transformer-xl has come up in past.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hello @vishrawas!\r\n\r\nYou could subclass `TransfoXLLMHeadModel` and change its output dictionary from `losses` to `loss`, so it would work with the trainer. Please note that you will probably have to reduce the loss prior to the return, as it has not been reduced yet, for example: `loss.mean()`:\r\n\r\n```Python\r\nclass OwnTransfoXLLMHeadModel(TransfoXLLMHeadModel):\r\n def __init__(self, *args, **kwargs) -> None:\r\n super(OwnTransfoXLLMHeadModel, self).__init__(*args, **kwargs)\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n mems=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n return_dict=None,\r\n ):\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n if input_ids is not None:\r\n bsz, tgt_len = input_ids.size(0), input_ids.size(1)\r\n elif inputs_embeds is not None:\r\n bsz, tgt_len = inputs_embeds.size(0), inputs_embeds.size(1)\r\n else:\r\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n\r\n transformer_outputs = self.transformer(\r\n input_ids,\r\n mems=mems,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n last_hidden = transformer_outputs[0]\r\n pred_hid = last_hidden[:, -tgt_len:]\r\n\r\n softmax_output = self.crit(pred_hid, labels)\r\n prediction_scores = softmax_output.view(bsz, tgt_len, -1) if labels is None else ()\r\n loss = softmax_output.view(bsz, tgt_len - 1) if labels is not None else None\r\n loss = loss.mean()\r\n\r\n if not return_dict:\r\n output = (prediction_scores,) + transformer_outputs[1:]\r\n return ((loss,) + output) if loss is not None else output\r\n\r\n return TransfoXLLMHeadModelOutput(\r\n loss=loss,\r\n prediction_scores=prediction_scores,\r\n mems=transformer_outputs.mems,\r\n hidden_states=transformer_outputs.hidden_states,\r\n attentions=transformer_outputs.attentions,\r\n )\r\n```\r\n\r\nAdditionally, you will need to subclass `ModelOutput` in the same way `TransfoXLLMHeadModelOutput` does and change the `losses` argument to `loss`:\r\n\r\n```Python\r\nclass TransfoXLLMHeadModelOutput(ModelOutput):\r\n loss: Optional[torch.FloatTensor] = None\r\n prediction_scores: torch.FloatTensor = None\r\n mems: List[torch.FloatTensor] = None\r\n hidden_states: Optional[Tuple[torch.FloatTensor]] = None\r\n attentions: Optional[Tuple[torch.FloatTensor]] = None\r\n\r\n @property\r\n def logits(self):\r\n return self.prediction_scores\r\n```",
"> @LysandreJik Thank you for the reply. I made those changes and while that error is resolved, I am getting the error `KeyError: 'loss'` On searching the internet, it seems that this error comes when `labels` are not defined, but I believe I have defined it. I have created this public notebook for transformerXL https://colab.research.google.com/drive/1vMVoPhtkHFC_-0X-hgwHvH03ynGT0j5i?usp=sharing . Can you please check and advise.\r\n> \r\n> I would be happy to publish this as a tutorial/example once it is working as I see this question on training transformer-xl has come up in past.\r\n\r\nHello there! I wonder if you have an updated version of the transformer-XL notebook? Thank you for your help!",
"> Hello @vishrawas!\r\n> \r\n> You could subclass `TransfoXLLMHeadModel` and change its output dictionary from `losses` to `loss`, so it would work with the trainer. Please note that you will probably have to reduce the loss prior to the return, as it has not been reduced yet, for example: `loss.mean()`:\r\n> \r\n> ```python\r\n> class OwnTransfoXLLMHeadModel(TransfoXLLMHeadModel):\r\n> def __init__(self, *args, **kwargs) -> None:\r\n> super(OwnTransfoXLLMHeadModel, self).__init__(*args, **kwargs)\r\n> \r\n> def forward(\r\n> self,\r\n> input_ids=None,\r\n> mems=None,\r\n> head_mask=None,\r\n> inputs_embeds=None,\r\n> labels=None,\r\n> output_attentions=None,\r\n> output_hidden_states=None,\r\n> return_dict=None,\r\n> ):\r\n> return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n> if input_ids is not None:\r\n> bsz, tgt_len = input_ids.size(0), input_ids.size(1)\r\n> elif inputs_embeds is not None:\r\n> bsz, tgt_len = inputs_embeds.size(0), inputs_embeds.size(1)\r\n> else:\r\n> raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n> \r\n> transformer_outputs = self.transformer(\r\n> input_ids,\r\n> mems=mems,\r\n> head_mask=head_mask,\r\n> inputs_embeds=inputs_embeds,\r\n> output_attentions=output_attentions,\r\n> output_hidden_states=output_hidden_states,\r\n> return_dict=return_dict,\r\n> )\r\n> \r\n> last_hidden = transformer_outputs[0]\r\n> pred_hid = last_hidden[:, -tgt_len:]\r\n> \r\n> softmax_output = self.crit(pred_hid, labels)\r\n> prediction_scores = softmax_output.view(bsz, tgt_len, -1) if labels is None else ()\r\n> loss = softmax_output.view(bsz, tgt_len - 1) if labels is not None else None\r\n> loss = loss.mean()\r\n> \r\n> if not return_dict:\r\n> output = (prediction_scores,) + transformer_outputs[1:]\r\n> return ((loss,) + output) if loss is not None else output\r\n> \r\n> return TransfoXLLMHeadModelOutput(\r\n> loss=loss,\r\n> prediction_scores=prediction_scores,\r\n> mems=transformer_outputs.mems,\r\n> hidden_states=transformer_outputs.hidden_states,\r\n> attentions=transformer_outputs.attentions,\r\n> )\r\n> ```\r\n> \r\n> Additionally, you will need to subclass `ModelOutput` in the same way `TransfoXLLMHeadModelOutput` does and change the `losses` argument to `loss`:\r\n> \r\n> ```python\r\n> class TransfoXLLMHeadModelOutput(ModelOutput):\r\n> loss: Optional[torch.FloatTensor] = None\r\n> prediction_scores: torch.FloatTensor = None\r\n> mems: List[torch.FloatTensor] = None\r\n> hidden_states: Optional[Tuple[torch.FloatTensor]] = None\r\n> attentions: Optional[Tuple[torch.FloatTensor]] = None\r\n> \r\n> @property\r\n> def logits(self):\r\n> return self.prediction_scores\r\n> ```\r\n\r\n\r\nThank you for the comments that you left. It helped me a lot, too!\r\n\r\nCould you explain a little more why this part of the code 'softmax_output.view(bsz, tgt_len - 1)' is a loss???\r\n\r\nThere's a lot I don't know because I'm still studying"
] | 1,621 | 1,698 | 1,625 | NONE | null | Hello, I am trying to recreate this notebook https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb for transformer XL
I made changes to the tokenizer as follows
```
%%time
from pathlib import Path
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers import normalizers
from tokenizers.normalizers import Lowercase, NFD, StripAccents
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.processors import TemplateProcessing
from tokenizers.trainers import WordPieceTrainer
from tokenizers.trainers import WordLevelTrainer
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()])
tokenizer.pre_tokenizer = Whitespace()
bert_tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
],
)
trainer = WordLevelTrainer(show_progress=True, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
files = [str(x) for x in Path(".").glob("**/*.txt")]
tokenizer.train(files, trainer)
tokenizer.save("espertransXL.json")
```
and then loaded it into the FastTokenizer
```
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(tokenizer_file="espertransXL.json")
tokenizer.bos_token="[CLS]"
tokenizer.eos_token="[SEP]"
tokenizer.sep_token="[SEP]"
tokenizer.cls_token="[CLS]"
tokenizer.unk_token="[UNK]"
tokenizer.pad_token="[PAD]"
tokenizer.mask_token="[MASK]"
tokenizer._bos_token="[CLS]"
tokenizer._eos_token="[SEP]"
tokenizer._sep_token="[SEP]"
tokenizer._cls_token="[CLS]"
tokenizer._unk_token="[UNK]"
tokenizer._pad_token="[PAD]"
tokenizer._mask_token="[MASK]"
```
Post that, I instantiated the model
```
from transformers import TransfoXLConfig, TransfoXLModel
config = TransfoXLConfig()
model = TransfoXLModel(config=config)
```
Set up the data collator:
```
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```
Setting up the trainer as follows
```
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./TransfoXL",
overwrite_output_dir=True,
num_train_epochs=1,
per_gpu_train_batch_size=16,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
```
When I execute:
```
%%time
trainer.train()
```
I get the following error:
```
TypeError Traceback (most recent call last)
<timed eval> in <module>
/opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1270 tr_loss += self.training_step(model, inputs)
1271 else:
-> 1272 tr_loss += self.training_step(model, inputs)
1273 self.current_flos += float(self.floating_point_ops(inputs))
1274
/opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1732 loss = self.compute_loss(model, inputs)
1733 else:
-> 1734 loss = self.compute_loss(model, inputs)
1735
1736 if self.args.n_gpu > 1:
/opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1764 else:
1765 labels = None
-> 1766 outputs = model(**inputs)
1767 # Save past state if it exists
1768 # TODO: this needs to be fixed and made cleaner later.
/opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
Can some please advise on this or if they have a working notebook example point to it?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11822/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11821/comments | https://api.github.com/repos/huggingface/transformers/issues/11821/events | https://github.com/huggingface/transformers/pull/11821 | 898,187,907 | MDExOlB1bGxSZXF1ZXN0NjUwMDg5ODA2 | 11,821 | [run_clm.py] restore caching | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No we can't just add the new argument without checking the version, as it's probably not going to work anymore for earlier versions of datasets (that's why it's bad to do breaking changes :-P).\r\nIt seems like it's the way the Datasets library wants to be used, so I would leave the default behavior here and you can change the script locally for your use case. If the defaults of the Datasets library are not satisfactory, then maybe those defaults should be changed.",
"Makes sense, @sgugger - thank you - back to `datasets`"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | `datasets==0.1.6` introduced in-memory datasets, which unfortunately has no caching which makes it very slow to develop with as the dataset gets reprocessed on every run. Supposedly this should make things faster overall, but at this huge cost to us developers.
It's also inconsistent where some datasets behave in one way, others in another way. This is too magical, IMHO.
This PR adds `keep_in_memory=False`, to disable in-memory cache, but restores normal caching.
Perhaps adding a note in the example that the user can change it to `True` if they don't care for the slow startup?
Alternatively, if you believe that the new behavior is good, let's create an env var at `datasets` that will control that, so that we can turn off this painful behavior w/o needing to manually modify the code.
Fixes: https://github.com/huggingface/transformers/issues/11801
p.s. working on this one script on many fronts - and then will sync other scripts at once.
@sgugger, @VictorSanh, @lhoestq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11821/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11821",
"html_url": "https://github.com/huggingface/transformers/pull/11821",
"diff_url": "https://github.com/huggingface/transformers/pull/11821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11821.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11820/comments | https://api.github.com/repos/huggingface/transformers/issues/11820/events | https://github.com/huggingface/transformers/pull/11820 | 898,166,775 | MDExOlB1bGxSZXF1ZXN0NjUwMDcwNTgx | 11,820 | [Flax] Small fixes in `run_flax_glue.py` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11820",
"html_url": "https://github.com/huggingface/transformers/pull/11820",
"diff_url": "https://github.com/huggingface/transformers/pull/11820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11820.patch",
"merged_at": 1621612343000
} |
https://api.github.com/repos/huggingface/transformers/issues/11819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11819/comments | https://api.github.com/repos/huggingface/transformers/issues/11819/events | https://github.com/huggingface/transformers/pull/11819 | 898,069,793 | MDExOlB1bGxSZXF1ZXN0NjQ5OTg2NTg1 | 11,819 | Add option to log only once in multinode training | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
This PR adds the option to only log on one node when doing multinode training. This is controlled by the `is_local_process_zero` method, so I apply the switch there to avoid putting in multiple places.
Fixes #11796 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11819",
"html_url": "https://github.com/huggingface/transformers/pull/11819",
"diff_url": "https://github.com/huggingface/transformers/pull/11819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11819.patch",
"merged_at": 1621944223000
} |
https://api.github.com/repos/huggingface/transformers/issues/11818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11818/comments | https://api.github.com/repos/huggingface/transformers/issues/11818/events | https://github.com/huggingface/transformers/pull/11818 | 898,063,723 | MDExOlB1bGxSZXF1ZXN0NjQ5OTgxNDkx | 11,818 | [Trainer] Report both steps and num samples per second | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
As seen with @stas00, there is a bug in the current speed metrics reporting: training reports the number of training steps per second while evaluation and predict report the number of samples per second. After discussion we concluded that both are interesting, so this PR updates the Trainer to report both. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11818",
"html_url": "https://github.com/huggingface/transformers/pull/11818",
"diff_url": "https://github.com/huggingface/transformers/pull/11818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11818.patch",
"merged_at": 1621900303000
} |
https://api.github.com/repos/huggingface/transformers/issues/11817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11817/comments | https://api.github.com/repos/huggingface/transformers/issues/11817/events | https://github.com/huggingface/transformers/issues/11817 | 898,026,822 | MDU6SXNzdWU4OTgwMjY4MjI= | 11,817 | same sentence different padding length result different embedding. | {
"login": "JJplane",
"id": 28783826,
"node_id": "MDQ6VXNlcjI4NzgzODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/28783826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JJplane",
"html_url": "https://github.com/JJplane",
"followers_url": "https://api.github.com/users/JJplane/followers",
"following_url": "https://api.github.com/users/JJplane/following{/other_user}",
"gists_url": "https://api.github.com/users/JJplane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JJplane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JJplane/subscriptions",
"organizations_url": "https://api.github.com/users/JJplane/orgs",
"repos_url": "https://api.github.com/users/JJplane/repos",
"events_url": "https://api.github.com/users/JJplane/events{/privacy}",
"received_events_url": "https://api.github.com/users/JJplane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | I use nn.Softmax(dim=-1) to softmax. I find different outputs.
```
a = [-3.6180e-01, 6.6926e-01, 1.2248e+01, -9.5795e-01]
b = [-3.6180e-01, 6.6926e-01, 1.2248e+01, -9.5795e-01, -9.5795e-01]
```
softmax(a) = [3.3403e-06, 9.366**2**e-06, 9.999**9**e-01, 1.8402e-06]
softmax(b) =[3.3403e-06, 9.366**1**e-06, 9.999**8**e-01, 1.8402e-06, 1.8402e-06]
The different softmax results result in different sentence embedding, sometimes the embedding differ a lot.I test transeformers the question cant repoduce. This bug appears in transformers modified by our company. Any help is appreciate! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11817/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11816/comments | https://api.github.com/repos/huggingface/transformers/issues/11816/events | https://github.com/huggingface/transformers/issues/11816 | 898,019,574 | MDU6SXNzdWU4OTgwMTk1NzQ= | 11,816 | ValueError batch-size mismatch when redefining classifier layer on BertForSequenceClassification | {
"login": "eSharpMinor",
"id": 78321513,
"node_id": "MDQ6VXNlcjc4MzIxNTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/78321513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eSharpMinor",
"html_url": "https://github.com/eSharpMinor",
"followers_url": "https://api.github.com/users/eSharpMinor/followers",
"following_url": "https://api.github.com/users/eSharpMinor/following{/other_user}",
"gists_url": "https://api.github.com/users/eSharpMinor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eSharpMinor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eSharpMinor/subscriptions",
"organizations_url": "https://api.github.com/users/eSharpMinor/orgs",
"repos_url": "https://api.github.com/users/eSharpMinor/repos",
"events_url": "https://api.github.com/users/eSharpMinor/events{/privacy}",
"received_events_url": "https://api.github.com/users/eSharpMinor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | NONE | null | Hi,
I am currently using BertForSequenceClassification for my project, to show some results regarding transfer performance on the GLUE Benchmark.
I wan't to do two things.
1. Add a seperate nn.Linear() head on top of the already fine-tuned BertForSequenceClassification model and train the entire model:
Input -> BERT_BASE_MODEL -> CLASSIFIER -> nn.Linear
2. Remove (or reinitialize) classifier head with random weights with |out_features| = |labels of the new task| of the classifier.layer of BertForSequenceClassification model and retrain it on the new task.
I have problem with 2. If I try to execute my script, I always get the following error.
I tried various things, but couldn't get the code work. I also searched for similar posts, but couldn't find one.
>> source t_ft.sh
Selected cpu as device.
b'Skipping line 24810: expected 12 fields, saw 13\nSkipping line 33961: expected 12 fields, saw 13\n'
b'Skipping line 75911: expected 12 fields, saw 13\nSkipping line 100114: expected 12 fields, saw 13\n'
b'Skipping line 150638: expected 12 fields, saw 13\nSkipping line 158834: expected 12 fields, saw 13\nSkipping line 173104: expected 12 fields, saw 13\nSkipping line 178252: expected 12 fields, saw 13\n'
b'Skipping line 221951: expected 12 fields, saw 13\n'
b'Skipping line 286845: expected 12 fields, saw 13\nSkipping line 314110: expected 12 fields, saw 13\n'
Processing 1000 / 391120 Samples
Processing 2000 / 391120 Samples
Processing 3000 / 391120 Samples
Processing 1000 / 9714 Samples
Processing 2000 / 9714 Samples
Processing 3000 / 9714 Samples
add_head: no
remove_head: yes
>>======== Epoch 1 / 2 ========
Training...
Traceback (most recent call last):
File "bert_pipeline.py", line 1117, in <module>
main()
File "bert_pipeline.py", line 976, in main
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask,
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py", line 1513, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/loss.py", line 1047, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2693, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2384, in nll_loss
raise ValueError(
ValueError: Expected input batch_size (24) to match target batch_size (16).
I guess the error is in line 891-892
```
if (remove_head == 'yes'):
model.classifier = nn.Linear(in_features=model.classifier.in_features, out_features=num_labels)
```
Full Code:
```
import numpy as np_
import pandas as pd
import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss, MSELoss
import random
import time
import datetime
import argparse
import copy
import json
import csv
from transformers import BertConfig, BertTokenizer, BertForSequenceClassification, get_linear_schedule_with_warmup, AdamW
from transformers.data import metrics
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from sklearn.metrics import f1_score, matthews_corrcoef
from scipy.stats import pearsonr, spearmanr
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# Methods for Transfer-Learning
def freeze_base_weights(model):
pass
# Types of some new BERT architectures
class BertWithAdditionalHead(nn.Module):
def __init__(self,base_model, num_labels):
super(BertWithAdditionalHead,self).__init__()
self.num_labels = num_labels
self.base_model = base_model
self.activation = nn.GELU()
self.fc1 = nn.Linear(self.base_model.num_labels, self.num_labels)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
Labels for computing the sequence classification/regression loss.
Indices should be in :obj:`[0, ..., config.num_labels - 1]`.
If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = return_dict if return_dict is not None else self.base_model.config.use_return_dict
outputs = self.base_model.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
pooled_output = outputs[1]
pooled_output = self.base_model.dropout(pooled_output)
outputs = self.base_model.classifier(pooled_output)
outputs = self.activation(outputs)
logits = self.fc1(outputs)
loss = None
if labels is not None:
if self.num_labels == 1:
# We are doing regression
loss_fct = MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
if not return_dict:
output = (logits, outputs[2:])
return (loss, output) if loss is not None else output
return ((loss,logits))
# Processors
class ColaProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["0","1"]
self.train_label_index=[1]
self.dev_label_index=[1]
self.train_sentence_index=[3]
self.dev_sentence_index=[3]
self.test_sentence_index=[1]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
error_bad_lines=False,
header=None,
encoding='utf8',
dtype=str)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
error_bad_lines=False,
encoding='utf8',
header=None,
dtype=str)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
encoding='utf8',
error_bad_lines=False,
dtype=str)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index].copy()
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
# TODO
class MRPCProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["0","1"]
self.train_label_index=[0]
self.dev_label_index=[0]
self.train_sentence_index=[3,4]
self.dev_sentence_index=[3,4]
self.test_sentence_index=[3,4]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index].copy()
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class MNLIMatchedProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["contradiction", "entailment", "neutral"]
self.train_label_index=[11]
self.dev_label_index=[15]
self.train_sentence_index=[8,9]
self.dev_sentence_index=[8,9]
self.test_sentence_index=[8,9]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev_matched.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test_matched.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class QNLIProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["entailment", "not_entailment"]
self.train_label_index=[3]
self.dev_label_index=[3]
self.train_sentence_index=[1,2]
self.dev_sentence_index=[1,2]
self.test_sentence_index=[1,2]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class QQPProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["0", "1"]
self.train_label_index=[5]
self.dev_label_index=[5]
self.train_sentence_index=[3,4]
self.dev_sentence_index=[3,4]
self.test_sentence_index=[1,2]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class RTEProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["entailment", "not_entailment"]
self.train_label_index=[3]
self.dev_label_index=[3]
self.train_sentence_index=[1,2]
self.dev_sentence_index=[1,2]
self.test_sentence_index=[1,2]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class SST2Processor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["0", "1"]
self.train_label_index=[1]
self.dev_label_index=[1]
self.train_sentence_index=[0]
self.dev_sentence_index=[0]
self.test_sentence_index=[1]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class STSBProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=[]
self.train_label_index=[9]
self.dev_label_index=[9]
self.train_sentence_index=[7,8]
self.dev_sentence_index=[7,8]
self.test_sentence_index=[7,8]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
quoting=csv.QUOTE_NONE,
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
class WNLIProcessor:
def __init__(self, data_dir):
self.data_dir = data_dir
self.labels=["0", "1"]
self.train_label_index=[3]
self.dev_label_index=[3]
self.train_sentence_index=[1,2]
self.dev_sentence_index=[1,2]
self.test_sentence_index=[1,2]
def get_train_data(self):
data = pd.read_csv(self.data_dir + "train.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
#Remove NaN values
data = data.dropna(subset=(self.train_sentence_index + self.train_label_index))
train_data = data.iloc[:,self.train_sentence_index].copy()
train_labels = data.iloc[:,self.train_label_index].copy()
return((train_data, train_labels))
def get_dev_data(self):
data = pd.read_csv(self.data_dir + "dev.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index))
dev_data = data.iloc[:,self.dev_sentence_index].copy()
dev_labels = data.iloc[:,self.dev_label_index].copy()
return((dev_data, dev_labels))
def get_test_data(self):
data = pd.read_csv(self.data_dir + "test.tsv",
delimiter="\t",
dtype=str,
header=None,
skiprows=[0],
quoting=csv.QUOTE_NONE,
encoding='utf8',
error_bad_lines=False)
data = data.dropna(subset=self.test_sentence_index)
test_data = data.iloc[:,self.test_sentence_index]
return(test_data)
def get_label_list(self):
return(self.labels)
def get_index(self):
return((self.train_sentence_index, self.train_label_index),
(self.dev_sentence_index, self.dev_label_index),
(self.test_sentence_index))
# Metrics
class Metrics():
def __init__(self, is_regression):
self.is_regression = is_regression
def get_dict(self):
if self.is_regression:
return({"pearson_corr": None, "spearman_corr": None})
else:
return({"accuracy": None, "f1_score": None, "mcc": None})
def calculate_metrics(self, preds, labels):
eval_dict = {}
if self.is_regression:
eval_dict["pearson_corr"] = pearsonr(preds, labels)[0]
eval_dict["spearman_corr"] = spearmanr(preds, labels)[0]
else:
eval_dict["accuracy"] = metrics.simple_accuracy(preds, labels)
eval_dict["f1_score"] = f1_score(y_true=labels, y_pred=preds)
eval_dict["mcc"] = matthews_corrcoef(labels, preds)
return(eval_dict)
def log_eval(epoch_i, avg_train_loss, eval_loss, eval_dict, output_dir, t_train, t_val):
eval_file = open((output_dir + "/eval_result_transfer.txt"), "a")
eval_file.writelines("epoch : {} \n".format(epoch_i))
eval_file.writelines("train_loss : {} \n".format(avg_train_loss))
eval_file.writelines("train_time : {} \n".format(t_train))
eval_file.writelines("eval_loss : {} \n".format(eval_loss))
eval_file.writelines("eval_time : {} \n".format(t_val))
for key in eval_dict:
eval_file.writelines(key + ": {} \n".format(eval_dict[key]))
eval_file.writelines("\n")
eval_file.close()
with open(output_dir + "/epoch_{}.json".format(epoch_i), "w") as f:
json.dump(eval_dict, f)
f.close()
def preprocesser(dataset, sentence_idx, tokenizer, max_seq_len):
input_ids = []
attention_mask = []
token_type_ids = []
if len(sentence_idx) == 1:
for j, sentence in enumerate(dataset.iloc()):
sentence1 = sentence[sentence_idx[0]]
if pd.isnull(sentence1):
continue
if j%1000==0 and not (j == 0):
print("Processing {} / {} Samples".format(j, len(dataset)))
# Tokenize the sentence
sentence1 = tokenizer.tokenize(sentence1)
# Convert tokens to ids
sentence1 = tokenizer.convert_tokens_to_ids(sentence1)
# Additional preprocessing
## Padding to a max_seq_len
## or Truncating to max_seq_len
## computing input_ids, attention mask and token_type_ids
tokenized_dict = tokenizer.prepare_for_model(
ids=sentence1, pair_ids=None, add_special_tokens=True
,padding='max_length', truncation='longest_first',
max_length=max_seq_len, return_tensors='np', return_token_type_ids=True
, return_attention_mask=True)
input_ids.append(tokenized_dict['input_ids'])
attention_mask.append(tokenized_dict['attention_mask'])
token_type_ids.append(tokenized_dict['token_type_ids'])
if len(sentence_idx) == 2:
for j,sentence in enumerate(dataset.iloc()):
sentence1 = sentence[sentence_idx[0]]
sentence2 = sentence[sentence_idx[1]]
if pd.isnull(sentence1) or pd.isnull(sentence2):
continue
if j%1000==0 and not (j == 0):
print("Processing {} / {} Samples".format(j, len(dataset)))
if j==3000:
break
# Tokenize the sentence
sentence1 = tokenizer.tokenize(sentence1)
sentence2 = tokenizer.tokenize(sentence2)
# Convert tokens to ids
sentence1 = tokenizer.convert_tokens_to_ids(sentence1)
sentence2 = tokenizer.convert_tokens_to_ids(sentence2)
# Additional preprocessing
## Padding to a max_seq_len
## or Truncating to max_seq_len
## computing input_ids, attention mask and token_type_ids
tokenized_dict = tokenizer.prepare_for_model(ids=sentence1, pair_ids=sentence2,
add_special_tokens=True,padding='max_length', truncation='longest_first',
max_length=max_seq_len, return_tensors='np',return_token_type_ids=True ,
return_attention_mask=True)
input_ids.append(tokenized_dict['input_ids'])
attention_mask.append(tokenized_dict['attention_mask'])
token_type_ids.append(tokenized_dict['token_type_ids'])
# Converting to pytorch tensors
input_ids = torch.tensor(input_ids)
attention_mask = torch.tensor(attention_mask)
token_type_ids = torch.tensor(token_type_ids)
return (input_ids, attention_mask, token_type_ids)
def format_time(elapsed):
'''
Takes a time in seconds and returns a string hh:mm:ss
'''
# Round to the nearest second.
elapsed_rounded = int(round((elapsed)))
# Format as hh:mm:ss
return str(datetime.timedelta(seconds=elapsed_rounded))
def main():
# Arguments
parser = argparse.ArgumentParser(description='A BERT pipeline with transformers library')
parser.add_argument('-t_n', '--task_name', help='Name of the task', default=None, type=str)
parser.add_argument('-d_t', '--do_train', help='Whether model needs to be trained yes/no', default='no', type=str)
parser.add_argument('-d_e', '--do_eval', help='Whether you want to evaluate on dev set', default='no', type=str)
parser.add_argument('-d_p', '--do_predict', help='Whether you want to do predictions on test set yes/no', default='no', type=str)
parser.add_argument('-a_h', '--add_head', help='Whether you want to add a new head on the given BERT model yes/no', default='no', type=str)
parser.add_argument('-r_h', '--remove_head', help='Whether you want to remove head and instantiante new head with random weights yes/no',
default='no', type=str)
parser.add_argument('-f_b', '--freeze_base', help="Whether you only want to train the classification layer yes/no", default='no', type=str)
parser.add_argument('-i_r', '--is_regression', help='Whether the given task is a regression task yes/no', default='no', type=str)
parser.add_argument('-g_s', '--global_seed', help='Define seed for reproducability purpose', default=0, type=int)
parser.add_argument('-d_d', '--data_dir', help='Directory, where the dataset can be found', default=None, type=str)
parser.add_argument('-v_f', '--vocab_file', help='Path of BERT vocabulary file', default=None, type=str)
parser.add_argument('-s_t', '--source_task', help='Optional argument, to store the name of the model', default='',type=str)
parser.add_argument('-b_c_f', '--bert_config_file', help='Directory, where the configuration file can be found', default=None, type=str)
parser.add_argument('-p_m', '--pretrained_model', help='Path of the Pretrained model (.bin /.pth)', default=None, type=str)
parser.add_argument('-m_s_l', '--max_seq_len', help='Maximum length boundary for all sequences', default=128, type=int)
parser.add_argument('-t_b_s', '--train_batch_size', help='Batch size for Training', default=32, type=int)
parser.add_argument('-e_b_s', '--eval_batch_size', help='Batch size for Evaluation', default=16, type=int)
parser.add_argument('-l_r', '--learning_rate', help='Learning rate', default=3e-5, type=float)
parser.add_argument('-n_t_e', '--num_train_epochs', help='Number of training epochs', default=1, type=int)
parser.add_argument('-n_w_s', '--num_warmup_steps', help='Number of warmup steps', default=0, type=int)
parser.add_argument('-o_d', '--output_dir', help='Directory for the output file', default=None, type=str)
args = vars(parser.parse_args())
# Passing arguments to variables
task_name = args['task_name']
do_train = args['do_train']
do_eval = args['do_eval']
do_predict = args['do_predict']
add_head = args['add_head']
remove_head = args['remove_head']
freeze_base = args['freeze_base']
global_seed = args['global_seed']
data_dir = args['data_dir']
vocab_file = args['vocab_file']
bert_config_file = args['bert_config_file']
pretrained_model = args['pretrained_model']
max_seq_len = args['max_seq_len']
train_batch_size = args['train_batch_size']
eval_batch_size = args['eval_batch_size']
learning_rate = args['learning_rate']
epochs = args['num_train_epochs']
num_warmup_steps = args['num_warmup_steps']
output_dir = args['output_dir']
is_regression = args['is_regression']
source_task = args['source_task']
# Setting seed
random.seed(global_seed)
np.random.seed(global_seed)
torch.manual_seed(global_seed)
torch.cuda.manual_seed_all(global_seed)
# Setting up device
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print("Selected {} as device.".format(device))
# BertConfig
bert_config = BertConfig.from_json_file(bert_config_file)
# Processor
processor_dict = {"cola":ColaProcessor,
"mrpc":MRPCProcessor,
"mnli":MNLIMatchedProcessor,
"qnli":QNLIProcessor,
"qqp":QQPProcessor,
"rte":RTEProcessor,
"sst2":SST2Processor,
"sts":STSBProcessor,
"wnli":WNLIProcessor}
processor = processor_dict[task_name](data_dir=data_dir)
# Tokenizer
tokenizer = BertTokenizer(vocab_file=vocab_file)
# Metrics
metric = Metrics(is_regression)
# Training
if do_train == 'yes':
train_data, train_labels = processor.get_train_data()
dev_data, dev_labels = processor.get_dev_data()
label_list = processor.get_label_list()
num_labels = len(label_list)
(train_sentence_index, _), (dev_sentence_index, _), _ = processor.get_index()
train_input_ids, train_attention_mask, train_token_type_ids = preprocesser(
dataset=train_data, sentence_idx=train_sentence_index,tokenizer=tokenizer,
max_seq_len=max_seq_len)
dev_input_ids, dev_attention_mask, dev_token_type_ids = preprocesser(
dataset=dev_data, sentence_idx=dev_sentence_index
,tokenizer=tokenizer, max_seq_len=max_seq_len)
# Converting labels to numeric values
train_labels = train_labels.values.flatten('C')
dev_labels = dev_labels.values.flatten('C')
if is_regression == 'yes':
# TODO
train_labels = train_labels.astype(float)
dev_labels = dev_labels.astype(float)
else:
label_map = {}
for i,label in enumerate(label_list):
label_map[label] = i
for j,label in enumerate(train_labels):
train_labels[j] = label_map[label]
for k,label in enumerate(dev_labels):
dev_labels[k] = label_map[label]
train_labels=train_labels.astype(int)
dev_labels=dev_labels.astype(int)
train_labels = torch.tensor(train_labels[:3000])
dev_labels = torch.tensor(dev_labels[:3000])
# Defining DataLoader
train_data = TensorDataset(train_input_ids, train_attention_mask, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=train_batch_size)
dev_data = TensorDataset(dev_input_ids, dev_attention_mask, dev_labels)
dev_sampler = SequentialSampler(dev_data)
dev_dataloader = DataLoader(dev_data, sampler=dev_sampler, batch_size=eval_batch_size)
# TODO Transfer-Modeling
# Defining model and criterion
bert_config.num_labels = num_labels
model = BertForSequenceClassification.from_pretrained(pretrained_model, return_dict=False)
print('add_head: {}'.format(add_head))
print('remove_head: {}'.format(remove_head))
if (add_head == 'yes'):
model = BertWithAdditionalHead(model, num_labels)
**```
if (remove_head == 'yes'):
model.classifier = nn.Linear(in_features=model.classifier.in_features, out_features=num_labels)
```**
# TODO Freezing weights of base model
if (freeze_base == 'yes'):
if (add_head == 'yes'):
for params in model.base_model.parameters():
print("Freezing Parameter: {}".format(params))
params.requires_grad = False
else:
for params in model.bert.parameters():
print("Freezing Parameter: {}".format(params))
params.requires_grad = False
# Moving model to GPU if possible
if device == torch.device("cuda"):
model.cuda()
optimizer = AdamW(model.parameters(), lr=learning_rate)
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,num_warmup_steps=num_warmup_steps,
num_training_steps=total_steps)
loss_val = []
# For each epoch...
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
# Perform one full pass over the training set.
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
# Measure how long the training epoch takes.
t0 = time.time()
# Reset the total loss for this epoch.
total_loss = 0
# Put the model into training mode. Don't be mislead--the call to
# `train` just changes the *mode*, it doesn't *perform* the training.
# `dropout` and `batchnorm` layers behave differently during training
# vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
model.train()
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
# Progress update every 40 batches.
if step % 2 == 0 and not step == 0:
# Calculate elapsed time in minutes.
elapsed = format_time(time.time() - t0)
# Report progress.
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step,
len(train_dataloader), elapsed))
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using the
# `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Always clear any previously calculated gradients before performing a
# backward pass. PyTorch doesn't do this automatically because
# accumulating the gradients is "convenient while training RNNs".
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
model.zero_grad()
# optimizer.zero_grad()
# Perform a forward pass (evaluate the model on this training batch).
# This will return the loss (rather than the model output) because we
# have provided the `labels`.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask,
labels=b_labels)
# The call to `model` always returns a tuple, so we need to pull the
# loss value out of the tuple.
loss = outputs[0]
# Display loss for every 10 steps
print("Loss: {} in Step: {}".format(loss, step))
if step%20==0 and not step==0:
break
# Accumulate the training loss over all of the batches so that we can
# calculate the average loss at the end. `loss` is a Tensor containing a
# single value; the `.item()` function just returns the Python value
# from the tensor.
total_loss += loss.item()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are
# modified based on their gradients, the learning rate, etc.
optimizer.step()
# Update the learning rate.
scheduler.step()
# Calculate the average loss over the training data.
avg_train_loss = total_loss / len(train_dataloader)
# Store the loss value for plotting the learning curve.
loss_val.append(avg_train_loss)
t_train = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epoch took: {:}".format(t_train))
# ========================================
# Validation
# ========================================
# After the completion of each training epoch, measure our performance on
# our validation set.
print("")
print("Running Validation...")
t0 = time.time()
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
eval_dict = metric.get_dict()
tmp_eval_dict = {}
# Evaluate data for one epoch
for dev_step, batch in enumerate(dev_dataloader):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and
# speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions.
# This will return the logits rather than the loss because we have
# not provided labels.
# token_type_ids is the same as the "segment ids", which
# differentiates sentence 1 and 2 in 2-sentence tasks.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# Get the "logits" output by the model. The "logits" are the output
# values prior to applying an activation function like the softmax.
tmp_eval_loss, logits = outputs[:2]
print("Eval Loss: {} in Step: {}".format(tmp_eval_loss, dev_step))
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
logits = logits.argmax(axis=1)
label_ids = b_labels.to('cpu').numpy()
# Calculate the accuracy for this batch of test sentences.
# tmp_eval_accuracy = flat_accuracy(logits, label_ids)
tmp_eval_dict = metric.calculate_metrics(preds=logits, labels=label_ids)
# Accumulate the total accuracy.
eval_loss += tmp_eval_loss
for key in eval_dict:
if eval_dict[key] == None:
eval_dict = copy.deepcopy(tmp_eval_dict)
continue
else:
eval_dict[key] += tmp_eval_dict[key]
# Track the number of batches
nb_eval_steps += 1
# logging time
if dev_step==10:
break
t_val = format_time(time.time() - t0)
for key in eval_dict:
eval_dict[key] = eval_dict[key]/nb_eval_steps
eval_loss = eval_loss/nb_eval_steps
log_eval(epoch_i, avg_train_loss, eval_loss, eval_dict, output_dir, t_train, t_val)
# Report the final accuracy for this validation run.
for key in eval_dict:
print(key + ": {}".format(eval_dict[key]))
print(" Validation took: {:}".format(t_val))
print("")
print("Training complete!")
print("Saving the model in {} ...".format(output_dir))
model.save_pretrained(output_dir+"{}_{}.bin".format(source_task,task_name))
# TODO
if do_eval == 'yes' and not (do_train == 'yes'):
pass
# TODO
if do_predict == 'yes':
pass
if __name__ == "__main__":
main()
```
Training is based on: https://www.youtube.com/watch?v=FKlPCK1uFrc&list=PLam9sigHPGwOBuH4_4fr-XvDbe5uneaf6
Hopefully you can help me :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11816/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11815/comments | https://api.github.com/repos/huggingface/transformers/issues/11815/events | https://github.com/huggingface/transformers/issues/11815 | 897,971,221 | MDU6SXNzdWU4OTc5NzEyMjE= | 11,815 | How get sentenses embbedings from TFBertForMaskedLM | {
"login": "resquilleur",
"id": 57857889,
"node_id": "MDQ6VXNlcjU3ODU3ODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/57857889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/resquilleur",
"html_url": "https://github.com/resquilleur",
"followers_url": "https://api.github.com/users/resquilleur/followers",
"following_url": "https://api.github.com/users/resquilleur/following{/other_user}",
"gists_url": "https://api.github.com/users/resquilleur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/resquilleur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/resquilleur/subscriptions",
"organizations_url": "https://api.github.com/users/resquilleur/orgs",
"repos_url": "https://api.github.com/users/resquilleur/repos",
"events_url": "https://api.github.com/users/resquilleur/events{/privacy}",
"received_events_url": "https://api.github.com/users/resquilleur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"Resolved"
] | 1,621 | 1,622 | 1,622 | NONE | null | Good afternoon!
I am solving a text clustering problem by fine-tuning a trained BERT model. After seeing a number of articles on the subject, I decided to use the masking problem and the TFBertForMaskedLM model for fine-tuning. I was able to fine-tune the network on my set, and now I want to use the embbedings of this model to transform my set and feed into the clustering algorithm.
The question and problem is that the output of `bert_model.layers[0]` is `[None, max_len, emb_size]`, I get emb of each word, and I need emb of a document or sequence, any way out?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11815/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11814/comments | https://api.github.com/repos/huggingface/transformers/issues/11814/events | https://github.com/huggingface/transformers/issues/11814 | 897,970,634 | MDU6SXNzdWU4OTc5NzA2MzQ= | 11,814 | Permission error for cardiffnlp/twitter-roberta-base-emotion | {
"login": "StephenQuirolgico",
"id": 4974765,
"node_id": "MDQ6VXNlcjQ5NzQ3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4974765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephenQuirolgico",
"html_url": "https://github.com/StephenQuirolgico",
"followers_url": "https://api.github.com/users/StephenQuirolgico/followers",
"following_url": "https://api.github.com/users/StephenQuirolgico/following{/other_user}",
"gists_url": "https://api.github.com/users/StephenQuirolgico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephenQuirolgico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephenQuirolgico/subscriptions",
"organizations_url": "https://api.github.com/users/StephenQuirolgico/orgs",
"repos_url": "https://api.github.com/users/StephenQuirolgico/repos",
"events_url": "https://api.github.com/users/StephenQuirolgico/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephenQuirolgico/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @StephenQuirolgico,\r\n\r\ncould you attach a code snippet that I can copy-paste to reproduce the error? :-)",
"@patrickvonplaten, Not exactly sure what the issue was but it's working now. Thanks!"
] | 1,621 | 1,622 | 1,622 | NONE | null | @patrickvonplaten,
I'm having issues accessing the `cardiffnlp/twitter-roberta-base-emotion` model using:
```
task='emotion'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
```
When I substitute another task, such as `task='sentiment'`, it works fine. I have also tried using the `cardiffnlp/twitter-roberta-base-emotion` model within an NLP framework (AdaptNLP) but got a `permission denied` error. However, I did not receive a `permission denied` error when using the `sentiment ` task within this NLP framework. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11813/comments | https://api.github.com/repos/huggingface/transformers/issues/11813/events | https://github.com/huggingface/transformers/pull/11813 | 897,961,450 | MDExOlB1bGxSZXF1ZXN0NjQ5ODk0MjU4 | 11,813 | fix roformer config doc | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes roformer config doc
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11813/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11813",
"html_url": "https://github.com/huggingface/transformers/pull/11813",
"diff_url": "https://github.com/huggingface/transformers/pull/11813.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11813.patch",
"merged_at": 1621598771000
} |
https://api.github.com/repos/huggingface/transformers/issues/11812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11812/comments | https://api.github.com/repos/huggingface/transformers/issues/11812/events | https://github.com/huggingface/transformers/pull/11812 | 897,907,026 | MDExOlB1bGxSZXF1ZXN0NjQ5ODQ3NTI0 | 11,812 | Patch recursive import | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | The RoFormer converter requires the `JiebaPreTokenizer` which was imported at the root of the file.
This resulted in a cyclic dependency and a partially initialized module.
This PR fixes the issue by importing it only when necessary and additionally tests that the `PreTrainedTokenizerFast` can be loaded as a standalone. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11812/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11812",
"html_url": "https://github.com/huggingface/transformers/pull/11812",
"diff_url": "https://github.com/huggingface/transformers/pull/11812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11812.patch",
"merged_at": 1621594201000
} |
https://api.github.com/repos/huggingface/transformers/issues/11811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11811/comments | https://api.github.com/repos/huggingface/transformers/issues/11811/events | https://github.com/huggingface/transformers/issues/11811 | 897,876,078 | MDU6SXNzdWU4OTc4NzYwNzg= | 11,811 | GPT Neo for Sequence Classification | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@patil-suraj this may be a good first issue? Feel free to open a PR!",
"Thanks @NielsRogge .\n\nHi @patil-suraj ,\n\nIs there any workaround to make it work in my local?",
"We could for sure add `GPTNeoForSequenceClassification`. \r\n\r\nIt would be as easy as \r\n- just copying the `GPT2ForSequenceClassification` module and replacing the `GPT2` with `GPTNeo`\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1225-L1231\r\n- `config.hidden_size` instead of config.n_embd\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1232\r\n- remove the model_parallel logic\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1236-L1238\r\n- add a test in `tests/test_modeling_gpt_neo.py` similar to\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/tests/test_modeling_gpt2.py#L357\r\n\r\nMarking this as \"Good First Issue\".\r\n\r\nFeel free to take a stab if you want, I would be happy to help.",
"Hi Guys,\r\nis anyone working on this?\r\nI can make PR for this. I might also need to use it in the future.\r\n",
"Hi @bhadreshpsavani, Feel free to open a PR :) "
] | 1,621 | 1,622 | 1,622 | NONE | null | Hi,
Is there a way to use GPT Neo for classification tasks like BoolQ ?
As 'OpenAI GPT2' integration of HF has 'GPT2ForSequenceClassification', is there a similar one for GPT Neo? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11811/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11810/comments | https://api.github.com/repos/huggingface/transformers/issues/11810/events | https://github.com/huggingface/transformers/pull/11810 | 897,864,385 | MDExOlB1bGxSZXF1ZXN0NjQ5ODEyMTAy | 11,810 | Feature to use the PreTrainedTokenizerFast class as a stand-alone tokenizer | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could this also be a fallback for `AutoTokenizer` when none of the children classes match?"
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
In this PR, I propose to add the features needed to use `PreTrainedTokenizerFast` as a standalone tokenizer. These features include:
1. The ability to save a `PreTrainedTokenizerFast` tokenizer. Until now, it was not possible to `save_pretrained` ( with default values in the method) a `PreTrainedTokenizerFast` initialized from a folder containing only the `tokenizer.json`, `tokenizer_config.json` and `special_tokens_map.json` files. An error was previously returned because the `save_vocabulary` method was not implemented, which is normal when trying to use a fast tokenizer alone as it has no slow version. This feature allows this kind of use:
```
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained("SaulLu/bengali-tokenizer-v2")
tokenizer.save_pretrained("./local_tokenizer")
```
2. The ability to specify in the `config.json` file that the type of tokenizer to be loaded is `PreTrainedTokenizerFast` in order to be able to load a `PreTrainedTokenizerFast` with `AutoTokenizer`.
In this PR, I also propose the modification/addition of 3 types of tests:
- Modifications: This design change required the modification of common tests for tokenizers stored in the `tests/test_tokenization_common.py` file. To my knowledge, this is quite a different use as this is the first time a tokenizer will not have a slow/legacy version. The changes to `tests/test_tokenization_common.py` allow a test class derived from `TokenizerTesterMixin` to leave the `tokenizer_class` attribute set to None and to only set the `rust_tokenizer_class` attribute. In other words, the derived class will allow to test a tokenizer which would not have an associated slow/legacy version. As there were several possibilities to modify these tests, if you ever think that it is easier to develop these tests in another PR, I can remove this part from this PR.
- Added : Added tests for using a standalone `PreTrainedTokenizerFast` in the `tests/test_tokenization_fast.py` file. I have created a tokenizer for this and stored it [here](https://huggingface.co/robot-test/dummy-tokenizer-fast).
- Added : Added tests to load a standalone `PreTrainedTokenizerFast` via `AutoTokenizer` in the `tests/test_tokenization_auto.py ` file. I have created a tokenizer for this and stored it [here](https://huggingface.co/robot-test/dummy-tokenizer-fast-with-model-config).
This PR should make it easy to use a fast tokenizer created with the `Tokenizers` library in the `Transformers` library. A typical use case would be :
1. Create a tokenizer with `Tokenizers` library
```
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.pre_tokenizer = Whitespace()
files = [...]
tokenizer.train(files, trainer)
```
2. Adapt the tokenizer to `Transformers` library
At the end of this step, the tokenizer will be saved in a folder named `brand_new_tokenizer` and containing `tokenizer.json`, `tokenizer_config.json` and `special_tokens_map.json` files.
a. Save and initialize `PreTrainedTokenizerFast` with json file
```
tokenizer.save("tokenizer.json")
```
```
from transformers import PreTrainedTokenizerFast
from transformers.tokenization_utils import AddedToken
fast_tokenizer = PreTrainedTokenizerFast(
tokenizer_file="tokenizer.json",
model_max_length=512,
padding_side="right",
mask_token=AddedToken("[MASK]", lstrip=True, rstrip=False
)
fast_tokenizer.save_pretrained("brand_new_tokenizer")
```
b. Initialize `PreTrainedTokenizerFast` from the tokenizer object
```
from transformers import PreTrainedTokenizerFast
from transformers.tokenization_utils import AddedToken
fast_tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
model_max_length=512,
padding_side="right",
mask_token=AddedToken("[MASK]", lstrip=True, rstrip=False
)
fast_tokenizer.save_pretrained("brand_new_tokenizer")
```
3. Load tokenizer with `PreTrainedTokenizerFast`
```
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained("brand_new_tokenizer")
```
4. (Temporary solution before a next PR) Create a `config.json` file in `brand_new_tokenizer` folder and initialize a tokenizer with `AutoTokenizer`.
Config file:
```
{
"model_type": "albert",
"tokenizer_class": "PreTrainedTokenizerFast"
}
```
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("brand_new_tokenizer")
```
In one (or more) next PRs, we would still have to :
- disassociate the tokenizer from the `config.json` file so that `AutoTokenizer` can load a saved tokenizer without a model
- if necessary adjust the documentation (for example [here](https://huggingface.co/transformers/fast_tokenizers.html) )
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11810/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11810/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11810",
"html_url": "https://github.com/huggingface/transformers/pull/11810",
"diff_url": "https://github.com/huggingface/transformers/pull/11810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11810.patch",
"merged_at": 1623664724000
} |
https://api.github.com/repos/huggingface/transformers/issues/11809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11809/comments | https://api.github.com/repos/huggingface/transformers/issues/11809/events | https://github.com/huggingface/transformers/issues/11809 | 897,857,159 | MDU6SXNzdWU4OTc4NTcxNTk= | 11,809 | Wrong LayerNorm weight names in "bert-base-uncased" checkpoint ? | {
"login": "helboukkouri",
"id": 36409068,
"node_id": "MDQ6VXNlcjM2NDA5MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/36409068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helboukkouri",
"html_url": "https://github.com/helboukkouri",
"followers_url": "https://api.github.com/users/helboukkouri/followers",
"following_url": "https://api.github.com/users/helboukkouri/following{/other_user}",
"gists_url": "https://api.github.com/users/helboukkouri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helboukkouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helboukkouri/subscriptions",
"organizations_url": "https://api.github.com/users/helboukkouri/orgs",
"repos_url": "https://api.github.com/users/helboukkouri/repos",
"events_url": "https://api.github.com/users/helboukkouri/events{/privacy}",
"received_events_url": "https://api.github.com/users/helboukkouri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you try with other models?\r\n\r\nSince 4.6, it gives a similar warning for every model i try to load. For example:\r\n\r\n```python\r\nimport transformers as tr\r\n\r\ntr.AutoModel.from_pretrained(\"xlm-roberta-base\")\r\n```\r\n```bash\r\nSome weights of the model checkpoint at xlm-roberta-base were not used when initializing XLMRobertaModel: ['lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.dense.bias', 'lm_head.bias']\r\n- This IS expected if you are initializing XLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing XLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n",
"It seems like your case is a bit different. I think you are \"initializing `XLMRobertaModel` from the checkpoint of a model trained on another task\" (pretraining checkpoint). So you have some parameters that are not needed (those from the language modeling head)\r\n\r\nIn my case, it is the layer norm parameters that have the wrong name regardless of which architecture I load :)\r\n\r\nEdit: basically what I mean is that your behaviour is expected while mine is not.",
"Thanks for the heads up, I guess I need to open a new issue.",
"I'm not sure that it is an issue. It just seems that the checkpoint on the model hub was made with the LM model which explains why there are some weights that are not used in your case since you only use the \"encoder\" part. 😊",
"Prior to 4.6, it has never shown these type of warnings when downloading with an `AutoModel`, that's why I think it may be an issue. The same line of code with 4.5 doesn't trigger the warning.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,703 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.18.0-147.44.1.el8_1.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten (issue with "bert-base-uncased" checkpoint)
## Information
Model I am using (Bert, XLNet ...): BERT(base, uncased)
The problem arises when: loading "bert-base-uncased" model weights from state_dict
## To reproduce
Steps to reproduce the behavior:
1. Download model checkpoint from hub:
```
git lfs install
git clone https://huggingface.co/bert-base-uncased
```
2. Load pre-trained model from checkpoint using `.from_pretrained` (this sort of works)
```python
import torch
from transformers import BertForPreTraining
model = BertForPretraining.from_pretrained('./bert-base-uncased')
"""
[Output]:
Some weights of BertForPreTraining were not initialized from the model checkpoint at ./bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
"""
```
3. Re-load same weights, this time using `.load_state_dict`
```python
state_dict = torch.load('./bert-base-uncased/pytorch_model.bin')
model.load_state_dict(state_dict)
```
This fails and outputs:
```
RuntimeError: Error(s) in loading state_dict for BertForPreTraining:
Missing key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", "bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "cls.predictions.transform.LayerNorm.weight", "cls.predictions.transform.LayerNorm.bias", "cls.predictions.decoder.bias".
Unexpected key(s) in state_dict: "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta", "bert.encoder.layer.0.attention.output.LayerNorm.gamma", "bert.encoder.layer.0.attention.output.LayerNorm.beta", "bert.encoder.layer.0.output.LayerNorm.gamma", "bert.encoder.layer.0.output.LayerNorm.beta", "bert.encoder.layer.1.attention.output.LayerNorm.gamma", "bert.encoder.layer.1.attention.output.LayerNorm.beta", "bert.encoder.layer.1.output.LayerNorm.gamma", "bert.encoder.layer.1.output.LayerNorm.beta", "bert.encoder.layer.2.attention.output.LayerNorm.gamma", "bert.encoder.layer.2.attention.output.LayerNorm.beta", "bert.encoder.layer.2.output.LayerNorm.gamma", "bert.encoder.layer.2.output.LayerNorm.beta", "bert.encoder.layer.3.attention.output.LayerNorm.gamma", "bert.encoder.layer.3.attention.output.LayerNorm.beta", "bert.encoder.layer.3.output.LayerNorm.gamma", "bert.encoder.layer.3.output.LayerNorm.beta", "bert.encoder.layer.4.attention.output.LayerNorm.gamma", "bert.encoder.layer.4.attention.output.LayerNorm.beta", "bert.encoder.layer.4.output.LayerNorm.gamma", "bert.encoder.layer.4.output.LayerNorm.beta", "bert.encoder.layer.5.attention.output.LayerNorm.gamma", "bert.encoder.layer.5.attention.output.LayerNorm.beta", "bert.encoder.layer.5.output.LayerNorm.gamma", "bert.encoder.layer.5.output.LayerNorm.beta", "bert.encoder.layer.6.attention.output.LayerNorm.gamma", "bert.encoder.layer.6.attention.output.LayerNorm.beta", "bert.encoder.layer.6.output.LayerNorm.gamma", "bert.encoder.layer.6.output.LayerNorm.beta", "bert.encoder.layer.7.attention.output.LayerNorm.gamma", "bert.encoder.layer.7.attention.output.LayerNorm.beta", "bert.encoder.layer.7.output.LayerNorm.gamma", "bert.encoder.layer.7.output.LayerNorm.beta", "bert.encoder.layer.8.attention.output.LayerNorm.gamma", "bert.encoder.layer.8.attention.output.LayerNorm.beta", "bert.encoder.layer.8.output.LayerNorm.gamma", "bert.encoder.layer.8.output.LayerNorm.beta", "bert.encoder.layer.9.attention.output.LayerNorm.gamma", "bert.encoder.layer.9.attention.output.LayerNorm.beta", "bert.encoder.layer.9.output.LayerNorm.gamma", "bert.encoder.layer.9.output.LayerNorm.beta", "bert.encoder.layer.10.attention.output.LayerNorm.gamma", "bert.encoder.layer.10.attention.output.LayerNorm.beta", "bert.encoder.layer.10.output.LayerNorm.gamma", "bert.encoder.layer.10.output.LayerNorm.beta", "bert.encoder.layer.11.attention.output.LayerNorm.gamma", "bert.encoder.layer.11.attention.output.LayerNorm.beta", "bert.encoder.layer.11.output.LayerNorm.gamma", "bert.encoder.layer.11.output.LayerNorm.beta", "cls.predictions.transform.LayerNorm.gamma", "cls.predictions.transform.LayerNorm.beta".
```
## Expected behavior
Opening the checkpoint using `torch.load` then loading these weights using `model.load_state_dict` should result in matching all keys successfully (in particular here, all LayerNorm weights should be loaded).
## Solution?
The issue here seems to be that the weight and bias parameters in LayerNorm were renamed from gamma and beta previously but the bert-base-uncased checkpoint wasn't updated to reflect this change. I am using a somewhat older version of transformers / pytorch but this seems to be still the case in recent versions of both libraries. The test was done using the model checkpoint from the model hub on 21 May 2021. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11808/comments | https://api.github.com/repos/huggingface/transformers/issues/11808/events | https://github.com/huggingface/transformers/issues/11808 | 897,758,990 | MDU6SXNzdWU4OTc3NTg5OTA= | 11,808 | How to save and load model from local path in pipeline api ? | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think it's currently possible, you would have to specify the local path in `model` but it won't ping the custom `cache_dir`.\r\n\r\nWe would happily welcome a PR that enables that for pipelines, would you be interested in that?",
"> I don't think it's currently possible, you would have to specify the local path in `model` but it won't ping the custom `cache_dir`.\r\n> \r\n> We would happily welcome a PR that enables that for pipelines, would you be interested in that?\r\n\r\nThanks for your solution. I prefer to wait for new features in the future. "
] | 1,621 | 1,621 | 1,621 | NONE | null | In `from_pretrained` api, the model can be loaded from local path by passing the `cache_dir`. However, I have not found any parameter when using `pipeline`
for example, `nlp = pipeline("fill-mask" , model = 'distilbert-base-uncased', device=0)`
how to save the downloaded model and load it next time from local path, rather than default cache path ?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11808/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11807/comments | https://api.github.com/repos/huggingface/transformers/issues/11807/events | https://github.com/huggingface/transformers/issues/11807 | 897,745,208 | MDU6SXNzdWU4OTc3NDUyMDg= | 11,807 | version of T5 is not reported in HuggingFace models | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\n\r\nfor T5V1.1 models we explicitly mention it in the model name, for example see here \r\nhttps://huggingface.co/google/t5-v1_1-base\r\n\r\nthe model version is mentioned in the name as `v1_1`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi @patrickvonplaten, @patil-suraj,
Google T5 model has two checkpoints, of t5.0.0 and t5.1.0, the performance of the two models is very different, in huggingface mdoels it is not specified which version HuggingFace is using, could you kindly add the details?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11806/comments | https://api.github.com/repos/huggingface/transformers/issues/11806/events | https://github.com/huggingface/transformers/pull/11806 | 897,670,195 | MDExOlB1bGxSZXF1ZXN0NjQ5NjQ3MDgz | 11,806 | updated the original RAG implementation to be compatible with latest Pytorch-Lightning | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @shamanez,\r\n\r\nCould you run `make style`? \r\n\r\n@lhoestq - could you take a look as well?",
"Hey @patrickvonplaten \r\n\r\nI did run the \"make style\" and it changed following files and working alright.\r\n",
"Thanks @patrickvonplaten :)"
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | The original RAG version was not working with PL>=1.3, specially due to the fact that DDPAccelerator class has removed (Retriever Initialization of RAG). The new version of PL library advises us to use DDP pluggings as a replacement.
I also updated lightning_base.py regarding the new PL version. Now RAG works with the latest libraries.
@patrickvonplaten @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11806/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11806",
"html_url": "https://github.com/huggingface/transformers/pull/11806",
"diff_url": "https://github.com/huggingface/transformers/pull/11806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11806.patch",
"merged_at": 1623156169000
} |
https://api.github.com/repos/huggingface/transformers/issues/11805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11805/comments | https://api.github.com/repos/huggingface/transformers/issues/11805/events | https://github.com/huggingface/transformers/pull/11805 | 897,657,860 | MDExOlB1bGxSZXF1ZXN0NjQ5NjM2NDMx | 11,805 | [Deepspeed] support `zero.Init` in `from_config` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | As discussed a while ago this PR:
- adds missing support for `zero.Init` (zero3) for `from_config` (same as we have in `from_pretrained) - which allows a huge model to be loaded in small chunks per gpu at once.
- test
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11805",
"html_url": "https://github.com/huggingface/transformers/pull/11805",
"diff_url": "https://github.com/huggingface/transformers/pull/11805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11805.patch",
"merged_at": 1621613266000
} |
https://api.github.com/repos/huggingface/transformers/issues/11804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11804/comments | https://api.github.com/repos/huggingface/transformers/issues/11804/events | https://github.com/huggingface/transformers/issues/11804 | 897,652,406 | MDU6SXNzdWU4OTc2NTI0MDY= | 11,804 | Index out of range when doing manual testing for TFBertModel | {
"login": "lichenhao608",
"id": 23352637,
"node_id": "MDQ6VXNlcjIzMzUyNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/23352637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lichenhao608",
"html_url": "https://github.com/lichenhao608",
"followers_url": "https://api.github.com/users/lichenhao608/followers",
"following_url": "https://api.github.com/users/lichenhao608/following{/other_user}",
"gists_url": "https://api.github.com/users/lichenhao608/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lichenhao608/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lichenhao608/subscriptions",
"organizations_url": "https://api.github.com/users/lichenhao608/orgs",
"repos_url": "https://api.github.com/users/lichenhao608/repos",
"events_url": "https://api.github.com/users/lichenhao608/events{/privacy}",
"received_events_url": "https://api.github.com/users/lichenhao608/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you're quite right with your diagnosis. The problem is that by default, the tokenizer creates a dict of Python lists, not Tensors. Our models don't really understand those list inputs, and so you get errors. \r\n\r\nYou already found the solution of converting those lists to TF Tensors or Numpy arrays, but there is an easier way - just tell the Tokenizer that you want array output. Then you will get the dict you want, and the rest of your code will work correctly. Here's an updated code sample that returns a dict of Numpy arrays instead:\r\n\r\n```\r\ntokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = transformers.TFBertModel.from_pretrained('bert-base-uncased')\r\nmodel(**tokenizer(['i, ne'], return_tensors='np'))\r\n```",
"Thanks, that really helps!"
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: Windows 10
- Python version: 3.8.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?):2.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @Rocketknight1
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
TFBertModel, BertTokenizer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Want to check what input BertModel will take, so I tested with code
```
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
model = transformers.TFBertModel.from_pretrained('bert-base-uncased')
model(**tokenizer(['i, ne']))
```
This gives an error
```
File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 887, in call
outputs = self.bert(
File "C:\Users\liche\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 645, in call
embedding_output = self.embeddings(
File "C:\Users\liche\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 199, in call
position_embeds = tf.tile(input=position_embeds, multiples=(input_shape[0], 1, 1))
IndexError: list index out of range
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think my code should produce something, but not an error. So I check the code and find that `input_ids` for `TFBertEmbedding` is a () shape Tensor. Then tracing back to where produce it, and I finally end at function `input_processing` in modeling_tf_utils.py, and find that input_ids are a list of five Tensors, each is a shape of ().
So here comes the problem. As shown in the documentation, `TFBertModel` takes input_ids as a type of `TFModelInputType`, which only accepts either Tensor or numpy array or a list of them. My tokenizer produces ` [[101, 1045, 1010, 11265, 102]]` as input_ids. If manually converting it whole to Tensor or numpy array, I will get a shape (1,5) variable and can be successfully fed to the model and get outputs. However, if directly fed the dict to the model (as the code above), since the model only accepts Tensor or numpy array type, so it will convert the list to a type it accepts. Then it doesn't correctly covert the whole list to tensor, instead, it converts each individual element, i.e. integers, as the accepted type. And after `input_processing`, not the list of Tensor is fed to `TFEmbedding` but each individual empty shape integer Tensor. And it raises the error.
It can be solved by converting it to Tensor as desired before calling the model but my code is logically correct, and expect it works. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11803/comments | https://api.github.com/repos/huggingface/transformers/issues/11803/events | https://github.com/huggingface/transformers/issues/11803 | 897,650,375 | MDU6SXNzdWU4OTc2NTAzNzU= | 11,803 | bert model (bert-base-chinese) consumed too much memory | {
"login": "LiuChiennan",
"id": 26686108,
"node_id": "MDQ6VXNlcjI2Njg2MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/26686108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiuChiennan",
"html_url": "https://github.com/LiuChiennan",
"followers_url": "https://api.github.com/users/LiuChiennan/followers",
"following_url": "https://api.github.com/users/LiuChiennan/following{/other_user}",
"gists_url": "https://api.github.com/users/LiuChiennan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiuChiennan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiuChiennan/subscriptions",
"organizations_url": "https://api.github.com/users/LiuChiennan/orgs",
"repos_url": "https://api.github.com/users/LiuChiennan/repos",
"events_url": "https://api.github.com/users/LiuChiennan/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiuChiennan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"A batch size of 128 is a lot! Are you using batch size 128 with sequence length 128?",
"Most likely you have a large tensor size of 128 * 128 * 768 - and also depends on what type of tensor data you put int32 / float32 / float64? Try to reduce the batch size, even to 2.",
"> A batch size of 128 is a lot! Are you using batch size 128 with sequence length 128?\r\n\r\nyes, maybe batch size 128 is large, but I don't know why the memory cost becomes larger in each iteration. I mean in the first iteration(with batch size 128) it consumes 10G, and when the process goes to the second iteration(still batch size 128), it consumes 20G, and 30G,40G,.....",
"> Most likely you have a large tensor size of 128 * 128 * 768 - and also depends on what type of tensor data you put int32 / float32 / float64? Try to reduce the batch size, even to 2.\r\n\r\nThe input tensor size actually is 128*128? I am just confused why the memory cost rises in each iteration.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: Linux-4.19.117.bsk.5-amd64-x86_64-with-debian-10.7
- Python version: 3.7.3
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using Bert:
When I run a code like this:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese')
bert = AutoModel.from_pretrained('bert-base-chinese')
tokens = tokenizer(query, answer, padding=True, truncation=True, max_length=128, return_tensors="pt")
out = bert(**tokens)
```
where query and answer are both tensors with batch size 128
however, it consumes over 10G memory in this line of code, ```out = bert(**tokens)```, anyone knows why?
and in the next iteration, it consumes 20, 30, 40G memory...
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11803/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11802/comments | https://api.github.com/repos/huggingface/transformers/issues/11802/events | https://github.com/huggingface/transformers/issues/11802 | 897,636,064 | MDU6SXNzdWU4OTc2MzYwNjQ= | 11,802 | Text Generation, adding random words, weird linebreaks & symbols at random. | {
"login": "steeljardas",
"id": 84510026,
"node_id": "MDQ6VXNlcjg0NTEwMDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/84510026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steeljardas",
"html_url": "https://github.com/steeljardas",
"followers_url": "https://api.github.com/users/steeljardas/followers",
"following_url": "https://api.github.com/users/steeljardas/following{/other_user}",
"gists_url": "https://api.github.com/users/steeljardas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steeljardas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steeljardas/subscriptions",
"organizations_url": "https://api.github.com/users/steeljardas/orgs",
"repos_url": "https://api.github.com/users/steeljardas/repos",
"events_url": "https://api.github.com/users/steeljardas/events{/privacy}",
"received_events_url": "https://api.github.com/users/steeljardas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests.\r\n\r\nThanks!",
"> Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests.\r\n> \r\n> Thanks!\r\n\r\noh sorry forgot to include them.\r\n\r\n\t\ttokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') \r\n\t\tmodel = GPT2LMHeadModel.from_pretrained('gpt2-medium' , pad_token_id = tokenizer.eos_token_id)",
"> Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests.\r\n> \r\n> Thanks!\r\n\r\nif possible can you remove my account on hold on the forum? wont allow me to ask it there.\r\n\r\n\"steelhard\" is the account name.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Here's the code I'm using to generate text.
`sentence= tokenizer.encode(kw, return_tensors='pt')
output = model.generate(sentence,
max_length = 500,
no_repeat_ngram_size = 2,
do_sample=False)
text.append(tokenizer.decode(output[0], skip_special_tokens = True))`
The issue is that the output often comes like this:
`What are the benefits of using collagen?
,,,
,
,
,,
, __________________, __________
The skin that has collagen has a higher level of hydrophilic (water-loving) proteins. `
or like this:
`Yes, collagen is a natural skin-repairing substance. It is also a powerful anti-inflammatory and antiaging agent.
, and, are the most common types of collagen found in skin.`
As you can see, at the start it wrote ", and," at random and it happens EXTREMELY often, nearly in every single text generation I did.
I don't know if it's related to my settings or not but I'd appreciate all the help you guys can give. I want to get my text to be as human-readable as possible & up to 100-500 words each input. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11802/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11801/comments | https://api.github.com/repos/huggingface/transformers/issues/11801/events | https://github.com/huggingface/transformers/issues/11801 | 897,527,216 | MDU6SXNzdWU4OTc1MjcyMTY= | 11,801 | [examples] run_clm re-processes dataset on every run | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The dataset caching is all relying on the datasets library, so the issue should probably be tracked here. Especially if this is a new change: since there was no change I'm aware of in `run_clm` recently it may be coming from a change there.",
"Thank you! I will ask on the `datasets` side.",
"you scooped me Sylvain.\r\n\r\nI downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):\r\n\r\n> `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`\r\n\r\nwhile the same command with the latest version of datasets (actually starting at `1.6.0`) gives:\r\n> `{'train': [], 'validation': []}`\r\n\r\nDoes it ring any bell @lhoestq ?",
"OK, moved this to `datasets` https://github.com/huggingface/datasets/issues/2387\r\n",
"Reopening and bringing it back here:\r\n\r\nAccording to this https://github.com/huggingface/datasets/issues/2387#issuecomment-845781874\r\n\r\nwe need to change examples to add `keep_in_memory=False` - load_dataset otherwise there is no caching.\r\n\r\nhere:\r\nhttps://github.com/huggingface/transformers/blob/223943872e8c9c3fc11db3c6e93da07f5177423f/examples/pytorch/language-modeling/run_clm.py#L233\r\n\r\n",
"ok, `datasets` reverted the in-memory-datasets by default in master, so this is no longer a problem."
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | developing with `run_clm` is difficult since its startup is very slow - it rebuilds the dataset on each start.
@VictorSanh says it started to do that recently...
I think it's because it has to chunk the existing dataset into smaller pieces, it's a slow start everytime and it doesn't save these results.
So the original dataset has already been preprocessed, but it's not good enough for `run_clm.py`.
So I'm thinking perhaps for dev needs we need a dataset with short <512 entries? and then it could use it w/o additional preprocessing?
But I could be wrong I haven't investigated the reason for the slow start.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
So look at the tqdm bars before training starts to see the symptom. And this is already a very truncated dataset.
@VictorSanh, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11800/comments | https://api.github.com/repos/huggingface/transformers/issues/11800/events | https://github.com/huggingface/transformers/issues/11800 | 897,444,281 | MDU6SXNzdWU4OTc0NDQyODE= | 11,800 | CamemBert Tokenizer AttributeError: 'NoneType' object has no attribute 'tokenize' | {
"login": "Quang-Vinh",
"id": 22286515,
"node_id": "MDQ6VXNlcjIyMjg2NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/22286515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Quang-Vinh",
"html_url": "https://github.com/Quang-Vinh",
"followers_url": "https://api.github.com/users/Quang-Vinh/followers",
"following_url": "https://api.github.com/users/Quang-Vinh/following{/other_user}",
"gists_url": "https://api.github.com/users/Quang-Vinh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Quang-Vinh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Quang-Vinh/subscriptions",
"organizations_url": "https://api.github.com/users/Quang-Vinh/orgs",
"repos_url": "https://api.github.com/users/Quang-Vinh/repos",
"events_url": "https://api.github.com/users/Quang-Vinh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Quang-Vinh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you try installing `sentencepiece` to see if that solves the problem?",
"I got an error when sentencepiece wasn't installed and after installing it returned None. Trying it again now I don't see the error anymore though so I'll close the issue 🙂",
"If this was on colab it's possible that you needed the runtime to restart!"
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- tokenizers: @LysandreJik
## Information
Model I am using camemBert https://huggingface.co/camembert-base.
The problem arises when using:
* [x ] the official example scripts:
``` python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
camembert = CamembertModel.from_pretrained("camembert-base")
camembert.eval() # disable dropout (or leave in train mode to finetune)
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
```
## To reproduce
Steps to reproduce the behavior:
1. Install transformers
2. Run code
I get a AttributeError: 'NoneType' object has no attribute 'tokenize' Error as the tokenizer is None when I load from pre trained.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11800/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11800/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11799/comments | https://api.github.com/repos/huggingface/transformers/issues/11799/events | https://github.com/huggingface/transformers/issues/11799 | 897,425,049 | MDU6SXNzdWU4OTc0MjUwNDk= | 11,799 | ImportError: tokenizers>=0.10.1,<0.11 is required for a normal functioning of this module, but found tokenizers==0.8.1rc1. | {
"login": "jucho2725",
"id": 46298038,
"node_id": "MDQ6VXNlcjQ2Mjk4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/46298038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jucho2725",
"html_url": "https://github.com/jucho2725",
"followers_url": "https://api.github.com/users/jucho2725/followers",
"following_url": "https://api.github.com/users/jucho2725/following{/other_user}",
"gists_url": "https://api.github.com/users/jucho2725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jucho2725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jucho2725/subscriptions",
"organizations_url": "https://api.github.com/users/jucho2725/orgs",
"repos_url": "https://api.github.com/users/jucho2725/repos",
"events_url": "https://api.github.com/users/jucho2725/events{/privacy}",
"received_events_url": "https://api.github.com/users/jucho2725/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, the error is pretty straightforward: your Python environment has the wrong `tokenizers` version.\r\n\r\nI would suggest you reinstall tokenizers *while making sure you are in the same environment as your python runtime*: `pip install -U tokenizers`",
"`pip install -U tokenizers` does not solve the problem. And after several trials, I could not help recreating the docker container to make this work. I guess It was due to creating a new conda environment inside of a docker container. Thank you for your reply! Will close the issue. ",
"I have this identical issue. I am running python under WSL2, which is a docker container, I gather.\r\n",
"I'm having the same issue, using conda inside docker since I need to create a jupyter notebook server",
"I have the same problem.\r\n\r\nI fixed it by update python's version from 3.6 to 3.9. ",
"I try to run `pip uninstall tokenizers` for 2 times, and solved.\r\n\r\n<img width=\"703\" alt=\"image\" src=\"https://user-images.githubusercontent.com/30597946/174764042-f25d97fc-45c5-4000-8f4f-7b94e65302d3.png\">\r\n",
"Re-install transformers with a proper version will be ok. I solve it by the command: `pip install transformers==4.11.3`.",
"it works on python 3.8 when transformers==4.11.3.\r\nSo using `pip install transformers==4.11.3` for the proper installation version.\r\n\r\n3.8 and above will need to upgrade the transformers to 4.2x.xx\r\n",
"More info at sister thread:\r\nhttps://github.com/CompVis/latent-diffusion/issues/207"
] | 1,621 | 1,687 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux Mint Tricia 19.3 (ubuntu 18.04)
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.0, gpu yes
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
tokenizer: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] my own modified scripts: (give details below)
* [ ] my own task or dataset: (give details below) text generation
After upgrade to 4.6.1 (same error in 4.6.0), I have an error when I load tokenizer.
### What I have tried
I searched for a similar issue and thought that this is a possible duplicate of [this issue](https://github.com/huggingface/transformers/issues/11713), but there was no change after I apply the solution.
I uninstalled transformers and tokenizers package, reinstall those, and still there is the same issue.
## To reproduce
Steps to reproduce the behavior:
1. Import tokenizer (like below)
```
from transformers import (PreTrainedTokenizerFast,
GPT2Tokenizer,)
```
Error message
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-5-dc540cd053e1> in <module>
----> 1 from transformers import (PreTrainedTokenizerFast,
2 PreTrainedTokenizer,
3 AutoTokenizer,
4 GPT2Tokenizer,)
5
/opt/conda/lib/python3.8/site-packages/transformers/__init__.py in <module>
41
42 # Check the dependencies satisfy the minimal versions required.
---> 43 from . import dependency_versions_check
44 from .file_utils import (
45 _BaseLazyModule,
/opt/conda/lib/python3.8/site-packages/transformers/dependency_versions_check.py in <module>
39 continue # not required, check version only if installed
40
---> 41 require_version_core(deps[pkg])
42 else:
43 raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py")
/opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in require_version_core(requirement)
118 """require_version wrapper which emits a core-specific hint on failure"""
119 hint = "Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master"
--> 120 return require_version(requirement, hint)
121
122
/opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in require_version(requirement, hint)
112 if want_ver is not None:
113 for op, want_ver in wanted.items():
--> 114 _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
115
116
/opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
47 raise ValueError("want_ver is None")
48 if not ops[op](version.parse(got_ver), version.parse(want_ver)):
---> 49 raise ImportError(
50 f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}"
51 )
ImportError: tokenizers>=0.10.1,<0.11 is required for a normal functioning of this module, but found tokenizers==0.8.1rc1.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
```
## Expected behavior
Just work like before!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11799/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11798/comments | https://api.github.com/repos/huggingface/transformers/issues/11798/events | https://github.com/huggingface/transformers/pull/11798 | 897,409,860 | MDExOlB1bGxSZXF1ZXN0NjQ5NDIxNzEy | 11,798 | [Examples] create model with custom config on the fly | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So now can we activate activation checkpointing with: `--config_overrides \"gradient_checkpointing=true,use_cache=False\"`\r\n\r\n1. Should we document this somewhere? maybe `examples/pytorch/README.md` once we port this to all other examples?\r\n\r\n2. But it's only available for non-pretrained model, should I make `config_overrides` available to any model? i.e. this change:\r\n\r\n```\r\n--- a/examples/pytorch/language-modeling/run_clm.py\r\n+++ b/examples/pytorch/language-modeling/run_clm.py\r\n@@ -286,9 +286,10 @@ def main():\r\n else:\r\n config = CONFIG_MAPPING[model_args.model_type]()\r\n logger.warning(\"You are instantiating a new config instance from scratch.\")\r\n- if model_args.config_overrides is not None:\r\n- logger.info(f\"Overriding config: {model_args.config_overrides}\")\r\n- config.update_from_string(model_args.config_overrides)\r\n+\r\n+ if model_args.config_overrides is not None:\r\n+ logger.info(f\"Overriding config: {model_args.config_overrides}\")\r\n+ config.update_from_string(model_args.config_overrides)\r\n```\r\n\r\nIt could invite problems for config sections which have to match the pre-trained weights, but otherwise should give users more flexibility. e.g. allow turning caching off, grad checkpointing on and perhaps do other things that aren't impacted by pretrained weights.",
"This option does not make any sense for pretrained models: in the best case the user will get an error of weights shape mismatch, in the worst case it will just silently yield crappy results.\r\nThus, the option does not make sense IMO for scripts not used for training models from scratch, have to check manually but I think it's just the scripts for language-modeling which offer that option, so in this case only document the option in their README.",
"> [...] have to check manually but I think it's just the scripts for language-modeling which offer that option, so in this case only document the option in their README.\r\n\r\nby \"that option\" do you mean \"gradient_checkpointing\"? If so it's available in 30 models out of 59:\r\n```\r\n$ grep -Irl gradient_checkpointing src/transformers/models/*/modeling* | wc -l\r\n30\r\n$ ls -l src/transformers/models/*/modeling* | egrep -v '(flax|tf)'| wc -l\r\n59\r\n```",
"No I meant the option of training from scratch. I did double check, and it's only in the LM scripts.",
"Right, and I was talking about documenting ` --config_overrides \"gradient_checkpointing=true,use_cache=False\"`\r\n\r\nwhich could apply to any model. (but is not coded to support that at the moment).\r\n\r\nAnd you did mention elsewhere that this feature is on a todo list.",
"Yes, it will be a regular training argument in the future.",
"Excellent point, @LysandreJik.\r\n\r\nI did both.\r\n\r\nThough warning not, assert yes. \r\n"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | This PR is addressing a need to:
1. be able to quickly whip up a model of any desired size for the big-science experiments.
2. be able to activate gradient checkpointing (later addition)
We already have the functionality to create a model instead of using a pretrained one, but there was no way to control its config - it would choose the defaults of the Config object, which is very doubtful is of any practical use.
This PR:
1. adds a new `PretrainedConfig` method: `update_from_string` so one can update from a string.
```
config.update_from_string("n_embd=10,n_head=5,scale_attn_weights=false,summary_type=super_cls_index")
```
plus test.
2. adds a new `ModelArguments` arg: `--config_overrides="n_embd=1024,n_head=16,n_layer=48,n_positions=102"` which overrides the default config
3. auto-logs the resulting model size e.g.:
```
Training new model from scratch - Total size=626.69M params
```
Usage:
```
PYTHONPATH=src python examples/pytorch/language-modeling/run_clm.py --dataset_name \
"stas/openwebtext-10k" --output_dir output_dir --overwrite_output_dir --do_train --do_eval \
--max_train_samples 10000 --max_eval_samples 1000 --per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 --num_train_epochs 1 --warmup_steps 8 --block_size 64 --fp16 \
--report_to none --model_type gpt2 --tokenizer_name gpt2 \
--config_overrides "n_embd=1024,n_head=16,n_layer=48,n_positions=1024"
```
Only `run_clm.py` for this experiment.
@sgugger, @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11798/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11798",
"html_url": "https://github.com/huggingface/transformers/pull/11798",
"diff_url": "https://github.com/huggingface/transformers/pull/11798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11798.patch",
"merged_at": 1621964449000
} |
https://api.github.com/repos/huggingface/transformers/issues/11797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11797/comments | https://api.github.com/repos/huggingface/transformers/issues/11797/events | https://github.com/huggingface/transformers/issues/11797 | 897,331,299 | MDU6SXNzdWU4OTczMzEyOTk= | 11,797 | [examples] add desc to `dataset.map` to improve tqdm bars | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | https://github.com/huggingface/datasets/pull/2374 has been merged - we should deploy this feature in our examples, which would tell the user what's being processed and the tqdm bar is for.
Currently we get a bunch of bars that are absolutely meaningless and hard to understand what they do. See also: https://github.com/huggingface/datasets/issues/2330
The only issue is how to depend on `datasets` dev version, might have to wait for a new `datasets` release 1.6.3 to be able to merge such PR.
A new release should be made in the next few days I'm being told, so a PR can be made. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11797/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11797/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11796/comments | https://api.github.com/repos/huggingface/transformers/issues/11796/events | https://github.com/huggingface/transformers/issues/11796 | 897,300,869 | MDU6SXNzdWU4OTczMDA4Njk= | 11,796 | [trainer] multi-node tweaks | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm I guess there should be some argument controlling this: when I'm using multi-node I launch the command on two separate machines and have two separate terminals, so having both output the logs is helpful to know where each is at.",
"Absolutely agree for a few nodes! This becomes an issue on 64+ nodes ;) \r\n\r\nLet's have a flag that by default it logs on each node, and can be turned off if wanted.\r\n\r\nThis is all new so I'm first just sharing the things that can be improved\r\n\r\nOne other thing to figure out is pytorch error handling, when the launcher crashes it generated 64 interleaved tracebacks - impossible to understand what went wrong half the time... But that's not trainer-related..."
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | As I'm using Trainer in a multi-node setup, I will use this issue to post the things that could be improved for that type of env.
1. Repeated logging for non-rank-0 process rank-0 machine:
I gathered all these that get repeated 16 times on a 16 nodes machine:
```
[INFO|trainer.py:1145] 2021-05-20 20:16:39,037 >> ***** Running training *****
[INFO|trainer.py:1146] 2021-05-20 20:16:39,037 >> Num examples = 1000
[INFO|trainer.py:1147] 2021-05-20 20:16:39,037 >> Num Epochs = 1
[INFO|trainer.py:1148] 2021-05-20 20:16:39,037 >> Instantaneous batch size per device = 4
[INFO|trainer.py:1149] 2021-05-20 20:16:39,037 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1150] 2021-05-20 20:16:39,037 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1151] 2021-05-20 20:16:39,037 >> Total optimization steps = 4
100%|██████████| 4/4 [00:02<00:00, 1.95it/s][INFO|trainer.py:1341] 2021-05-20 20:16:41,214 >>
{'train_runtime': 2.185, 'train_samples_per_second': 1.831, 'epoch': 1.0}
Training completed. Do not forget to share your model on huggingface.co/models =)
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:2115] 2021-05-20 20:16:41,690 >> ***** Running Evaluation *****
[INFO|trainer.py:2117] 2021-05-20 20:16:41,690 >> Num examples = 200
[INFO|trainer.py:2120] 2021-05-20 20:16:41,690 >> Batch size = 4
```
Probably should check not only rank of the process, but also the rank of the machine, right?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11796/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11795/comments | https://api.github.com/repos/huggingface/transformers/issues/11795/events | https://github.com/huggingface/transformers/issues/11795 | 897,244,805 | MDU6SXNzdWU4OTcyNDQ4MDU= | 11,795 | get_length_grouped_indices() uses slow list concat | {
"login": "ctheodoris",
"id": 6326111,
"node_id": "MDQ6VXNlcjYzMjYxMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6326111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctheodoris",
"html_url": "https://github.com/ctheodoris",
"followers_url": "https://api.github.com/users/ctheodoris/followers",
"following_url": "https://api.github.com/users/ctheodoris/following{/other_user}",
"gists_url": "https://api.github.com/users/ctheodoris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctheodoris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctheodoris/subscriptions",
"organizations_url": "https://api.github.com/users/ctheodoris/orgs",
"repos_url": "https://api.github.com/users/ctheodoris/repos",
"events_url": "https://api.github.com/users/ctheodoris/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctheodoris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for looking at this optimization. It does look like a nice speedup! Do you want to open a PR with the suggested changes since you're the one who designed it?"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | Hi,
get_length_grouped_indices() in LengthGroupedSampler and DistributedLengthGroupedSampler is prohibitively slow for large number of megabatches (in my case takes hours for ~270k megabatches with 100 items each) due to slow list concatenation with sum(megabatches, []).
Concatenating the lists with sum() may be repeatedly reallocating memory with each successive concatenation (similar to performance issues with string concatenation).
[item for sublist in megabatches for item in sublist] approach appears to significantly improve speed for large megabatch number, especially for megabatches with larger number of items.
For example:
# 50,000 megabatches with 3 items each:
megabatches = [[1,2,3] for _ in range(50_000)]
%timeit [item for sublist in megabatches for item in sublist];
3.72 ms ± 75.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit sum(megabatches, []);
7.66 s ± 31.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
------------------------------------------
# 100,000 megabatches with 3 items each:
megabatches = [[1,2,3] for _ in range(100_000)]
%timeit [item for sublist in megabatches for item in sublist];
8.03 ms ± 14.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit sum(megabatches, []);
29.6 s ± 36.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
------------------------------------------
# 100,000 megabatches with 100 items each:
megabatches = [list(range(100)) for _ in range(100_000)]
%timeit [item for sublist in megabatches for item in sublist];
208 ms ± 44.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit -r1 -n1 sum(megabatches, []);
44min 3s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
Thank you for your wonderful work and consideration of this edit. @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11794/comments | https://api.github.com/repos/huggingface/transformers/issues/11794/events | https://github.com/huggingface/transformers/issues/11794 | 897,240,123 | MDU6SXNzdWU4OTcyNDAxMjM= | 11,794 | Bug in TokenClassificationPipeline | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That is true. `len(word_ref) != len(word)` is a heuristic that will work on tokenizers that use BPE `continuing_subword_prefix` concept.\r\n \r\nThe reality is that there is no consistent notion of a \"word\" within arbitrary tokenizers.\r\nThe `continuing_subword_prefix` in BPE that *can* be used makes the concept explicit but it's not in the case of GPT2 (and roberta-large) as they are supposed to be ByteLevel. (it is set to '').\r\n\r\nBecause of that, there cannot be any consistent manner to check for \"is_subword\" for these tokenizers.\r\n\r\nLet's take an example \"Hello thereHello\" with `roberta-large`.\r\n\r\n-> [ 0 31414 89 31414 2]\r\n\r\nWe have twice the same token (31414), one is not a subword, the second one is. So there can't be a perfect output in any case.\r\nToken 89 is really \" there\" the space isn't treated that differently any other characters.\r\n\r\nIs that clearer on why it fails in this use-case ?\r\n\r\nThat being said, if we can figure out a heuristic that works for both, it would be better indeed.\r\n",
"When I had a similar problem, I resolved it by checking the character before that word in the original string. In this case, if there is a space, we can include it to the word, tokenize it and join them into a single string. The `word_ref` would be changed like the follows:\r\n\r\n```python\r\nif start_ind > 0 and sentence[start_ind-1] == \" \":\r\n decoded_word_ref = \"\".join(self.tokenizer.tokenize(sentence[start_ind-1: end_ind]))\r\nelse:\r\n decoded_word_ref = sentence[start_ind:end_ind]\r\n```\r\n\r\n(I am replacing the identifier `word_ref` with `decoded_word_ref` to emphasize that it is reconstructed from the token ids and may not correspond to a valid substring in the original text)\r\nTherefore, the related code segment would be updated as follows:\r\n\r\n```python\r\n if start_ind > 0 and sentence[start_ind-1] == \" \":\r\n decoded_word_ref = \"\".join(self.tokenizer.tokenize(sentence[start_ind-1: end_ind]))\r\n else:\r\n decoded_word_ref = sentence[start_ind:end_ind]\r\n word = self.tokenizer.convert_ids_to_tokens([int(input_ids[idx])])[0]\r\n is_subword = len(decoded_word_ref) != len(word)\r\n\r\n if int(input_ids[idx]) == self.tokenizer.unk_token_id:\r\n word = decoded_word_ref\r\n is_subword = False\r\n``` \r\n\r\nNotice that, even if there were multiple whitespaces before the word, they should not cause an issue since each space would be tokenized as a separate token except the last one.\r\n\r\nAlternatively, we might use the decoded_word_ref only for determining the value of `is_subword`. After that, we can use the `word_ref` as before.\r\n",
"That wouldn't work because some byte-level tokenizers will use space as a postfix, not prefix for \"word-separation\". \r\nThis is where we would like to avoid many if conditions for every possible tokenizer.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Narsil , @LysandreJik
## Information
The problem is in TokenClassificationPipeline, in and [this](https://github.com/huggingface/transformers/blob/f4a0d6ff867e8a82a33d7a653e7d45372a463271/src/transformers/pipelines/token_classification.py#L269) and [that line](https://github.com/huggingface/transformers/blob/f4a0d6ff867e8a82a33d7a653e7d45372a463271/src/transformers/pipelines/token_classification.py#L273) .
Here the aim is to determine if the original word for that token is tokenized into multiple subwords or just a single one. The problem is some tokenizers (such as Roberta or GPT-2) tokenize the whitespace together with the subsequent word which causes a mismatch between the original word and the reconstructed word. Since we reconstruct from the tokenized input ids, a single-word token also includes a whitespace (unles it is not the first word in the sequence).
## To reproduce
Please, consider the following:
```python
from transformers import AutoTokenizer
word_ref = "Car"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
word = tokenizer.tokenize(" " + word_ref)[0]
print(word)
>>> ĠCar
is_subword = len(word_ref) != len(word)
print(is_subword)
>>> True
```
The problem I simulated occurs in my custom pipeline that inherits from TokenClassificationPipeline when I use Roberta tokenizer. I checked the tests for that pipeline and observed that a small Bert tokenizer is used. This can explain why this bug could not be catched as the Bert model tokenizes the spaces differently. If I recall correctly, it splits the words on the whitespaces, then tokenizes the words. In any case, the following result shows why Bert tokenizer does not suffer from the mentioned problem:
```python
from transformers import AutoTokenizer
word_ref = "Car"
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
word = tokenizer.tokenize(" " + word_ref)[0]
print(word)
>>> Car
is_subword = len(word_ref) != len(word)
print(is_subword)
>>> False
```
Finally, this problem might be affecting other pipelines (or inference scripts etc.) that depends on the reconstructed tokens as well.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11794/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11794/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11793/comments | https://api.github.com/repos/huggingface/transformers/issues/11793/events | https://github.com/huggingface/transformers/issues/11793 | 897,213,429 | MDU6SXNzdWU4OTcyMTM0Mjk= | 11,793 | [trainer] the noisy tensorflow loaded when asked explicitly not to load it | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, the `import Trainer` seems to be importing TensorFlow again. Let me try to see if I can remove that.",
"I messed up my branch and pushed directly on master by mistake, but I don't think it needs reverting and doing a PR since it's a short fix.\r\nShort story is that I locally have no tensorflow import with `USE_TF=0` after [this commit](https://github.com/huggingface/transformers/commit/b8697bc62216b9e2ca60811626c6a6ca992b0d34). Can you confirm?",
"I confirm. Apologies I missed that request.\r\n\r\nThank you for fixing it, @sgugger!"
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | Unrequested TF loading and its noisy disrespectful logging is back it seems:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path $MODEL \
--dataset_name $DATASET \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
```
r10i6n8: 2021-05-20 19:52:04.357654: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: [...]
r10i6n8: 2021-05-20 19:52:04.357677: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
```
I am testing multinode setups so am I'm getting hundreds of these! 256 gpus - 512 of these warnings!
How can we make sure that `USE_TF=0` is respected and `tensorflow` doesn't get loaded - I can't uninstall it since it's a shared environment.
Thank you!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11793/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11792/comments | https://api.github.com/repos/huggingface/transformers/issues/11792/events | https://github.com/huggingface/transformers/issues/11792 | 897,118,325 | MDU6SXNzdWU4OTcxMTgzMjU= | 11,792 | T5EncoderModel slower in half-precision | {
"login": "DA-L3",
"id": 33768245,
"node_id": "MDQ6VXNlcjMzNzY4MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/33768245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DA-L3",
"html_url": "https://github.com/DA-L3",
"followers_url": "https://api.github.com/users/DA-L3/followers",
"following_url": "https://api.github.com/users/DA-L3/following{/other_user}",
"gists_url": "https://api.github.com/users/DA-L3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DA-L3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DA-L3/subscriptions",
"organizations_url": "https://api.github.com/users/DA-L3/orgs",
"repos_url": "https://api.github.com/users/DA-L3/repos",
"events_url": "https://api.github.com/users/DA-L3/events{/privacy}",
"received_events_url": "https://api.github.com/users/DA-L3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stas00 or @sgugger can chime in if I'm wrong, but I believe half-precision performance improvement is strongly tied to hardware: even hardware that handles half-precision like pascal GPUs may not see a speed increase with FP16 compared to FP32, and I believe it can have the opposite effect. Could you share your setup?\r\n\r\nYou can check this thread for a similar question: https://github.com/huggingface/transformers/issues/9179",
"I don't think we have resolved this conundrum in https://github.com/huggingface/transformers/issues/9179 - it got closed w/o a resolution.\r\n\r\nRunning your test on 2 cards:\r\n\r\n```\r\nrtx-3090\r\n\r\nfp16:\r\n\r\nFull process: 7.10092830657959\r\nModel only: 1.9809677600860596\r\nToken only: 5.1195228099823\r\n\r\nfp32:\r\n\r\nFull process: 5.614374399185181\r\nModel only: 0.6039936542510986\r\nToken only: 5.009963750839233\r\n\r\ngtx-1070\r\n\r\nfp16:\r\n\r\nFull process: 17.52509307861328\r\nModel only: 12.342488050460815\r\nToken only: 5.182169198989868\r\n\r\nfp32:\r\n\r\nFull process: 5.362875461578369\r\nModel only: 0.3538181781768799\r\nToken only: 5.008580923080444\r\n```\r\n\r\nThis investigation most likely will require using torch profiler to get to the root of it.",
"Thank you for answering and reffering to the other issue, since it seems to be an ongoing mystery, I will close this issue w/o resolution for now. "
] | 1,621 | 1,622 | 1,622 | NONE | null | Hi,
I am encountering troubles in understanding why the half-precision version of the T5Encoder infers slower than the full-precision one.
## To reproduce
Starting with the `half`-precision.
```python
import torch
from transformers import T5EncoderModel, T5Tokenizer
import time
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
seq="Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet."
seq = ' '.join([seq] * 50)
model = T5EncoderModel.from_pretrained('t5-large', cache_dir='model').to(device)
model = model.half()
model = model.eval()
tokenizer = T5Tokenizer.from_pretrained('t5-large', cache_dir='model')
token_timer = time.time()
tokens = tokenizer.batch_encode_plus(seq, add_special_tokens=True, padding='longest', return_tensors='pt')
end_token = time.time()
input_ids = tokens['input_ids'].to(device)
attention_mask = tokens['attention_mask'].to(device)
model_timer = time.time()
with torch.no_grad():
ignored = model(input_ids=input_ids,attention_mask=attention_mask)
end_timer = time.time()
print(f'Full process:\t{end_timer - token_timer}')
print(f'Model only:\t{end_timer - model_timer}')
print(f'Token only:\t{end_token - token_timer}')
```
To use the `full`-precision, just drop the `model = model.half()` line.
## The output
The `half`-precision:
```
Full process: 10.929700136184692
Model only: 3.4116599559783936
Token only: 7.5169923305511475
```
The `full`-precision:
```
Full process: 7.794144153594971
Model only: 0.23117947578430176
Token only: 7.562213897705078
```
First, I would expect that the half-precision model is faster but secondly what is more confusing to me is the time difference in `Model only`, which measures the time needed to execute the the `torch.no_grad()`-part.
Is there an implementation problem in the code snippet? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11792/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11791/comments | https://api.github.com/repos/huggingface/transformers/issues/11791/events | https://github.com/huggingface/transformers/issues/11791 | 897,003,525 | MDU6SXNzdWU4OTcwMDM1MjU= | 11,791 | LongformerForSequenceClassification: global_attention_mask=None | {
"login": "jackashore",
"id": 39889276,
"node_id": "MDQ6VXNlcjM5ODg5Mjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/39889276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackashore",
"html_url": "https://github.com/jackashore",
"followers_url": "https://api.github.com/users/jackashore/followers",
"following_url": "https://api.github.com/users/jackashore/following{/other_user}",
"gists_url": "https://api.github.com/users/jackashore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackashore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackashore/subscriptions",
"organizations_url": "https://api.github.com/users/jackashore/orgs",
"repos_url": "https://api.github.com/users/jackashore/repos",
"events_url": "https://api.github.com/users/jackashore/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackashore/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi, my question is, what happens if `global_attention_mask` in `LongformerForSequenceClassification` is not stated? Does it mean that only local attention works in this case? I haven't found anything about it in the docs. Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11791/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11790/comments | https://api.github.com/repos/huggingface/transformers/issues/11790/events | https://github.com/huggingface/transformers/issues/11790 | 896,909,088 | MDU6SXNzdWU4OTY5MDkwODg= | 11,790 | facebook/mbart-large-50-one-to-many-mmt fails on Swahili | {
"login": "DCNemesis",
"id": 3616964,
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DCNemesis",
"html_url": "https://github.com/DCNemesis",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @DCNemesis \r\n\r\nDoes this happen for this specific example or for all the examples that you tried?\r\n\r\nAnd this isn't really an issue with implementation. As the model is many-to-many is not trained in every single language pair this does happen in some cases. It's likey that there's far less data for X to Swalihi translation which could be the reason for this.",
"@patil-suraj it fails every time I try English to Swahili. M2M100 does fine on the same tasks, so I'll probably just use that in this case, but it is hard to believe this is the intended behavior of mbart-large-50.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue still has not been fixed. Mbart-large-50-many-to-many-mmt has the same issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- facebook/mbart-large-50-one-to-many-mmt
- facebook/mbart-large-50-many-to-many-mmt
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the below code
```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast`
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_en = "Let's try this again..."
article_sw = 'Mzozo wa Israeli na Palestina:Marekani imekuwa ikiilinda Israel na kuifanya kutogoopa kufanya lolote'
# translate Hindi to Swahili
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(output)
# translate English to Swahili
tokenizer.src_lang = "en_XX"
encoded_en = tokenizer(article_en, return_tensors="pt")
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(output)
#translate Swahili to English
tokenizer.src_lang = 'sw_KE'
encoded_sw = tokenizer(article_sw, return_tensors="pt")
generated_tokens = model.generate(**encoded_sw, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(output)
```
## Expected behavior
Output is:
['U. N. head says there is no military solution in Syria']
["! Let's try this again..."]
['The Israeli Prime Minister in Palestine: He visited Israel and visited Israel on any day of the week. Read more']
The translation from Swahili to English works, but the translations to Swahili all end up in English.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11789/comments | https://api.github.com/repos/huggingface/transformers/issues/11789/events | https://github.com/huggingface/transformers/issues/11789 | 896,900,160 | MDU6SXNzdWU4OTY5MDAxNjA= | 11,789 | PegasusTokenizer returning None | {
"login": "akashe",
"id": 7673060,
"node_id": "MDQ6VXNlcjc2NzMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7673060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashe",
"html_url": "https://github.com/akashe",
"followers_url": "https://api.github.com/users/akashe/followers",
"following_url": "https://api.github.com/users/akashe/following{/other_user}",
"gists_url": "https://api.github.com/users/akashe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashe/subscriptions",
"organizations_url": "https://api.github.com/users/akashe/orgs",
"repos_url": "https://api.github.com/users/akashe/repos",
"events_url": "https://api.github.com/users/akashe/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @akashe,\r\n\r\nThink this error is analogs to this one: https://github.com/huggingface/transformers/issues/8864. \r\n\r\nInstalling `sentencepiece` should solve the problem :-) \r\n\r\nhttps://github.com/huggingface/transformers/issues/8864",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> Hey @akashe,\r\n> \r\n> Think this error is analogs to this one: #8864.\r\n> \r\n> Installing `sentencepiece` should solve the problem :-)\r\n> \r\n> #8864\r\n\r\nStill does not seem to work, even after installing sentencepiece",
"Same here ;(",
"Could you please update to the newest `transformers` version and check again? I cannot reproduce the error sadly",
"Hi @patrickvonplaten, checked with the newest transformers. Tokenizer is not returning None.",
"@akashe did you solve the problem later? I am having the same issue. ",
"Update to the newest version. It worked after that.",
"I got the same issue first, of getting Nonetype. To solve this, just install sentencepiece, and make sure to restart runtime."
] | 1,621 | 1,674 | 1,625 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 20.04
- Python version: Python 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: Problem in both CPU and GPU
- Using distributed or parallel set-up in script?: No
### Who can help @patrickvonplaten @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Pegasus
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to https://huggingface.co/transformers/model_doc/pegasus.html#pegasusforconditionalgeneration
2. Run the summarization example in the section
3. PegasusTokenizer.from_pretrained('google/pegasus-xsum') returns None. PegasusTokenizer also returns None for 'google/pegasus-large'
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should return a non None value.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11788/comments | https://api.github.com/repos/huggingface/transformers/issues/11788/events | https://github.com/huggingface/transformers/issues/11788 | 896,860,928 | MDU6SXNzdWU4OTY4NjA5Mjg= | 11,788 | EncoderDecoder Cross Attention Generation Output Shape does not match Documentation | {
"login": "l-salewski",
"id": 71447327,
"node_id": "MDQ6VXNlcjcxNDQ3MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/71447327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/l-salewski",
"html_url": "https://github.com/l-salewski",
"followers_url": "https://api.github.com/users/l-salewski/followers",
"following_url": "https://api.github.com/users/l-salewski/following{/other_user}",
"gists_url": "https://api.github.com/users/l-salewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/l-salewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/l-salewski/subscriptions",
"organizations_url": "https://api.github.com/users/l-salewski/orgs",
"repos_url": "https://api.github.com/users/l-salewski/repos",
"events_url": "https://api.github.com/users/l-salewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/l-salewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-143-generic-x86_64-with-glibc2.27
- Python version: 3.9.4
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (v100)
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
- encoderdecoder/text generation: @patrickvonplaten, @patil-suraj
## Information
Model I am using: EncoderDecoder with BERT
The problem arises when using:
* [x] the official example scripts: slightly modified/extended, see below
The tasks I am working on is:
* [x] my own task or dataset: just an example sentence from the docs
## To reproduce
Steps to reproduce the behavior:
1. Start with the example script from the [EncoderDecoder forward documenation](https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel.forward)
2. Remove model training and model saving and loading steps (not relevant) and configure model to return attentions
3. Check shapes of cross attention outputs of generation and forward
```python
from transformers import EncoderDecoderModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
# forward
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, output_attentions=True)
forward_cross_attentions = outputs.cross_attentions
# As described in the docs the shapes are:
# "Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)"" (here (1,12,8,8))
print(f"Elements in forward cross attention: {len(forward_cross_attentions)}")
# Yields: Elements in forward cross attention: 12
print(f"Shapes in forward cross attention: {[fca.shape for fca in forward_cross_attentions]}")
# Yields: Shapes in forward cross attention: [torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8])]
# generation
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id, return_dict_in_generate=True, output_attentions=True)
generated_cross_attentions = generated.cross_attentions
# generated_cross_attentions contains 19 elements, maybe one for each generation step (generated.sequences has 20 elements)?
print(f"Elements in generation cross attention: {len(generated_cross_attentions)}")
# Yields: Elements in generation cross attention: 19
# All of the contained cross attentions have shape (1,12,1,8)
for cross_attention in generated_cross_attentions:
print(f"Shapes in generation cross attention: {[gca.shape for gca in cross_attention]}")
# Yields: Shapes in generation cross attention: [torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8])] (repeated 19 times)
```
Furthermore, if `num_beams>1` all `num_beams*batch_size` cross attentions are returned even if `num_return_sequences == 1`.
```python
# continued from above...
# generation
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id, return_dict_in_generate=True, output_attentions=True, num_beams=10)
generated_cross_attentions = generated.cross_attentions
# generated_cross_attentions contains 19 elements, maybe one for each generation step (generated.sequences has 20 elements)?
print(f"Elements in generation cross attention: {len(generated_cross_attentions)}")
# All of the contained cross attentions have shape (10,12,1,8)
for cross_attention in generated_cross_attentions:
print(f"Shapes in generation cross attention: {[gca.shape for gca in cross_attention]}")
print(f"Shape of the generated sequences: {generated.sequences.shape}")
Yields: torch.Size([1, 20])
```
## Expected behavior
A `Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, gen_sequence_length, sequence_length)`. Ideally the cross attentions batch size should match the batch size of the `generated.sequences`.
## Work arounds
Stacking the tuples and then concatting along the dimension which is 1 like this:
```python
torch.cat([torch.stack(ca) for ca in generated_cross_attentions], dim=-2)
```
yields such a tensor of the correct shape, is that the correct way to assemble it?
For the batch size issue I haven't found a work around yet. Is it possible to retain the beam indices of the selected beams from `generate`? `output_scores` is no help, because it has the same shape as `generated.sequences`.
Any help, ideas or pointers how to work around this are highly appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11788/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11787/comments | https://api.github.com/repos/huggingface/transformers/issues/11787/events | https://github.com/huggingface/transformers/issues/11787 | 896,859,628 | MDU6SXNzdWU4OTY4NTk2Mjg= | 11,787 | GPT Neo past_key_values unexpected behaviour | {
"login": "edwinagnew",
"id": 42814611,
"node_id": "MDQ6VXNlcjQyODE0NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/42814611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwinagnew",
"html_url": "https://github.com/edwinagnew",
"followers_url": "https://api.github.com/users/edwinagnew/followers",
"following_url": "https://api.github.com/users/edwinagnew/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinagnew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwinagnew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinagnew/subscriptions",
"organizations_url": "https://api.github.com/users/edwinagnew/orgs",
"repos_url": "https://api.github.com/users/edwinagnew/repos",
"events_url": "https://api.github.com/users/edwinagnew/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwinagnew/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"I encountered a similar problem when trying to use GPT-Neo with PPLM (https://github.com/uber-research/PPLM). Seems that Neo's `past_key_values` is returning and consuming key-value tensors as well as (I'm guessing) feed-forward tensors:\r\n\r\n```python\r\ninputs = tokenizer(prompt, return_tensors='pt')\r\noutputs = model(**inputs)\r\npast = outputs.past_key_values\r\n\r\nfor idx, p in enumerate(past):\r\n print(f'{idx}: {tuple(elem.shape for elem in p)}')\r\n\r\n# output\r\n# 0: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 1: (torch.Size([1, 3, 768]),)\r\n# 2: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 3: (torch.Size([1, 3, 768]),)\r\n# 4: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 5: (torch.Size([1, 3, 768]),)\r\n# 6: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 7: (torch.Size([1, 3, 768]),)\r\n# 8: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 9: (torch.Size([1, 3, 768]),)\r\n# 10: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 11: (torch.Size([1, 3, 768]),)\r\n```\r\n\r\nGPT-2 correctly returns just the key-value tensors:\r\n\r\n```python\r\n# 0: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 1: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 2: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 3: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 4: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 5: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 6: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 7: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 8: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 9: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 10: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n# 11: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64]))\r\n```",
"After some more testing, the above seems to be because of local attention layers in GPT-Neo's default configuration. When specifying ```config = GPTNeoConfig(attention_types=[[[\"global\"], 24]])```, I get similar `past_key_values` as in GPT-2:\r\n```python\r\n# 0: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128]))\r\n# 1: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) \r\n# 2: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) \r\n# 3: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) \r\n# 4: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) \r\n# ...\r\n```\r\n\r\nI do think the [documentation](https://huggingface.co/transformers/model_doc/gpt_neo.html#transformers.GPTNeoModel.forward) for `past_key_values` should be updated since it currently says: \"with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)\"",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @patil-suraj, just checking if there is any progress on this issue or pull request #11630? That PR seems to fix the problem related to my usecase.",
"The different shape for local attention layers is because of the folding going on in the current implementation.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,631 | 1,631 | NONE | null | I have been successfully using the GPT2LMHeadModel module for text generation for some time and I recently tried to reuse the code to generate with GPTNeoForCausalLM. Though the documentations appear identical, I get the error "ValueError: not enough values to unpack (expected 2, got 1)" for the line`output, past = self.model(context, past_key_values=past, use_cache=True).values()` (which works fine for GPT2).
Is this a bug or has the documentation been copied incorrectly? Would appreciate any tips for fixing.
Many thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11786/comments | https://api.github.com/repos/huggingface/transformers/issues/11786/events | https://github.com/huggingface/transformers/pull/11786 | 896,845,411 | MDExOlB1bGxSZXF1ZXN0NjQ4OTA5OTcz | 11,786 | [RFC] Laying down building stone for more flexible ONNX export capabilities | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Example of potential command line to export `bert-base-cased` => \r\n\r\n`python3 -m transformers.onnx -f pytorch --model=bert-base-cased --features=default --optimize --optimization-level=all onnx/bert-base-cased/`",
"See the contributed docs here https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html",
"Idea: Rename the `convert_pytorch` to `export` so we have the exact same hierarchy than PyTorch: \r\n- PyTorch: `torch.onnx.export`\r\n- Transformers: `transformers.onnx.export`\r\n\r\nwdyt? ",
"That's a great idea!",
"@Narsil we moved forward on your suggestion, can you have a look _(one more time 😄)_ 🙏🏻 ",
"Hello, when we can use the transformers.onnx?",
"You already can when installing from source:\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nWe'll do a release this week (probably Thursday or Friday) and it will be in a pypi release then.",
"hi, this thread is super important. \r\nIs there support for bart text2text_generation export to onnx (more specifically for summarization tasks) ? "
] | 1,621 | 1,652 | 1,625 | MEMBER | null | This PR aims at reworking the way the ONNX export tool work by introducing a static, checked description format to provide ONNX exporters (pt almost done, TF will follow) all the required knobs.
More specifically this PR introduces the following concepts:
- `OnnxConfig` dataclass which enforces a model to be supported to describe all the properties to generate proper export
- `OnnxVariable` namedtuple which describe a variables w.r.t the name of the variable, shape and potentially how many time it's "repeated" => Useful for `past_keys`
Test case was done initially for BART model, without `use_cache=True` supports.
For the sake of completeness, dropping support for `use_cache=True` is currently needed because we have a double nested tuple at the core of the `past_keys` output structure which would require multiple level of dynamic axis, not currently supported by ONNX.
This might be something we can work on in the future, potentially introducing a ONNX compatible output structure getting rid of the nested tuples layout and activable from a config property (_to be discussed further later on_).
**Update 1:**
- I managed to enable exporting with nested structures such as `past_key_values` for GPT2.
- Need to work on enabling the same for using such values as inputs to the model
Supported models:
- [x] ALBERT
- [x] BART (with & without past)
- [x] BERT
- [x] DistilBERT
- [ ] Longformer => I've support for this, but the exporting fails because of missing ops ... need investigations.
- [x] GPT2 (with & without past)
- [x] Roberta
- [x] T5
- [x] XLM-Roberta | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11786/reactions",
"total_count": 9,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11786/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11786",
"html_url": "https://github.com/huggingface/transformers/pull/11786",
"diff_url": "https://github.com/huggingface/transformers/pull/11786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11786.patch",
"merged_at": 1625756083000
} |
https://api.github.com/repos/huggingface/transformers/issues/11785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11785/comments | https://api.github.com/repos/huggingface/transformers/issues/11785/events | https://github.com/huggingface/transformers/pull/11785 | 896,829,353 | MDExOlB1bGxSZXF1ZXN0NjQ4ODk1NjU2 | 11,785 | Fix regression in regression | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for fixing the issue! "
] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
This PR fixes the regression introduced in #11012 for regression problems with only one label (like STS-B), see discussion on #11780. I checked both `run_glue` and `run_glue_no_trainer` on this branch and get the proper results for this task now.
Fixes #11780
Fixes #11583 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11785/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11785",
"html_url": "https://github.com/huggingface/transformers/pull/11785",
"diff_url": "https://github.com/huggingface/transformers/pull/11785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11785.patch",
"merged_at": 1621518913000
} |
https://api.github.com/repos/huggingface/transformers/issues/11784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11784/comments | https://api.github.com/repos/huggingface/transformers/issues/11784/events | https://github.com/huggingface/transformers/pull/11784 | 896,806,126 | MDExOlB1bGxSZXF1ZXN0NjQ4ODc0Njcy | 11,784 | Fix release utilpattern in conf.py | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
When we applied black to the conf.py style, the line with the version changed but the pattern in our release util script was not updated. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11784/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11784",
"html_url": "https://github.com/huggingface/transformers/pull/11784",
"diff_url": "https://github.com/huggingface/transformers/pull/11784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11784.patch",
"merged_at": 1621517431000
} |
https://api.github.com/repos/huggingface/transformers/issues/11783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11783/comments | https://api.github.com/repos/huggingface/transformers/issues/11783/events | https://github.com/huggingface/transformers/issues/11783 | 896,645,300 | MDU6SXNzdWU4OTY2NDUzMDA= | 11,783 | PyInstaller Transformers runtime import error | {
"login": "PhaneendraGunda",
"id": 12506295,
"node_id": "MDQ6VXNlcjEyNTA2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/12506295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhaneendraGunda",
"html_url": "https://github.com/PhaneendraGunda",
"followers_url": "https://api.github.com/users/PhaneendraGunda/followers",
"following_url": "https://api.github.com/users/PhaneendraGunda/following{/other_user}",
"gists_url": "https://api.github.com/users/PhaneendraGunda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhaneendraGunda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhaneendraGunda/subscriptions",
"organizations_url": "https://api.github.com/users/PhaneendraGunda/orgs",
"repos_url": "https://api.github.com/users/PhaneendraGunda/repos",
"events_url": "https://api.github.com/users/PhaneendraGunda/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhaneendraGunda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the error seems to originate from PyInstaller rather than `transformers`, right? Have you reported it to the PyInstaller team?",
"> Hi, the error seems to originate from PyInstaller rather than `transformers`, right? Have you reported it to the PyInstaller team?\r\n\r\nYes @LysandreJik, I posted the same question in PyInstaller as well. But PyInstaller is working with other libraries like torch, tensroflow. It's only failing with Transformers library as it is checking the versions of all dependent libraries. Not sure exact reason.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi,
I am getting the following error while creating executable with transformers using PyInstaller
**PyInstaller: 4.3
Transformers Version: 4.6.0
736 INFO: Python: 3.8.5 (conda)
751 INFO: Platform: macOS-10.15.5**
_413560 INFO: Packages required by datasets:
['dill', 'multiprocess', 'pandas', 'tqdm', 'tqdm', 'requests', 'xxhash', 'pyarrow', 'numpy']
445137 INFO: Packages required by filelock:
[]
File "", line 2
import huggingface-hub as p
^
SyntaxError: invalid syntax
Traceback (most recent call last):
File "/Users/xxxxx/opt/anaconda3/envs/xxxxxx/lib/python3.8/site-packages/PyInstaller/utils/hooks/init.py", line 358, in get_module_file_attribute
attr = loader.get_filename(package)
AttributeError: 'NoneType' object has no attribute 'get_filename'_
transformers hook file as follows,
```
from PyInstaller.utils.hooks import collect_all
def hook(hook_api):
packages = [
'transformers',
# "Pillow",
# "black==21.4b0",
# "cookiecutter==1.7.2",
"dataclasses",
"datasets",
# "deepspeed>=0.3.16",
# "docutils==0.16.0",
# "fairscale>0.3",
# "faiss-cpu",
# "fastapi",
"filelock",
# "flake8>=3.8.3",
# "flax>=0.3.2",
# "fugashi>=1.0",
"huggingface-hub",
"importlib_metadata",
# "ipadic>=1.0.0,<2.0",
# "isort>=5.5.4",
# "jax>=0.2.8",
# "jaxlib>=0.1.59",
# "jieba",
# "keras2onnx",
# "nltk",
"numpy",
# "onnxconverter-common",
# "onnxruntime-tools>=1.4.2",
# "onnxruntime>=1.4.0",
"packaging",
# "parameterized",
# "protobuf",
# "psutil",
# "pydantic",
# "pytest",
# "pytest-sugar",
# "pytest-xdist",
# "python>=3.6.0",
# "recommonmark",
"regex",
"requests",
# "rouge-score",
# "sacrebleu>=1.4.12",
"sacremoses",
# "sagemaker>=2.31.0",
# "scikit-learn",
# "sentencepiece==0.1.91",
# "soundfile",
# "sphinx-copybutton",
# "sphinx-markdown-tables",
# "sphinx-rtd-theme==0.4.3", # sphinx-rtd-theme==0.5.0 introduced big changes in the style.
# "sphinx==3.2.1",
# "sphinxext-opengraph==0.4.1",
# "starlette",
# "tensorflow-cpu>=2.3",
# "tensorflow>=2.3",
# "timeout-decorator",
"tokenizers",
# "torch>=1.0",
# "torchaudio",
"tqdm",
# "unidic>=1.0.2",
# "unidic_lite>=1.0.7",
# "uvicorn",
]
for package in packages:
datas, binaries, hiddenimports = collect_all(package)
hook_api.add_datas(datas)
hook_api.add_binaries(binaries)
hook_api.add_imports(*hiddenimports)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11782/comments | https://api.github.com/repos/huggingface/transformers/issues/11782/events | https://github.com/huggingface/transformers/pull/11782 | 896,307,798 | MDExOlB1bGxSZXF1ZXN0NjQ4NDI5MDIy | 11,782 | [WIP] Expand `past_key_values` also during beam search in EncoderDecoder models | {
"login": "seongminp",
"id": 9260067,
"node_id": "MDQ6VXNlcjkyNjAwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9260067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seongminp",
"html_url": "https://github.com/seongminp",
"followers_url": "https://api.github.com/users/seongminp/followers",
"following_url": "https://api.github.com/users/seongminp/following{/other_user}",
"gists_url": "https://api.github.com/users/seongminp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seongminp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seongminp/subscriptions",
"organizations_url": "https://api.github.com/users/seongminp/orgs",
"repos_url": "https://api.github.com/users/seongminp/repos",
"events_url": "https://api.github.com/users/seongminp/events{/privacy}",
"received_events_url": "https://api.github.com/users/seongminp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Changed to `WIP` because right now the PR does not account for cross attentions in `past_key_values` (indices 2 and 3). \r\n\r\nCould not be certain if cross-attention matrix for each layer in `past_key_values` is always a 4-tuple for all encoder-decoder models (maybe some model does not use cross-attention even though it is an encoder-decoder model..?). \r\n\r\nThe doc does say key/value indices 2 and 3 in `past_key_values` are optional. "
] | 1,621 | 1,622 | 1,622 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11781
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. **Not yet - issue was just submitted**
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Internal change**
- [x] Did you write any new necessary tests? **No coverage branch added**
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11782",
"html_url": "https://github.com/huggingface/transformers/pull/11782",
"diff_url": "https://github.com/huggingface/transformers/pull/11782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11782.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11781/comments | https://api.github.com/repos/huggingface/transformers/issues/11781/events | https://github.com/huggingface/transformers/issues/11781 | 896,305,367 | MDU6SXNzdWU4OTYzMDUzNjc= | 11,781 | `generate` with `num_beam` > 1 does not work in EncoderDecoder models when `past` is supplied. | {
"login": "seongminp",
"id": 9260067,
"node_id": "MDQ6VXNlcjkyNjAwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9260067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seongminp",
"html_url": "https://github.com/seongminp",
"followers_url": "https://api.github.com/users/seongminp/followers",
"following_url": "https://api.github.com/users/seongminp/following{/other_user}",
"gists_url": "https://api.github.com/users/seongminp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seongminp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seongminp/subscriptions",
"organizations_url": "https://api.github.com/users/seongminp/orgs",
"repos_url": "https://api.github.com/users/seongminp/repos",
"events_url": "https://api.github.com/users/seongminp/events{/privacy}",
"received_events_url": "https://api.github.com/users/seongminp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" Hey @seongminp,\r\n\r\nThanks for the issue report. It's a rather specific use-case to pass `past_key_values` to `generate()`. Could you give me some more detail when you need to do so? ",
"Hi @patrickvonplaten!\r\n\r\nMy use-case for passing `past_key_values` to `generate` is to manipulate the encoder hidden states before passing them to decoder's cross attention. \r\n\r\nSpecifically, I am using a encoder-decoder generative (as in modeling the latent space, like GAN or VAE) text model. \r\n\r\nSeveral existing works, like [Microsoft's Optimus](https://github.com/ChunyuanLI/Optimus) and [Fang et al.](https://arxiv.org/abs/2101.00828), adds custom manipulations for key/value of decoder's cross attention.\r\n\r\nOfficial implementations of Optimus and Fang et al. are both implemented with this wonderful library, but uses a custom `generate` function because right now restrictions mentioned in this issue exists while passing `past` to `generate`.\r\n\r\nWould love to hear your feedback!",
"Hey @seongminp,\r\n\r\nThanks for the feedback! The problem is that the `past` variable strongly varies from model to model. *E.g.* Bart uses a different `past` tuple structure then `gpt2` does and `xlnet` uses a completely different structure. We would have to add a specific `prepare_cache` method to each model which seems would add to much complexity to the `generate()` method for quite a specific case IMO. Do you think we could instead solve it by just forcing the user to preprocess `past` correctly before passing it to `generate()`? E.g., the following code:\r\n\r\n```python\r\n past = tuple( \r\n (\r\n layer[0].index_select(0, expanded_return_idx).to(layer[0].device), \r\n layer[1].index_select(0, expanded_return_idx).to(layer[1].device), \r\n ) \r\n for layer in past \r\n)\r\n```\r\n\r\n could be executed by the user before calling `model.generate(input_ids, past=past)` no? \r\nWe could make a nice forum post about it so that people interested in the work mentioned above would have access to the correct pre-processing of `past` :-) \r\n\r\nWhat do you think?\r\n\r\n",
"Hi again.\r\n\r\nThat makes more sense.\r\n\r\nTrying to encompass all uses of `past` in `generate_utils` seems to be more trouble than it is worth. \r\n\r\nI'll close the pull request. Feel free to close this issue also!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Both
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. -->
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bart, T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
import torch
from transformers import BartForConditionalGeneration, BartConfig
config = BartConfig.from_pretrained('facebook/bart-base')
bart = BartForConditionalGeneration.from_pretrained('facebook/bart-base', config=config)
batch_size = 4
input_ids = torch.zeros((batch_size, 1), dtype=torch.long)
attention_mask = torch.ones((batch_size, 1))
# past_key_value: tuple of length config.n_layers with each tuple having 2 tuples each,
# of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)
embed_size_per_head = config.d_model // config.decoder_attention_heads
keys = torch.ones(config.decoder_layers, batch_size, config.decoder_attention_heads, 1, embed_size_per_head)
past = tuple((key, key) for key in keys)
# Works.
num_beams = 1
encoder_outputs = bart.get_encoder()(input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
bart.generate(input_ids=input_ids, attention_mask=attention_mask, encoder_outputs=encoder_outputs,past=past, use_cahce=True, num_beams=num_beams)
# Doesn't work.
num_beams = 3
encoder_outputs = bart.get_encoder()(input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
bart.generate(input_ids=input_ids, attention_mask=attention_mask, encoder_outputs=encoder_outputs, past=past, use_cahce=True, num_beams=num_beams)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
In the code snippet above, second call to `generate` crashes because `past_key_values` are not supplied to all beams.
This happened when `past` argument is passed to `generate` in models where `is_encoder_decoder` is `True` (issue seen in Bart and T5).
To mitigate this issue, `past` should also be expanded in `_expand_inputs_for_generation` in `generation_utils.py`. (I've noticed that, at this point in the generation process, the script looks for `past` not `past_key_values` in `model_kwargs`.)
I've submitted a pull request that applies the above mentioned patch.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11781/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11780/comments | https://api.github.com/repos/huggingface/transformers/issues/11780/events | https://github.com/huggingface/transformers/issues/11780 | 896,038,212 | MDU6SXNzdWU4OTYwMzgyMTI= | 11,780 | Unintentional(?) interface change on loss function in models didn't work well for single-column regression | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"html_url": "https://github.com/yoshitomo-matsubara",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes the change was unintentional to enable multi-label regression. I think the old\r\n```\r\nloss = loss_fct(logits.view(-1), labels.view(-1))\r\n```\r\nwill work in the case of one or several labels but might not give a clear error message if we have multiple labels but a shape error (if we have 5 possible lables but the model was configured with 4, we would see, with a batch size of 8, an error saying shape incompatibility between a tensor of size 32 and a tensor of size 40).\r\n\r\nSomething that would give a nicer error message is probably:\r\n```\r\nif self.num_labels == 1:\r\n loss = loss_fct(logits.squeeze(), labels.squeeze())\r\nelse:\r\n loss = loss_fct(logits, labels)\r\n```\r\nwhich would take care of this problem and show a clear error message.\r\n\r\nI can implement that change quickly and we should do a patch release but want to check the fix seems ok. What do you think @LysandreJik and @abhi1thakur ?",
"Thank you @sgugger for your prompt response!\r\nI also found #11583 reports a weird result with STS-B and could be fixed by the patch.",
"Indeed, I don't know why I couldn't reproduce the bad results earlier but this is definitely the same issue (I probably wasn't trying on the master branch.)"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | The recent PR #11012 changed the interface of forward function for `labels` in regression tasks as it skips `.view(-1)` in loss function like [this](https://github.com/huggingface/transformers/pull/11012/files#diff-a48ba7f6444ca4954a58f1ac3e66c7941a2bbc4615649d56b182aeac8cc36d9cL1523).
As shown below, that causes `UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1]))` with BERT in glue example code, and it looks like this change was applied **not only to BERT but also a lot of models in #11012**
To resolve it, the current example glue code needs `if` statement that transform `labels` variable before `forward` function only for regression task.
But if the interface change in the PR was unintentional, I think we should revert `.view(-1)` in loss function.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@abhi1thakur @sgugger @LysandreJik from #11012
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): `bert-base-uncased`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run
```
mkdir /tmp/stsb/ -p
python transformers/examples/pytorch/text-classification/run_glue_no_trainer.py \
--model_name_or_path bert-base-cased \
--task_name stsb \
--max_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/stsb/
```
2. We will see `UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.` which didn't appear with the previous version of transformers like 2-3 weeks ago.
```
05/19/2021 22:39:45 - INFO - __main__ - ***** Running training *****
05/19/2021 22:39:45 - INFO - __main__ - Num examples = 5749
05/19/2021 22:39:45 - INFO - __main__ - Num Epochs = 3
05/19/2021 22:39:45 - INFO - __main__ - Instantaneous batch size per device = 32
05/19/2021 22:39:45 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32
05/19/2021 22:39:45 - INFO - __main__ - Gradient Accumulation steps = 1
05/19/2021 22:39:45 - INFO - __main__ - Total optimization steps = 540
0% 0/540 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
33% 178/540 [00:24<00:47, 7.66it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([21])) that is different to the input size (torch.Size([21, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
33% 180/540 [00:24<00:43, 8.26it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([8])) that is different to the input size (torch.Size([8, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([4])) that is different to the input size (torch.Size([4, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
05/19/2021 22:40:13 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
05/19/2021 22:40:13 - INFO - __main__ - epoch 0: {'pearson': 0.40341441213742524, 'spearmanr': 0.41749739006146336}
66% 359/540 [00:52<00:24, 7.26it/s]05/19/2021 22:40:40 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
05/19/2021 22:40:40 - INFO - __main__ - epoch 1: {'pearson': 0.4407148954008369, 'spearmanr': 0.4550002378117188}
100% 539/540 [01:20<00:00, 7.42it/s]05/19/2021 22:41:08 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
05/19/2021 22:41:08 - INFO - __main__ - epoch 2: {'pearson': 0.4408745967619131, 'spearmanr': 0.43830345183360847}
Configuration saved in /tmp/stsb/config.json
Model weights saved in /tmp/stsb/pytorch_model.bin
100% 540/540 [01:24<00:00, 6.42it/s]
```
As a result, this gave me a pretty bad performance `epoch 2: {'pearson': 0.4408745967619131, 'spearmanr': 0.43830345183360847}` while they were both around 0.87 with the previous version.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The following warning (actually should be a bug) should not appear, and the validation performance pearson and spearmanr should be around 0.87 with the parameters given in the example command.
`UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.`
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11779/comments | https://api.github.com/repos/huggingface/transformers/issues/11779/events | https://github.com/huggingface/transformers/pull/11779 | 895,797,186 | MDExOlB1bGxSZXF1ZXN0NjQ3OTczMjc2 | 11,779 | Deprecate commands from the transformers-cli that are in the hf-cli | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"my reason for removing (still keeping a descriptive error obviously) rather than deprecating is that I'd love to know if people actually use those (and if they do, do they use them in scripts or manually)\r\n\r\nBut I will defer to the great transformers-maintainers as the final decision 💖",
"sounds good to me!"
] | 1,621 | 1,621 | 1,621 | MEMBER | null | Commands that are both in the `transformers-cli` and in the `huggingface-cli` are deprecated here and will be quickly removed.
I'm voting for deprecating them and not removing them even though better ways exist as I suspect some users to use the `transformers-cli` in bash scripts to automatically upload models to the hub.
Context from @julien-c:
> my thoughts is that we should deprecate the subset of transformers-cli command that are in huggingface-cli, as the commands are identical and having both is confusing.
>
> Transformers-specific commands (model conversion, new model templating) can stay in transformers-cli.
>
> What do you think? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11779/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11779/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11779",
"html_url": "https://github.com/huggingface/transformers/pull/11779",
"diff_url": "https://github.com/huggingface/transformers/pull/11779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11779.patch",
"merged_at": 1621494964000
} |
https://api.github.com/repos/huggingface/transformers/issues/11778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11778/comments | https://api.github.com/repos/huggingface/transformers/issues/11778/events | https://github.com/huggingface/transformers/pull/11778 | 895,699,977 | MDExOlB1bGxSZXF1ZXN0NjQ3ODg4NjEy | 11,778 | [Flax] Align GLUE training script with mlm training script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ran the experiment again, but testing time stayed the same for me...think it's better though to have a consistent way of handling the random keys though - so merging"
] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently running on TPUv3-8 to see if this leads to a speed-up
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11778/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11778",
"html_url": "https://github.com/huggingface/transformers/pull/11778",
"diff_url": "https://github.com/huggingface/transformers/pull/11778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11778.patch",
"merged_at": 1621586216000
} |
https://api.github.com/repos/huggingface/transformers/issues/11777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11777/comments | https://api.github.com/repos/huggingface/transformers/issues/11777/events | https://github.com/huggingface/transformers/pull/11777 | 895,696,417 | MDExOlB1bGxSZXF1ZXN0NjQ3ODg1NDgx | 11,777 | Flax Generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the `generate()` method in Flax. An in-detail explanation of the design choices can be found here: https://www.notion.so/Flax-JAX-Generation-fe0c8d9807024d41a7ed4108f71a6f18
Example generate: https://colab.research.google.com/drive/1LiVLyjfTCGJtHldfFv1F3W3khkii5_Xp?usp=sharing
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11777",
"html_url": "https://github.com/huggingface/transformers/pull/11777",
"diff_url": "https://github.com/huggingface/transformers/pull/11777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11777.patch",
"merged_at": 1622071098000
} |
https://api.github.com/repos/huggingface/transformers/issues/11776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11776/comments | https://api.github.com/repos/huggingface/transformers/issues/11776/events | https://github.com/huggingface/transformers/pull/11776 | 895,691,040 | MDExOlB1bGxSZXF1ZXN0NjQ3ODgwNzIw | 11,776 | uplaod | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11776",
"html_url": "https://github.com/huggingface/transformers/pull/11776",
"diff_url": "https://github.com/huggingface/transformers/pull/11776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11776.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11775/comments | https://api.github.com/repos/huggingface/transformers/issues/11775/events | https://github.com/huggingface/transformers/pull/11775 | 895,571,910 | MDExOlB1bGxSZXF1ZXN0NjQ3Nzc3NjM3 | 11,775 | Fix usage of head masks by TF encoder-decoder models' `generate()` function | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the addition @stancld ! \r\n\r\nI think once we fix the tests in led + T5 we can merge this one :-)",
"It also looks good to me!",
"Hey @patrickvonplaten, I haven't implemented head masking for the `generate` method for LED and T5 intentionally. The reason is that TF LED and T5 does not use head masks properly (there's an old glitch that the decoder uses encoder's `head_mask` instead of `cross_attn_head_mask`). Maybe, I can fix this issue in other PRs and then enable testing for these two models? :)",
"> Hey @patrickvonplaten, I haven't implemented head masking for the `generate` method for LED and T5 intentionally. The reason is that TF LED and T5 does not use head masks properly (there's an old glitch that the decoder uses encoder's `head_mask` instead of `cross_attn_head_mask`). Maybe, I can fix this issue in other PRs and then enable testing for these two models? :)\r\n\r\nGood for me!"
] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | TF counterpart to #11621
**Description:** It is necessary to fix head masking for LED and T5 models.
Edit: Fix for T5 - #11857
<hr>
**Reviewers:** @patrickvonplaten @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11775",
"html_url": "https://github.com/huggingface/transformers/pull/11775",
"diff_url": "https://github.com/huggingface/transformers/pull/11775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11775.patch",
"merged_at": 1622034164000
} |
https://api.github.com/repos/huggingface/transformers/issues/11774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11774/comments | https://api.github.com/repos/huggingface/transformers/issues/11774/events | https://github.com/huggingface/transformers/issues/11774 | 895,520,414 | MDU6SXNzdWU4OTU1MjA0MTQ= | 11,774 | Finetune - Helsinki-NLP/opus-mt-fr-en | {
"login": "dinosaxon",
"id": 5419441,
"node_id": "MDQ6VXNlcjU0MTk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5419441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dinosaxon",
"html_url": "https://github.com/dinosaxon",
"followers_url": "https://api.github.com/users/dinosaxon/followers",
"following_url": "https://api.github.com/users/dinosaxon/following{/other_user}",
"gists_url": "https://api.github.com/users/dinosaxon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dinosaxon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dinosaxon/subscriptions",
"organizations_url": "https://api.github.com/users/dinosaxon/orgs",
"repos_url": "https://api.github.com/users/dinosaxon/repos",
"events_url": "https://api.github.com/users/dinosaxon/events{/privacy}",
"received_events_url": "https://api.github.com/users/dinosaxon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you share your code so that we may help?\r\n\r\nI believe this is covered in the quicktour! https://huggingface.co/transformers/quicktour.html",
"here is my code\r\n\r\n`python3 /marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py \\\r\n --learning_rate=3e-5 \\\r\n --fp16 \\\r\n --gpus 1 \\\r\n --do_train \\\r\n --do_predict \\\r\n --n_val 1000 \\\r\n --val_check_interval 0.1 \\\r\n --src_lang \"fr\" \\\r\n --tgt_lang \"en\" \\\r\n --num_train_epochs 400 \\\r\n --warmup_steps 20 \\\r\n --train_batch_size 10 \\\r\n --eval_batch_size 10 \\\r\n --data_dir \"/marian/examples/test/data\" \\\r\n --output_dir \"/marian/examples/test/out\" \\\r\n --cache_dir \"/marian/examples/test/cache\" \\\r\n --max_source_length 128 \\\r\n --max_target_length 128 \\\r\n --val_max_target_length 128 \\\r\n --test_max_target_length 128 \\\r\n --model_name_or_path \"/marian/examples/test\"\r\n \"$@\"`",
"Ah, I believe this code has been deprecated for some time now. If you're looking to finetune a model on translation, may I recommend taking a look at our [translation examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) instead?",
"Thank you, I will give it a try",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi all
I am new to huggingface!
I am trying to finetune the Helsinki-NLP/opus-mt-fr-en but I am getting the error:
```
2021-05-19 14:20:33.882388: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dyn
amic library libcudart.so.11.0
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1205, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 762, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 442, in <m
odule>
main(args)
File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 381, in ma
in
model: SummarizationModule = SummarizationModule(args)
File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 65, in __i
nit__
super().__init__(hparams, num_labels=None, mode=self.mode, **kwargs)
File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 109,
in __init__
self.model = self.model_type.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 381, in from_pretrai
ned
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1207, in from_pretrained
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/marian/examples/test' at '/marian/examples/test/
pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
Could you tell me where I can set the from_tf=True?
Also, how can I convert a pytorch_model.bin to tf model?
Is there any step-by-step tutorial regarding this task?
Best
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11773/comments | https://api.github.com/repos/huggingface/transformers/issues/11773/events | https://github.com/huggingface/transformers/pull/11773 | 895,320,478 | MDExOlB1bGxSZXF1ZXN0NjQ3NTU2NDQx | 11,773 | [Demo] Slow down in TPU training | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Basically prior to this change you were running a single jitted function in each train step, and because of asynchronous dispatch it didn't have to wait until the previous step was complete until dispatching the program for the next step. But if you split an RNG in between, then JAX blocks until the previous step was complete, then dispatches and executes the split command, and then dispatches the next training step.\r\n\r\nIn short, the guideline is that each step in a training loop should be a single jitted function. If done right this should lead to close to 100% device utilization.\r\n\r\nThis is a common gotcha -- people hit this regularly, and we should help catch the slow patterns early, such that you could detect this even with a local run with no accelerator/unit test. @jheek is working on a library that would allow you to annotate code such that you'd get that kind of error or warning for this, and other cases.\r\n\r\n@jheek also said:\r\n\r\nYeah this is an example of my number 1 most common and most hurtful JAX performance gotcha that I want to catch automatically\r\n\r\nIn this case it stands out but there are more subtle variants where it's hard to spot in a review\r\n\r\nThis analysis is only true for TPU without async mode enabled btw. Because all other devices have a queue that is > 1\r\n",
"Thanks a lot for this in-detail explanation @avital! \r\n\r\nAlso pinging @sgugger @stas00 @mfuntowicz - might be interesting to read :-) ",
"(I guess really this means that `run_glue_flax.py` could be made faster? /cc @marcvanzee )",
"> (I guess really this means that `run_glue_flax.py` could be made faster? /cc @marcvanzee )\r\n\r\nYeah, I'm currently testing it actually, see here: https://github.com/huggingface/transformers/pull/11778 .\r\nWill report results tomorrow",
"Great that you discovered this! I actually didn't notice the bug, and since training was already fast enough I didn't look into it. Curious to see whether we will get even more speedup!",
"Reran, the experiments - got a small speed-up on TPU. Here the new numbers: https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification#runtime-evaluation"
] | 1,621 | 1,621 | 1,621 | MEMBER | null | @avital @marcvanzee - I wanted to align `run_mlm_flax.py` more with `run_glue_flax.py` and noticed that by doing the change as shown in this PR, training on TPU slows down very significantly by ca. ~40%.
Currently, [`run_glue_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/text-classification/run_flax_glue.py) and [`run_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_flax_mlm.py) deal slightly differently with the PRNG key: `run_mlm_flax.py` splits the key inside the training step while `run_glue_flax.py` does so before the train step and shards it then before passing it to the train loop. It seems that `run_mlm_flax.py` is significantly faster on TPU.
Do you by any chance have good explanations for that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11773/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11773",
"html_url": "https://github.com/huggingface/transformers/pull/11773",
"diff_url": "https://github.com/huggingface/transformers/pull/11773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11773.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11772/comments | https://api.github.com/repos/huggingface/transformers/issues/11772/events | https://github.com/huggingface/transformers/issues/11772 | 895,278,537 | MDU6SXNzdWU4OTUyNzg1Mzc= | 11,772 | Different performance when training different transformers version | {
"login": "quancq",
"id": 22343093,
"node_id": "MDQ6VXNlcjIyMzQzMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22343093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quancq",
"html_url": "https://github.com/quancq",
"followers_url": "https://api.github.com/users/quancq/followers",
"following_url": "https://api.github.com/users/quancq/following{/other_user}",
"gists_url": "https://api.github.com/users/quancq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quancq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quancq/subscriptions",
"organizations_url": "https://api.github.com/users/quancq/orgs",
"repos_url": "https://api.github.com/users/quancq/repos",
"events_url": "https://api.github.com/users/quancq/events{/privacy}",
"received_events_url": "https://api.github.com/users/quancq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We do not guarantee the exact reproducibility of training between versions, only with the same version (PyTorch does the same by the way). Are you using the Trainer API? If this is the case, I believe it's the work done to ensure full reproducibility for checkpoints (e.g. you get to the same results training from scratch or resuming from a checkpoint) that is probably creating this difference, as the way the training data was shuffled has been changed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6 and 4.5
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): 1.8.0 GPU
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @sgugger
Models: PhoBERT (RoBERTa based)
Model hub: https://huggingface.co/vinai/phobert-base/
## Information
Model I am using (PhoBERT ...):
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## Expected behavior
Training loss, dev loss, dev F1 in each epoch different when training model with transformers version 4.5 and 4.6.
Have anyone meet same this problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11771/comments | https://api.github.com/repos/huggingface/transformers/issues/11771/events | https://github.com/huggingface/transformers/pull/11771 | 895,265,066 | MDExOlB1bGxSZXF1ZXN0NjQ3NTA3Njc1 | 11,771 | Add DOI badge to README | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
Add DOI badge to README, as explained in https://guides.github.com/activities/citable-code/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11771/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11771",
"html_url": "https://github.com/huggingface/transformers/pull/11771",
"diff_url": "https://github.com/huggingface/transformers/pull/11771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11771.patch",
"merged_at": 1621432137000
} |
https://api.github.com/repos/huggingface/transformers/issues/11770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11770/comments | https://api.github.com/repos/huggingface/transformers/issues/11770/events | https://github.com/huggingface/transformers/pull/11770 | 895,191,182 | MDExOlB1bGxSZXF1ZXN0NjQ3NDQzNjA2 | 11,770 | [T5 failing CI] Fix generate test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes wrong device placement as introduced in #11621
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11770",
"html_url": "https://github.com/huggingface/transformers/pull/11770",
"diff_url": "https://github.com/huggingface/transformers/pull/11770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11770.patch",
"merged_at": 1621416678000
} |
https://api.github.com/repos/huggingface/transformers/issues/11769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11769/comments | https://api.github.com/repos/huggingface/transformers/issues/11769/events | https://github.com/huggingface/transformers/issues/11769 | 895,125,646 | MDU6SXNzdWU4OTUxMjU2NDY= | 11,769 | Trainer removes newer checkpoints, not older. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please give us a command that reproduces the bug as your indications are too vague to reproduce. Also make sure you are using a source install as there was a bug recently fixed with the checkpoints (though it was with `load_best_model_at_end=True` which I have no idea if you're using).",
"I'm using load_best_model_at_end=True, but this happens way before the end, so I think this is a separate issue.\r\nHere's the command I'm using:\r\n\r\n```\r\npython -u -m torch.distributed.launch --nproc_per_node=8 /home/ubuntu/transformers/examples/research_projects/mlm_wwm/run_mlm_wwm.py \\\r\n --model_name_or_path ./deberta_3004/checkpoint-274200 \\\r\n --config_name ./config_deberta/config.json \\\r\n --tokenizer_name ./deberta_tokenizer_1304 \\\r\n --train_file ./suc_cleaned_1805.txt \\\r\n --validation_file ./final_valid.txt \\\r\n --output_dir ./deberta_3004 \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluation_strategy steps \\\r\n --per_device_train_batch_size 24 \\\r\n --per_device_eval_batch_size 48 \\\r\n --gradient_accumulation_steps 11 \\\r\n --learning_rate 2e-4 \\\r\n --save_steps 200 \\\r\n --logging_steps 200 \\\r\n --overwrite_cache \\\r\n --max_seq_length 512 \\\r\n --eval_accumulation_steps 10 \\\r\n --load_best_model_at_end \\\r\n --run_name deberta_1404 \\\r\n --save_total_limit 50 --warmup_steps 7000 \\\r\n --adam_beta2 0.999 --adam_epsilon 1e-6 --weight_decay 0.01 --num_train_epochs 1 --max_steps 1000000 --preprocessing_num_workers 96 --fp16 --dataloader_num_workers 24 --ignore_data_skip\r\n\r\n```",
"Please retry on a master branch then. As I said, the bug of deleting newer checkpoints with `load_best_model_at_end=True` has been fixed by #11748. The bug was happening before the end, so I think you are experimenting the same one.",
"Okay, I'll retry re-installing from master then :) Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik @patrickvonplaten @stas00 @sgugger
## Information
Model I am using (Bert, XLNet ...): DEBERTA, but which model is used is not important here.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Train with trainer for more than 200k steps with save_total_limit to 50 for example, and logging steps to 200.
2. Observe how this bug makes you lose your most recent progress and it removes your newest checkpoints, which costs money, as you have been training without saving the newest checkpoints (it removes them just after saving them).
## Expected behavior
It is expected that the trainer doesn't remove the newest checkpoints, but the oldest ones, when you set the save_total_limit. This happens over 200k steps. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11768/comments | https://api.github.com/repos/huggingface/transformers/issues/11768/events | https://github.com/huggingface/transformers/issues/11768 | 895,115,179 | MDU6SXNzdWU4OTUxMTUxNzk= | 11,768 | DataCollatorForWholeWordMask only works for BERT, and nothing is said in the docstring. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes merging it was a mistake. It will be removed when we have something better in the future.",
"@sgugger Could you please tell me how could I adapt it for a general fast tokenizer? Or at least how would you do it for a ByteBPETokenizer like Roberta's or Deberta's?",
"I haven't dug into this, but it should probably leverage the `word_ids` the fast tokenizer provide to be more general.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I just ran into the same problem. Is somebody working on this?\r\n\r\nI need a language modeling data collator for RoBERTa-style tokenizers and might as well try my hand at providing an extensible, general implementation that issues proper warnings if used on yet-unsupported tokenizer classes, if there's interest.",
"The problem is that after passing through datasets, the objects are dicts, not BatchEncoding, therefore they don't have the word_ids() method, and without that we cannot generalize Whole Word Masking. One solution is to pre tokenize and pre process the dataset inside the function you put in the datasets map, however you disable dynamic batching which is a key improvement of Roberta with respect to Bert.",
"Thank you for elaborating!\r\n\r\nSimilarly to the implementation for BERT tokenizers in the current `DataCollatorForWholeWordMasking`, it is possible to obtain a word start mask for RoBERTa tokenizers by decoding every token in the collator by using something like this:\r\n\r\n```python\r\ndef _word_starts(self, inputs: torch.Tensor) -> torch.Tensor:\r\n is_word_start = torch.full_like(inputs, fill_value=False)\r\n for i, example in enumerate(torch.split(inputs, split_size_or_sections=1, dim=0)):\r\n line_mask = torch.tensor([self.tokenizer.decode([t]).startswith(\" \") for t in example.flatten().tolist()\r\n if t != self.tokenizer.pad_token_id])\r\n is_word_start[i, 0:line_mask.shape[0]] = line_mask\r\n return is_word_start\r\n```\r\n\r\nI believe that this is accurate if the tokenizer is initialized with `add_prefix_space=True`, otherwise the first word is missing, which is probably acceptable in most circumstances.\r\n\r\nIf this method is correct, it could be extended to BART tokenizers, where the condition for the first token of a word is `not tokenizer.decode([t]).startswith('##')`. I'm not sure whether this is a path one wants to take here, though."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@patrickvonplaten @LysandreJik @patil-suraj @sgugger
## Information
Model I am using (Bert, XLNet ...): DBERTA (V1) BASE
The problem arises when using:
* [x] the official example scripts: (give details below): DataCollatorForWholeWordMask
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The DataCollatorForWholeWordMask, that should be used for pre-training a Roberta model, or a Deberta model for example (as you don't have a SpanCollator), only works for BERT, and one needs to look the details in the collator code to notice this. I've been training a language model from scratch for weeks now, just to notice yesterday that your collator for WholeWordMask is wrong and only works for BERT.
Steps to reproduce the behavior:
1. Try to use the DataCollatorForWholeWordMask with any model that is not BERT.
## Expected behavior
A data collator that is included in your data collators should work generally for any model, not only for BERT. Or at least, in the Docstring it should be clear that one will waste huge amounts of money if using this collator for other models that are not BERT. This being said, I would like to know how could I use the word_ids from the tokenizer to do this, as with the TokenClassification example you provide here: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=MOsHUjgdIrIW
In this example the extension of the token labels doesn't depend on the continuation token having "##" at the beginning, but uses the word ids from the FastTokenizer. I think the DataCollatorForWholeWordMask should work generally, at least for all fast tokenizers, not only for BERT. For my case, I would like to know what can I do to at least train a little bit more with the correct objective, not with normal MLM but with WWMLM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11767/comments | https://api.github.com/repos/huggingface/transformers/issues/11767/events | https://github.com/huggingface/transformers/issues/11767 | 895,099,780 | MDU6SXNzdWU4OTUwOTk3ODA= | 11,767 | AttributeError when using EncoderDecoderModel.forward() with encoder_outputs and return_dict=True | {
"login": "aizawa-naoki",
"id": 6253193,
"node_id": "MDQ6VXNlcjYyNTMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aizawa-naoki",
"html_url": "https://github.com/aizawa-naoki",
"followers_url": "https://api.github.com/users/aizawa-naoki/followers",
"following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}",
"gists_url": "https://api.github.com/users/aizawa-naoki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aizawa-naoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aizawa-naoki/subscriptions",
"organizations_url": "https://api.github.com/users/aizawa-naoki/orgs",
"repos_url": "https://api.github.com/users/aizawa-naoki/repos",
"events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aizawa-naoki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @aizawa-naoki,\r\n\r\nThanks for your bug report here. \r\n\r\nThe problem here is that the model expects the inputs and outputs to be of type `ModelOutput` by setting `return_dict=True`.\r\nHowever, `encoder_outputs` is passed as a tuple and not as a `ModelOutput` which leads to an error. You could fix your code as follows:\r\n\r\n```python\r\nfrom transformers import EncoderDecoderModel, GPT2Tokenizer\r\nfrom transformers.modeling_outputs import BaseModelOutput\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"gpt2\", \"gpt2\")\r\n\r\nenc_input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0)\r\ndec_input_ids = torch.tensor([[model.config.decoder.eos_token_id]])\r\n\r\noutputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=None, return_dict=True)\r\n_, _, enc_h = outputs.values() # (logits, past_key_values, encoder_last_hidden_states)\r\n\r\noutputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=BaseModelOutput(last_hidden_state=enc_h), return_dict=True)# Error occured @ this line.\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.9.4
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): encoder=decoder="gpt2"
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python3:code.py
from transformers import EncoderDecoderModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = EncoderDecoderModel.from_encoder_decoder_pretrained("gpt2", "gpt2")
enc_input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
dec_input_ids = torch.tensor([[model.config.decoder.eos_token_id]])
outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=None, return_dict=True)
_, _, enc_h = outputs.values() # (logits, past_key_values, encoder_last_hidden_states)
enc_h = (enc_h, ) # *1(link below) requests that I should make tuple for "encoder_outputs" argument ↓
outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=enc_h, return_dict=True)# Error occured @ this line.
```
[*1:Doc](https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel.forward)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## behavior

## Expected behavior
No Error at modeling_encoder_decoder.py line 463.
# Cause of Error
In [modeling_encoder_decoder.py line435](https://github.com/huggingface/transformers/blob/680d181ce80070f89f0ebd49bf93ca29b24cd56b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L435), "encoder_outputs" need to behave as Iterable (and the encoder-decoder-model documentation request Tuple as argument).
But, around [line 463](https://github.com/huggingface/transformers/blob/680d181ce80070f89f0ebd49bf93ca29b24cd56b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L463), "encoder_outputs" need to behave something else.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11766/comments | https://api.github.com/repos/huggingface/transformers/issues/11766/events | https://github.com/huggingface/transformers/issues/11766 | 895,074,347 | MDU6SXNzdWU4OTUwNzQzNDc= | 11,766 | Error when using IterableDataset as train_dataset for Trainer | {
"login": "yeounyi",
"id": 41869778,
"node_id": "MDQ6VXNlcjQxODY5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/41869778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeounyi",
"html_url": "https://github.com/yeounyi",
"followers_url": "https://api.github.com/users/yeounyi/followers",
"following_url": "https://api.github.com/users/yeounyi/following{/other_user}",
"gists_url": "https://api.github.com/users/yeounyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeounyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeounyi/subscriptions",
"organizations_url": "https://api.github.com/users/yeounyi/orgs",
"repos_url": "https://api.github.com/users/yeounyi/repos",
"events_url": "https://api.github.com/users/yeounyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeounyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you print the elements you get when iterating through your dataset (and their types)? It seems like there is something wrong here. I'm not familiar with parquet but your iter is only going to return the result of the first of `df.to_batches()`, is that expected?\r\n\r\nNote that the `__len__` should not be implemented if possible as it will probably trigger other issues in the Trainer when it sees it.",
"If I run code below\r\n\r\n```python3\r\nds = CustomIterableData(file_path, tokenizer, cat_info_path = cat_info_path)\r\n\r\nfeatures = []\r\nfor i, result in enumerate(ds.__iter__()):\r\n features.append(result)\r\n if i >= 5:\r\n break\r\n```\r\n\r\nIt gives this features\r\n\r\n```\r\n[{'input_ids': tensor([ 2, 12861, 10824, 12861, 2967, 8574, 4036, 4052, 7473, 3721,\r\n 12861, 23637, 12861, 3346, 11109, 2967, 11109, 10824, 11109, 3346,\r\n 3, 3813, 24928, 3346, 3, 3, 3, 3, 3425, 4431,\r\n 4109, 16853, 3, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 3408},\r\n {'input_ids': tensor([ 2, 15991, 4051, 17692, 6399, 2967, 6426, 11620, 2720, 4104,\r\n 4183, 26227, 25, 3308, 3, 6426, 12527, 14794, 3, 26227,\r\n 3, 6701, 26227, 3, 6426, 26227, 3, 38, 4276, 4091,\r\n 3, 38, 4276, 4091, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 3318},\r\n {'input_ids': tensor([ 2, 23687, 8027, 86, 15136, 15994, 7413, 11620, 4712, 9350,\r\n 15955, 31870, 11177, 16601, 18535, 3280, 3, 9477, 10532, 3,\r\n 2298, 4525, 4566, 16601, 3, 3, 3, 3, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1]), 'label': 2742},\r\n {'input_ids': tensor([ 2, 3213, 11761, 9853, 2290, 8103, 12854, 2136, 10359, 18847,\r\n 22156, 4009, 4036, 10456, 4273, 78, 4184, 4011, 81, 4020,\r\n 76, 4097, 71, 4012, 69, 4037, 3, 10439, 10921, 3,\r\n 8103, 12873, 4031, 2136, 3, 8103, 12854, 2136, 3, 3,\r\n 3, 3213, 11761, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 2560},\r\n {'input_ids': tensor([ 2, 9204, 8006, 6988, 2744, 4162, 3283, 91, 7940, 16908,\r\n 9863, 18117, 16420, 16545, 12793, 25385, 28539, 25942, 8023, 4010,\r\n 11976, 27499, 6329, 70, 8193, 90, 16926, 13323, 23626, 4121,\r\n 87, 3, 7058, 6482, 3, 10921, 33648, 3, 8006, 16635,\r\n 10762, 3735, 3, 9420, 3, 3191, 4923, 4266, 14841, 3,\r\n 3191, 4923, 4266, 14841, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1]), 'label': 2419},\r\n {'input_ids': tensor([ 2, 8471, 24, 4060, 80, 15994, 11704, 9651, 7998, 7388,\r\n 11903, 15962, 8022, 9668, 3283, 18806, 6223, 2348, 5032, 3802,\r\n 4007, 3081, 33036, 4257, 7018, 9651, 7998, 3, 6677, 7084,\r\n 2114, 3, 6718, 6951, 8705, 3, 11791, 3, 3, 3,\r\n 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 4097}]\r\n```\r\n\r\n\r\nI changed the code as following and passed a tokenizer to Trainer to use DataCollatorWithPadding.\r\n\r\nif I remove `__len__` method it gives this error \r\n`\"train_dataset does not implement __len__, max_steps has to be specified\"`\r\n\r\nNow Trainer works fine but it only trains with 1 sample, maybe the first one.\r\nI don't know why.. Why all the data is not getting read? \r\n\r\n\r\n```python\r\nimport torch\r\nimport pyarrow.parquet as pq\r\nfrom transformers import BatchEncoding\r\n\r\nclass CustomIterableData(torch.utils.data.dataset.IterableDataset):\r\n def __init__(self, file_path, tokenizer, with_labels=False):\r\n super().__init__()\r\n self.file_path = file_path\r\n self.tokenizer = tokenizer\r\n self.with_labels = with_labels\r\n\r\n def process(self, row):\r\n inputs = str(row[2])\r\n labels = self.str2label(str(row[4]))\r\n\r\n inputs = self.tokenizer(inputs, return_tensors=\"pt\", padding=True, truncation=True)\r\n self.input_ids = [i.clone().detach() for i in inputs.input_ids]\r\n self.attention_mask = [i.clone().detach() for i in inputs.attention_mask]\r\n\r\n if self.with_labels:\r\n # indexing to squeeze(0)\r\n return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0], 'label': labels})\r\n \r\n return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0]})\r\n \r\n def __iter__(self):\r\n df = pq.read_table(source = self.file_path)\r\n for batch in df.to_batches():\r\n for row in zip(*batch.columns):\r\n yield self.process(row)\r\n\r\n def __len__(self):\r\n # yield one row at a time\r\n return 1 \r\n\r\n def str2label(self, string):\r\n ....\r\n\r\n```\r\n",
"\r\nThis code works fine. I referred to [this post](https://medium.com/speechmatics/how-to-build-a-streaming-dataloader-with-pytorch-a66dd891d9dd).\r\n`__len__` method wasn't necessary if positive `max_steps` is passed to `TrainingArguments`\r\n\r\n```python3\r\nclass CustomIterableData(torch.utils.data.dataset.IterableDataset):\r\n def __init__(self, file_path, tokenizer, with_labels=False):\r\n super().__init__()\r\n self.file_path = file_path\r\n self.tokenizer = tokenizer\r\n self.with_labels = with_labels\r\n\r\n def parse_file(self):\r\n df = pq.read_table(source = self.file_path)\r\n for batch in df.to_batches():\r\n for row in zip(*batch.columns): \r\n yield self.process(row)\r\n\r\n def process(self, row):\r\n inputs = str(row[2])\r\n labels = self.str2label(str(row[4]))\r\n\r\n inputs = self.tokenizer(inputs, return_tensors=\"pt\", padding=True, truncation=True)\r\n self.input_ids = [i.clone().detach() for i in inputs.input_ids]\r\n self.attention_mask = [i.clone().detach() for i in inputs.attention_mask]\r\n\r\n if self.with_labels:\r\n # indexing to squeeze(0)\r\n return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0], 'label': labels})\r\n \r\n return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0]})\r\n \r\n def get_stream(self):\r\n return cycle(self.parse_file())\r\n\r\n def __iter__(self):\r\n return self.get_stream()\r\n\r\n def str2label(self, string):\r\n ....\r\n ```\r\n\r\n"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | Hi, I'm using large train data (parquet format) and want to pass this as `IterableDataset` to `Trainer`.
I managed to make custom `IterableDataset`, but sadly it doesn't work.
```python
import torch
import pyarrow.parquet as pq
from transformers import BatchEncoding
class CustomIterableData(torch.utils.data.dataset.IterableDataset):
def __init__(self, file_path, tokenizer, with_labels=False):
super().__init__()
self.file_path = file_path
self.tokenizer = tokenizer
self.with_labels = with_labels
def process(self, row):
inputs = str(row[2])
labels = str(row[4])
inputs = self.tokenizer(inputs, return_tensors="pt", padding=True, truncation=True)
self.input_ids = [i.clone().detach() for i in inputs.input_ids]
self.attention_mask = [i.clone().detach() for i in inputs.attention_mask]
if self.with_labels:
yield BatchEncoding({'input_ids': self.input_ids, 'attention_mask': self.attention_mask, 'labels': labels})
yield BatchEncoding({'input_ids': self.input_ids, 'attention_mask': self.attention_mask})
def __iter__(self):
df = pq.read_table(source = self.file_path)
for batch in df.to_batches():
return map(self.process, zip(*batch.columns))
def __len__(self):
# yield one row at a time
return 1
```
This dataset gives me the error below.
```
File "main10m.py", line 128, in main
trainer.train()
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1246, in train
for step, inputs in enumerate(epoch_iterator):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 35, in fetch
return self.collate_fn(data)
File "/opt/conda/lib/python3.7/site-packages/transformers/data/data_collator.py", line 54, in default_data_collator
features = [vars(f) for f in features]
File "/opt/conda/lib/python3.7/site-packages/transformers/data/data_collator.py", line 54, in <listcomp>
features = [vars(f) for f in features]
TypeError: vars() argument must have __dict__ attribute
```
I would appreciate any help!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11765/comments | https://api.github.com/repos/huggingface/transformers/issues/11765/events | https://github.com/huggingface/transformers/issues/11765 | 895,010,625 | MDU6SXNzdWU4OTUwMTA2MjU= | 11,765 | Unable to use fill-mask pipeline on gpt-neo model | {
"login": "pidugusundeep",
"id": 10946649,
"node_id": "MDQ6VXNlcjEwOTQ2NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10946649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pidugusundeep",
"html_url": "https://github.com/pidugusundeep",
"followers_url": "https://api.github.com/users/pidugusundeep/followers",
"following_url": "https://api.github.com/users/pidugusundeep/following{/other_user}",
"gists_url": "https://api.github.com/users/pidugusundeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pidugusundeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pidugusundeep/subscriptions",
"organizations_url": "https://api.github.com/users/pidugusundeep/orgs",
"repos_url": "https://api.github.com/users/pidugusundeep/repos",
"events_url": "https://api.github.com/users/pidugusundeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/pidugusundeep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fill-mask is for encoder-only models like BERT and RoBERTa. The GPT-neo model is a decoder-only model that is capable of doing text generation. There's a `TextGenerationPipeline` available, so you might try that out. The documentation can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.TextGenerationPipeline).",
"I read through articles that this model can be used to do grammar checking? Please share relevant documentation for the same.",
"Yes it can do grammar checking similar to GPT-3, in a zero-shot manner. So you can for example try the following prompt:\r\n\r\n```\r\nOriginal: She no went to the market.\r\nStandard American English:\r\n```\r\n\r\nNormally, if GPT-neo is smart enough, it will then generate `She didn't go to the market.`\r\n\r\nThese big generation models like GPT-3 and GPT-neo can learn in a zero-shot manner, just by giving a few examples, and then ask the model what comes next. So in this case, I didn't even give one example, I asked the model directly for an answer. You can also first provide several examples (\"Original\" and \"Standard American English\" pairs) to the model, and then ask it to predict what comes next. ",
"Great, Can you please share some sample implementation on google collab notebook? ",
"I just copied the code sample from the [model card](https://huggingface.co/EleutherAI/gpt-neo-1.3B):\r\n\r\n```\r\nfrom transformers import pipeline\r\ngenerator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')\r\ngenerator(\"Original: She no went to the market. Standard American English: She didn't go to the market. Original: I loving eating pizza. Standard American English:\", do_sample=True, min_length=50)\r\n```",
"Isn't the text generation specific to generating new text with a given prompt?\r\nI tried using the same format as what you have provided and this was the response \r\nInput: `Original: She no went to the market. Standard American English:` \r\nOutput: `Original: She no went to the market. Standard American English: No, I didn’t go to the market yesterday.` \r\nThis completely changed the 3rd person to 1st person? is the format `Original:xxx Standard American English:` important and is this how it does the grammar correction?",
"Isn't the text generation specific to generating new text with a given prompt? => well, normally it is meant to generate new text given a prompt indeed. But as models like GPT-3 and GPT-neo are so powerful and are trained on a lot of data, they are capable of performing what the authors of GPT-3 call \"in-context learning\": this means that the model knows what to do just based on a given prompt. See the [GPT-3 paper](https://arxiv.org/abs/2005.14165) for more info. \r\n\r\nI've just tried it with GPT-3 and it works. However, GPT-neo doesn't seem as powerful. This is logical since GPT-3 has 175 billion parameters, whereas GPT-neo only has 1.3 billion (there's also a 2.7 billion variant available). \r\n\r\nMaybe you can try by giving more examples in the prompt. Sometimes it seems to work:\r\n\r\n\r\n",
"@NielsRogge What other [task-specific pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) can I use the gpt-neo model with?",
"I think the GPT-neo models only support the `TextGenerationPipeline`. But do not that they can be used for summarization, you can just provide a text followed by \"TLDR:\", and then the model will generate a summary. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: googleColab
- Python version:3.7
Models: `GPT neo`
Code :
```
#Import Hugging Face's Transformers
from transformers import pipeline
generator = pipeline('fill-mask', model='EleutherAI/gpt-neo-1.3B')
```
Error:

Can someone help me know what could the reason be for not able to use the fill-mask on `gpt-neo` model?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11764/comments | https://api.github.com/repos/huggingface/transformers/issues/11764/events | https://github.com/huggingface/transformers/pull/11764 | 894,895,560 | MDExOlB1bGxSZXF1ZXN0NjQ3MTkyMDY1 | 11,764 | [Wav2Vec2] SpecAugment Fast | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Noticed a small speed-up when training (1-2%) only though, and even slighly improved results. More importantly I think the code is much more readable now."
] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR refactores the SpecAugement implementation for Wav2Vec2 by fully relying on PyTorch instead of numpy.
1) The code is made more readable
- `attention_mask` is dropped since it's not required to treat masked batch indices differently
- Previously every batch_idx was forced to have the same number of masked indices (overlapping masked indices can lead to some batch_indices to have fewer masked indices). This is also not enforced here since it would make the function very dependend on the batch_size which is not good IMO. I don't see a reason why different batch_idx cannot have different # of masked indices. It was verified via training that the change does not lead to a performance drop.
2) Replacing a for loop with tensorized code lead to a 1% speed-up in training (not really noticeable thb)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11764",
"html_url": "https://github.com/huggingface/transformers/pull/11764",
"diff_url": "https://github.com/huggingface/transformers/pull/11764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11764.patch",
"merged_at": 1621947592000
} |
https://api.github.com/repos/huggingface/transformers/issues/11763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11763/comments | https://api.github.com/repos/huggingface/transformers/issues/11763/events | https://github.com/huggingface/transformers/pull/11763 | 894,637,917 | MDExOlB1bGxSZXF1ZXN0NjQ2OTcxNTk2 | 11,763 | A cleaner and more scalable implementation of symbolic tracing | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"What do you think about `dtype` being hardcoded?\r\n\r\nWhile this is OK for now hardcoding dtype might be an issue down the road. For most NLP models inputs are int, but for example for wav2vec2 they are floats. \r\n\r\nAnd would this have an impact if the final usage is in fp16 for where you used `float`.\r\n\r\nWe can't derive the dtype from the model in this context. \r\n\r\nThoughts?\r\n\r\nThis is not a showstopper to merge this, but just something to consider - I'm sure we will cross the bridge if we encounter it."
] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
This PR provides a much cleaner and less hacky implementation of symbolic tracing for models of the library.
It also provides support for more architectures:
- ALBERT
- DistilBERT
- MobileBERT
- MegatronBERT
- GPT2
- GPT Neo
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11763",
"html_url": "https://github.com/huggingface/transformers/pull/11763",
"diff_url": "https://github.com/huggingface/transformers/pull/11763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11763.patch",
"merged_at": 1621526549000
} |
https://api.github.com/repos/huggingface/transformers/issues/11762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11762/comments | https://api.github.com/repos/huggingface/transformers/issues/11762/events | https://github.com/huggingface/transformers/pull/11762 | 894,627,834 | MDExOlB1bGxSZXF1ZXN0NjQ2OTYzMDU0 | 11,762 | Fix a bug in summarization example which did not load model from config properly | {
"login": "tomy0000000",
"id": 23290356,
"node_id": "MDQ6VXNlcjIzMjkwMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23290356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomy0000000",
"html_url": "https://github.com/tomy0000000",
"followers_url": "https://api.github.com/users/tomy0000000/followers",
"following_url": "https://api.github.com/users/tomy0000000/following{/other_user}",
"gists_url": "https://api.github.com/users/tomy0000000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomy0000000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomy0000000/subscriptions",
"organizations_url": "https://api.github.com/users/tomy0000000/orgs",
"repos_url": "https://api.github.com/users/tomy0000000/repos",
"events_url": "https://api.github.com/users/tomy0000000/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomy0000000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Current example script does not load model when config is supplied, just a small bug fix.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11762/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11762",
"html_url": "https://github.com/huggingface/transformers/pull/11762",
"diff_url": "https://github.com/huggingface/transformers/pull/11762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11762.patch",
"merged_at": 1621363116000
} |
https://api.github.com/repos/huggingface/transformers/issues/11761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11761/comments | https://api.github.com/repos/huggingface/transformers/issues/11761/events | https://github.com/huggingface/transformers/issues/11761 | 894,359,820 | MDU6SXNzdWU4OTQzNTk4MjA= | 11,761 | Add batching to pipelines | {
"login": "skurzhanskyi",
"id": 17638837,
"node_id": "MDQ6VXNlcjE3NjM4ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/17638837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skurzhanskyi",
"html_url": "https://github.com/skurzhanskyi",
"followers_url": "https://api.github.com/users/skurzhanskyi/followers",
"following_url": "https://api.github.com/users/skurzhanskyi/following{/other_user}",
"gists_url": "https://api.github.com/users/skurzhanskyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skurzhanskyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skurzhanskyi/subscriptions",
"organizations_url": "https://api.github.com/users/skurzhanskyi/orgs",
"repos_url": "https://api.github.com/users/skurzhanskyi/repos",
"events_url": "https://api.github.com/users/skurzhanskyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/skurzhanskyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You may find the discussion on this PR useful: https://github.com/huggingface/transformers/pull/11251",
"Thanks for explaining this"
] | 1,621 | 1,621 | 1,621 | NONE | null | # Add batching to pipelines
Are there any plans to add a batching option to existing pipelines? Currently, the model tries to process all the input simultaneously, which sometimes (if the input is considerable) leads to memory errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11761/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11760/comments | https://api.github.com/repos/huggingface/transformers/issues/11760/events | https://github.com/huggingface/transformers/pull/11760 | 894,320,053 | MDExOlB1bGxSZXF1ZXN0NjQ2NzAxNTk1 | 11,760 | add `dataset_name` to data_args and added accuracy metric | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
Added `dataset_name` and `dataset_config_name` to `DataTrainingArguments` to use a compatible dataset from the dataset hub. I tested it with `imdb`.
Additionally also resolved `TODO` and added `load_metrics('accuracy')` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11760",
"html_url": "https://github.com/huggingface/transformers/pull/11760",
"diff_url": "https://github.com/huggingface/transformers/pull/11760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11760.patch",
"merged_at": 1621348049000
} |
https://api.github.com/repos/huggingface/transformers/issues/11759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11759/comments | https://api.github.com/repos/huggingface/transformers/issues/11759/events | https://github.com/huggingface/transformers/issues/11759 | 894,266,121 | MDU6SXNzdWU4OTQyNjYxMjE= | 11,759 | error in load of tokenizer with add_token | {
"login": "ReySadeghi",
"id": 71632819,
"node_id": "MDQ6VXNlcjcxNjMyODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/71632819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ReySadeghi",
"html_url": "https://github.com/ReySadeghi",
"followers_url": "https://api.github.com/users/ReySadeghi/followers",
"following_url": "https://api.github.com/users/ReySadeghi/following{/other_user}",
"gists_url": "https://api.github.com/users/ReySadeghi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ReySadeghi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ReySadeghi/subscriptions",
"organizations_url": "https://api.github.com/users/ReySadeghi/orgs",
"repos_url": "https://api.github.com/users/ReySadeghi/repos",
"events_url": "https://api.github.com/users/ReySadeghi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ReySadeghi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you provide the code that you used, library version, etc (everything asked in the issue template) thanks!",
"here is the code to add tokens to tokenizer and then train on the corpus as a pretrained model:\r\nafter training is finished when I want to load the tokenizer, I got Error.\r\ntransformers version : 4.5.1\r\nubuntu: 16.04\r\npython: 3.7\r\npytorch: 1.6.0+cu101\r\n\r\n.....................................................................\r\n\r\n```py\r\nfrom transformers import AutoConfig, AutoTokenizer, AutoModel\r\nfrom transformers import BertTokenizer, BertForMaskedLM\r\nfrom transformers import Trainer, TrainingArguments\r\nfrom transformers import LineByLineTextDataset\r\nfrom transformers import DataCollatorForLanguageModeling\r\nimport torch\r\n\r\nconfig = AutoConfig.from_pretrained(\"HooshvareLab/bert-fa-base-uncased\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"HooshvareLab/bert-fa-base-uncased\",max_len=256)\r\n\r\nvocab=[]\r\nwith open('vocab30k.txt', mode='r',encoding=\"utf8\",errors='ignore') as file2:\r\n for line2 in file2:\r\n line2=line2.split('\\n')[0]\r\n vocab.append(line2)\r\n\r\nvocab=vocab[:10000]\r\ntokenizer.add_tokens(vocab)\r\ntokenizer.save_pretrained(\"tokenizer/\")\r\n\r\nmodel= BertForMaskedLM.from_pretrained(\"HooshvareLab/bert-fa-base-uncased\")\r\nmodel.resize_token_embeddings(len(tokenizer)) \r\n\r\nprint(\" model load\")\r\n\r\n\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"fa_shuffeled.txt\",\r\n block_size=128,\r\n)\r\n\r\nprint(\"data load\")\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"fineTunedModel/\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=3,\r\n per_gpu_train_batch_size=16,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n prediction_loss_only=True,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\n\r\nprint(\"start train\")\r\ntrainer.train()\r\n\r\ntrainer.save_model(\"fineTunedModel2/\")\r\n```",
"I don't have access to `vocab30k` so I tried locally by adding tokens that were not part of the initial vocabulary, saving the tokenizer, reoloading it; but I couldn't manage to have the same issue. If you could share a reproducible example in colab it would be easier to see what's going on.",
"> I don't have access to `vocab30k` so I tried locally by adding tokens that were not part of the initial vocabulary, saving the tokenizer, reoloading it; but I couldn't manage to have the same issue. If you could share a reproducible example in colab it would be easier to see what's going on.\r\n\r\n\r\nthe problem was due to some new tokens that weren't in utf-8 encoding, so when I removed them the problem was solved.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | Hi,
in term of adding tokens to the Bert tokenizer, I tried to add 10k new tokens to my BERT model tokenizer and I saved the tokenizer .
So when I want to load the tokenizer to use, I got this error:
AssertionError: Non-consecutive added token '#سلام' found. Should have index 100005 but has index 100006 in saved vocabulary.
any help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11758/comments | https://api.github.com/repos/huggingface/transformers/issues/11758/events | https://github.com/huggingface/transformers/pull/11758 | 894,243,969 | MDExOlB1bGxSZXF1ZXN0NjQ2NjM2NjI5 | 11,758 | Add more subsections to main doc | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
This PR adds a subsection right before the list of supported models & the big table of supported frameworks for each model. Merging this PR would change the "welcome" doc page as follows:

and

The motivation for this PR is mainly to be able to better link to all supported models and framework. *E.g.* when asking which models are supported by Flax, it's nice to have a direct link instead of having to scroll down | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11758/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11758",
"html_url": "https://github.com/huggingface/transformers/pull/11758",
"diff_url": "https://github.com/huggingface/transformers/pull/11758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11758.patch",
"merged_at": 1621345137000
} |
https://api.github.com/repos/huggingface/transformers/issues/11757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11757/comments | https://api.github.com/repos/huggingface/transformers/issues/11757/events | https://github.com/huggingface/transformers/pull/11757 | 894,228,839 | MDExOlB1bGxSZXF1ZXN0NjQ2NjIzNTQ2 | 11,757 | Fix incorrect newline in #11650 | {
"login": "oToToT",
"id": 8341564,
"node_id": "MDQ6VXNlcjgzNDE1NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8341564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oToToT",
"html_url": "https://github.com/oToToT",
"followers_url": "https://api.github.com/users/oToToT/followers",
"following_url": "https://api.github.com/users/oToToT/following{/other_user}",
"gists_url": "https://api.github.com/users/oToToT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oToToT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oToToT/subscriptions",
"organizations_url": "https://api.github.com/users/oToToT/orgs",
"repos_url": "https://api.github.com/users/oToToT/repos",
"events_url": "https://api.github.com/users/oToToT/events{/privacy}",
"received_events_url": "https://api.github.com/users/oToToT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
I found that I broke the link by accidentally adding a newline (probably by my formatter) in #11650.
Here is a fix for that.
Sorry for any inconvenience.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11757/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11757",
"html_url": "https://github.com/huggingface/transformers/pull/11757",
"diff_url": "https://github.com/huggingface/transformers/pull/11757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11757.patch",
"merged_at": 1621344493000
} |
https://api.github.com/repos/huggingface/transformers/issues/11756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11756/comments | https://api.github.com/repos/huggingface/transformers/issues/11756/events | https://github.com/huggingface/transformers/issues/11756 | 894,170,697 | MDU6SXNzdWU4OTQxNzA2OTc= | 11,756 | word_to_tokens method of XLNetTokenizerFast not behaving correctly | {
"login": "linfeng-du",
"id": 34938020,
"node_id": "MDQ6VXNlcjM0OTM4MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/34938020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/linfeng-du",
"html_url": "https://github.com/linfeng-du",
"followers_url": "https://api.github.com/users/linfeng-du/followers",
"following_url": "https://api.github.com/users/linfeng-du/following{/other_user}",
"gists_url": "https://api.github.com/users/linfeng-du/gists{/gist_id}",
"starred_url": "https://api.github.com/users/linfeng-du/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linfeng-du/subscriptions",
"organizations_url": "https://api.github.com/users/linfeng-du/orgs",
"repos_url": "https://api.github.com/users/linfeng-du/repos",
"events_url": "https://api.github.com/users/linfeng-du/events{/privacy}",
"received_events_url": "https://api.github.com/users/linfeng-du/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks for reporting. This is related to https://github.com/huggingface/tokenizers/issues/552",
"Thanks! Could you please indicate the time this could be fixed? I'll decide whether to align it locally haha..",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
### Who can help
@LysandreJik
## Information
The `word_to_tokens` method of `XLNetTokenizerFast` seems not behaving correctly.
## To reproduce
Code below for example
```py
batch_claim = [
['Colin', 'Kaepernick', 'became', 'a'],
['Tilda', 'Swinton', 'is', 'a', 'vegan', '.']
]
batch_evidence = [
['He', 'remained', 'the', 'team', "'s", 'starting', 'quarterback'],
['Katherine', 'Matilda', '`', '`', 'Tilda', "''", 'Swinton', '-LRB-', 'born', '5', 'November', '1960']
]
tokenizer = XLNetTokenizerFast.from_pretrained('xlnet-base-cased')
# tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)
tokenized = tokenizer(
batch_claim, batch_evidence,
padding=True, truncation='do_not_truncate', is_split_into_words=True, return_tensors='pt'
)
print(tokenized)
print(tokenized.word_to_tokens(0, 0, 0))
```
gives None. (Maybe it's because that `XLNetTokenizer` pads on the front that causes this misbehavior?)
Output:
```
{'input_ids': tensor([[ 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 8041, 2066, 93, 1371, 9797, 403,
24, 4, 69, 1493, 18, 230, 17, 26, 23, 1541,
6217, 4, 3],
[15731, 1011, 22588, 577, 27, 24, 28629, 17, 9, 4,
17067, 6883, 902, 1011, 2651, 2651, 15731, 1011, 17, 12,
22588, 577, 17, 13, 1039, 12573, 13, 1094, 306, 704,
2726, 4, 3]]), 'token_type_ids': tensor([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2]]), 'attention_mask': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1]])}
None
```
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11755/comments | https://api.github.com/repos/huggingface/transformers/issues/11755/events | https://github.com/huggingface/transformers/issues/11755 | 893,989,068 | MDU6SXNzdWU4OTM5ODkwNjg= | 11,755 | A problem of Ibert IntSoftmax | {
"login": "baodii",
"id": 82791803,
"node_id": "MDQ6VXNlcjgyNzkxODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/82791803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baodii",
"html_url": "https://github.com/baodii",
"followers_url": "https://api.github.com/users/baodii/followers",
"following_url": "https://api.github.com/users/baodii/following{/other_user}",
"gists_url": "https://api.github.com/users/baodii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baodii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baodii/subscriptions",
"organizations_url": "https://api.github.com/users/baodii/orgs",
"repos_url": "https://api.github.com/users/baodii/repos",
"events_url": "https://api.github.com/users/baodii/events{/privacy}",
"received_events_url": "https://api.github.com/users/baodii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- `transformers` version: 4.6.0.dev0
- Platform: Linux-4.15.0-122-generic-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.9.0a0+git3c87fe9 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: yes
-
### Who can help
@kssteven418
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Hi, I have found a strange thing in the IntSoftmax class of Ibert.
def forward(self, x, scaling_factor):
if not self.quant_mode:
return nn.Softmax(dim=-1)(x), None
x_int = x / scaling_factor
x_int_max, _ = x_int.max(dim=-1, keepdim=True)
x_int = x_int - x_int_max
exp_int, exp_scaling_factor = self.int_exp(x_int, scaling_factor)
# Avoid overflow
exp, exp_scaling_factor = self.act(exp_int, exp_scaling_factor)
exp_int = exp / exp_scaling_factor
exp_int_sum = exp_int.sum(dim=-1, keepdim=True)
factor = floor_ste.apply(2 ** self.max_bit / exp_int_sum)
exp_int = floor_ste.apply(exp_int * factor / 2 ** (self.max_bit - self.output_bit))
scaling_factor = 1 / 2 ** self.output_bit
return exp_int * scaling_factor, scaling_factor
The code above is the forward func of IntSoftmax. And the problem is that `exp, exp_scaling_factor = self.act(exp_int, exp_scaling_factor)` in which self.act is an instance of QuantAct of which the input should be real number, but the exp_int is a quant int number. Although the result of the trained model word well, I think this is not right. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11754/comments | https://api.github.com/repos/huggingface/transformers/issues/11754/events | https://github.com/huggingface/transformers/issues/11754 | 893,784,795 | MDU6SXNzdWU4OTM3ODQ3OTU= | 11,754 | Trainer accumulates GPU usage at the beginning of each step | {
"login": "ZL92",
"id": 40026571,
"node_id": "MDQ6VXNlcjQwMDI2NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/40026571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZL92",
"html_url": "https://github.com/ZL92",
"followers_url": "https://api.github.com/users/ZL92/followers",
"following_url": "https://api.github.com/users/ZL92/following{/other_user}",
"gists_url": "https://api.github.com/users/ZL92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZL92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZL92/subscriptions",
"organizations_url": "https://api.github.com/users/ZL92/orgs",
"repos_url": "https://api.github.com/users/ZL92/repos",
"events_url": "https://api.github.com/users/ZL92/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZL92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've experienced the same issue. "
] | 1,621 | 1,621 | 1,621 | NONE | null | Hello,
My problem is that GPU usage gets increased at the beginning of each step. Although the usage gets decreased with the help of torch.cuda.empty_cache() and gc.collector() during training, OOM errors happened after a while.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Models I am using: wav2vec2 and MBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Build e2e_model with the following classes: wav2vec2_learn_repr and e2emodel
2. Feed audio and translation under the requirement of the following data_collator.
3. Model training with Trainer.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The code for reproducing the error:
```
class wav2vec2_learn_repr(Wav2Vec2PreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.wav2vec2 = Wav2Vec2Model(config)
self.dropout = nn.Dropout(config.final_dropout)
self.collapse = collapse_layer
self.init_weights = ()
def forward(self,
input_values,
attention_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
labels=None):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.wav2vec2(
input_values,
attention_mask=attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = outputs[0]
hidden_states = self.dropout(hidden_states)
collapsed_embeddings, attention_masks=cal_collapse_embeddings(hidden_states)
show_gpu(f'In wav2vec2_learn_repr before del')
del outputs, hidden_states
show_gpu(f'In wav2vec2_learn_repr after del')
torch.cuda.empty_cache()
show_gpu(f'In wav2vec2_learn_repr empty cache')
gc.collect()
show_gpu(f'In wav2vec2_learn_repr gc.collect()')
return collapsed_embeddings, attention_masks
```
```
class e2emodel(PreTrainedModel):
def __init__(self,
wav2vec2_name = "facebook/wav2vec2-large-xlsr-53",
mbart_model_name = 'facebook/mbart-large-50-many-to-many-mmt',
):
super().__init__(PretrainedConfig())
self.wav2vec2_repr_model = wav2vec2_learn_repr.from_pretrained(wav2vec2_name)
self.mbart_model = MBartForConditionalGeneration.from_pretrained(mbart_model_name)
self.wav2vec2_repr_model.to(device)
self.mbart_model.to(device)
def forward(self,
input_ids,
attention_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
labels=None):
show_gpu(f'At start')
torch.cuda.empty_cache()
show_gpu(f'empty cache')
gc.collect()
show_gpu(f'gc.collect()')
input_ids.to(device)
# print(f' inputs devices: {input_ids.device}, {labels.device}')
show_gpu(f'load input_ids')
collapsed_embeddings, attention_masks = self.wav2vec2_repr_model(input_ids)
print('collapsed_embeddings, attention_masks', collapsed_embeddings.device, attention_masks.device)
show_gpu(f'after wav2vec2')
del input_ids
show_gpu(f'delete input_ids')
torch.cuda.empty_cache()
show_gpu(f'empty cache')
gc.collect()
show_gpu(f'gc.collect()')
labels.to(device)
show_gpu(f'load mbart inputs')
output = self.mbart_model(inputs_embeds = collapsed_embeddings, attention_mask = attention_masks, labels = labels)
show_gpu(f'after mbart')
del collapsed_embeddings, attention_masks, labels
show_gpu(f'delete mbart inputs')
torch.cuda.empty_cache()
show_gpu(f'empty cache')
return output
```
```
def data_collator(data):
translation = [d['translation'] for d in data]
input_features = [{'input_values': get_inputs_values_from_audio_path(feature_extractor, d['path'])} for d in data]
#TODO: remove empty audio and its translation
wav2vec2_inputs = feature_extractor.pad(input_features,
padding=True,
max_length=None,
pad_to_multiple_of=None,
return_tensors="pt", )
batch={}
batch['inputs_embeds'], batch['attention_mask'] = wav2vec2_learn_repr(wav2vec2_inputs['input_values']) # size [batch_size, nr_sample, 1024]
with tokenizer.as_target_tokenizer():
batch['labels'] = tokenizer([d['translation']for d in data], return_tensors='pt', padding=True).input_ids
return batch
```
```
import torchaudio
resampler = torchaudio.transforms.Resample(orig_freq=48000, new_freq=16000)
def get_inputs_values_from_audio_path(processor, path: str):
signal, sr = torchaudio.load(main_path + '{}/clips/'.format(src_lang) + path)
signal = signal.squeeze(0)
d = (signal.shape[0]/sr)
resampler.orig_freq = sr
signal=resampler.forward(signal).numpy()
input_values = processor(signal, sampling_rate=resampler.new_freq).input_values
return input_values.tolist()[0]
```
```
import gc
import subprocess
def show_gpu(msg):
"""
ref: https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4
"""
def query(field):
return(subprocess.check_output(
['nvidia-smi', f'--query-gpu={field}',
'--format=csv,nounits,noheader'],
encoding='utf-8'))
def to_int(result):
return int(result.strip().split('\n')[0])
used = to_int(query('memory.used'))
total = to_int(query('memory.total'))
pct = used/total
print('\n' + msg, f'{100*pct:2.1f}% ({used} out of {total})')
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Here is the GPU usage history from step 7.
**At start 86.9% (14149 out of 16280)**
empty cache 54.3% (8835 out of 16280)
gc.collect() 54.3% (8835 out of 16280)
load input_ids 54.3% (8835 out of 16280)
In wav2vec2_learn_repr before del 56.7% (9231 out of 16280)
In wav2vec2_learn_repr after del 56.7% (9231 out of 16280)
In wav2vec2_learn_repr empty cache 56.7% (9229 out of 16280)
In wav2vec2_learn_repr gc.collect() 56.7% (9229 out of 16280)
collapsed_embeddings, attention_masks cuda:0 cuda:0
after wav2vec2 56.7% (9229 out of 16280)
delete input_ids 56.7% (9229 out of 16280)
empty cache 56.7% (9229 out of 16280)
gc.collect() 56.7% (9229 out of 16280)
load mbart inputs 56.7% (9229 out of 16280)
after mbart 64.3% (10473 out of 16280)
delete mbart inputs 64.3% (10473 out of 16280)
empty cache 64.3% (10473 out of 16280)
**At start 85.5% (13925 out of 16280)**
empty cache 54.3% (8835 out of 16280)
gc.collect() 54.3% (8835 out of 16280)
load input_ids 54.3% (8835 out of 16280)
In wav2vec2_learn_repr before del 58.9% (9593 out of 16280)
In wav2vec2_learn_repr after del 58.9% (9593 out of 16280)
In wav2vec2_learn_repr empty cache 58.9% (9593 out of 16280)
In wav2vec2_learn_repr gc.collect() 58.9% (9593 out of 16280)
collapsed_embeddings, attention_masks cuda:0 cuda:0
after wav2vec2 58.9% (9593 out of 16280)
delete input_ids 58.9% (9593 out of 16280)
empty cache 58.9% (9593 out of 16280
gc.collect() 58.9% (9593 out of 16280)
load mbart inputs 58.9% (9593 out of 16280)
after mbart 66.7% (10853 out of 16280)
delete mbart inputs 66.7% (10853 out of 16280)
empty cache 66.7% (10853 out of 16280)
**At start 95.4% (15529 out of 16280)**
empty cache 94.6% (15393 out of 16280)
gc.collect() 94.6% (15393 out of 16280)
load input_ids 94.6% (15393 out of 16280)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.90 GiB total capacity; 14.96 GiB already allocated; 21.75 MiB free; 15.00 GiB reserved in total by PyTorch)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11753/comments | https://api.github.com/repos/huggingface/transformers/issues/11753/events | https://github.com/huggingface/transformers/pull/11753 | 893,701,831 | MDExOlB1bGxSZXF1ZXN0NjQ2MTczOTIx | 11,753 | Add Flax Examples and Cloud TPU README | {
"login": "avital",
"id": 37586,
"node_id": "MDQ6VXNlcjM3NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avital",
"html_url": "https://github.com/avital",
"followers_url": "https://api.github.com/users/avital/followers",
"following_url": "https://api.github.com/users/avital/following{/other_user}",
"gists_url": "https://api.github.com/users/avital/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avital/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avital/subscriptions",
"organizations_url": "https://api.github.com/users/avital/orgs",
"repos_url": "https://api.github.com/users/avital/repos",
"events_url": "https://api.github.com/users/avital/events{/privacy}",
"received_events_url": "https://api.github.com/users/avital/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patrickvonplaten "
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Adds a Flax examples README. Pretty bare for now, but will include a link to Cloud TPU instructions once they are up.
I hope my use of relative links works well, but looking for feedback. The main goal here is to have a canonical link we can point to. Perhaps later this should live on the proper docs page but I thought a README is a fine first step.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11753/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11753",
"html_url": "https://github.com/huggingface/transformers/pull/11753",
"diff_url": "https://github.com/huggingface/transformers/pull/11753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11753.patch",
"merged_at": 1621356316000
} |
https://api.github.com/repos/huggingface/transformers/issues/11752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11752/comments | https://api.github.com/repos/huggingface/transformers/issues/11752/events | https://github.com/huggingface/transformers/pull/11752 | 893,641,850 | MDExOlB1bGxSZXF1ZXN0NjQ2MTIzODA0 | 11,752 | Fixed: Better names for nlp variables in pipelines' tests and docs. | {
"login": "01-vyom",
"id": 46242526,
"node_id": "MDQ6VXNlcjQ2MjQyNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/46242526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/01-vyom",
"html_url": "https://github.com/01-vyom",
"followers_url": "https://api.github.com/users/01-vyom/followers",
"following_url": "https://api.github.com/users/01-vyom/following{/other_user}",
"gists_url": "https://api.github.com/users/01-vyom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/01-vyom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/01-vyom/subscriptions",
"organizations_url": "https://api.github.com/users/01-vyom/orgs",
"repos_url": "https://api.github.com/users/01-vyom/repos",
"events_url": "https://api.github.com/users/01-vyom/events{/privacy}",
"received_events_url": "https://api.github.com/users/01-vyom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you take care of the merge conflicts and we should be good to merge? Thanks!",
"Thanks a lot for this !"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Fixes #9455
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11752/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11752",
"html_url": "https://github.com/huggingface/transformers/pull/11752",
"diff_url": "https://github.com/huggingface/transformers/pull/11752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11752.patch",
"merged_at": 1621345649000
} |
https://api.github.com/repos/huggingface/transformers/issues/11751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11751/comments | https://api.github.com/repos/huggingface/transformers/issues/11751/events | https://github.com/huggingface/transformers/issues/11751 | 893,599,150 | MDU6SXNzdWU4OTM1OTkxNTA= | 11,751 | parallelize and deparallelize method for GPT-Neo series model | {
"login": "Ankit-Dhankhar",
"id": 25135844,
"node_id": "MDQ6VXNlcjI1MTM1ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25135844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ankit-Dhankhar",
"html_url": "https://github.com/Ankit-Dhankhar",
"followers_url": "https://api.github.com/users/Ankit-Dhankhar/followers",
"following_url": "https://api.github.com/users/Ankit-Dhankhar/following{/other_user}",
"gists_url": "https://api.github.com/users/Ankit-Dhankhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ankit-Dhankhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ankit-Dhankhar/subscriptions",
"organizations_url": "https://api.github.com/users/Ankit-Dhankhar/orgs",
"repos_url": "https://api.github.com/users/Ankit-Dhankhar/repos",
"events_url": "https://api.github.com/users/Ankit-Dhankhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ankit-Dhankhar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is answered in #11054.\r\n\r\n(I'm in a similar situation as you. I'm just going to go with the suggestion and use DeepSpeed instead of model parallelism.)",
"Thanks, didn't saw that. Parallelism notes are also awesome."
] | 1,621 | 1,621 | 1,621 | NONE | null | # 🚀 Feature request
Parallelize and deparallelize methods for distribution of attention modules across multiple GPUs.
## Motivation
Finetuning GPT Neo 2.7B model on 12 GB GPU gives out of memory error. Having a parallelize method would allow us to train that model by splitting attention modules across multiple GPUs of smaller VRAM.
## Your contribution
Considering [this line](https://github.com/huggingface/transformers/blob/daf0d6a97bb0225a2571a2612b8285e2c3913992/src/transformers/models/gpt2/modeling_gpt2.py#L522) in GPT2 code and the absence of doc for parallelize method in [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2model), wanted to know if these methods are still supported. If not, what is the recommended method for fine-tuning large transformer models like GPT-Neo?
If they are still supported, I can take up this task and submit PR for both methods as well as documentation fix.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11750/comments | https://api.github.com/repos/huggingface/transformers/issues/11750/events | https://github.com/huggingface/transformers/pull/11750 | 893,595,881 | MDExOlB1bGxSZXF1ZXN0NjQ2MDgzNzAw | 11,750 | Flax BERT fix token type init | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Token type ids are 0 by default not 1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11750/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11750",
"html_url": "https://github.com/huggingface/transformers/pull/11750",
"diff_url": "https://github.com/huggingface/transformers/pull/11750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11750.patch",
"merged_at": 1621277674000
} |
https://api.github.com/repos/huggingface/transformers/issues/11749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11749/comments | https://api.github.com/repos/huggingface/transformers/issues/11749/events | https://github.com/huggingface/transformers/issues/11749 | 893,524,955 | MDU6SXNzdWU4OTM1MjQ5NTU= | 11,749 | [deepspeed] supporting `--adafactor` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,621 | 1,624 | 1,624 | CONTRIBUTOR | null | It was flagged that in this example https://github.com/huggingface/transformers/issues/11044 `--adafactor` is used, but Deepspeed doesn't get it passed since the DS config's optimizer overrides it. So need to sort it out. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11749/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11748/comments | https://api.github.com/repos/huggingface/transformers/issues/11748/events | https://github.com/huggingface/transformers/pull/11748 | 893,495,320 | MDExOlB1bGxSZXF1ZXN0NjQ2MDAwNTU2 | 11,748 | Fix checkpoint deletion | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
As pointed out on the [forums](https://discuss.huggingface.co/t/checkpoint-missing-optimizer-pt-how-to-resume/6138) there is a problem in the way checkpoints are deleted currently when `save_total_limit` is set and `load_best_model_at_end` is True. Since the best checkpoint is switched with the last checkpoint, we end up deleting the last checkpoint instead of the oldest available one.
This PR fixes this issue and adds tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11748",
"html_url": "https://github.com/huggingface/transformers/pull/11748",
"diff_url": "https://github.com/huggingface/transformers/pull/11748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11748.patch",
"merged_at": 1621338159000
} |
https://api.github.com/repos/huggingface/transformers/issues/11747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11747/comments | https://api.github.com/repos/huggingface/transformers/issues/11747/events | https://github.com/huggingface/transformers/issues/11747 | 893,415,968 | MDU6SXNzdWU4OTM0MTU5Njg= | 11,747 | mbart-large-cc25 tokenization_utils_fast.py TypeError | {
"login": "lysa-n",
"id": 46386052,
"node_id": "MDQ6VXNlcjQ2Mzg2MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/46386052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lysa-n",
"html_url": "https://github.com/lysa-n",
"followers_url": "https://api.github.com/users/lysa-n/followers",
"following_url": "https://api.github.com/users/lysa-n/following{/other_user}",
"gists_url": "https://api.github.com/users/lysa-n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lysa-n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lysa-n/subscriptions",
"organizations_url": "https://api.github.com/users/lysa-n/orgs",
"repos_url": "https://api.github.com/users/lysa-n/repos",
"events_url": "https://api.github.com/users/lysa-n/events{/privacy}",
"received_events_url": "https://api.github.com/users/lysa-n/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @lysa-n,\r\nFor multilingual models you must define input language(src_lang) and target language(tgt_lang). Since you are using it for summarization for the Dutch language the src_lang and tgt_lang will be the same.\r\nThis should work:\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n \r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/mbart-large-cc25\", src_lang='nl_XX', tgt_lang='nl_XX')\r\n\r\nwith tokenizer.as_target_tokenizer():\r\n print(tokenizer([\"Hello, this one sentence\", \"This is another sentence.\"]))\r\n```\r\nNote: Please cross-check the Dutch language code",
"> Hi @lysa-n,\r\n> For multilingual models you must define input language(src_lang) and target language(tgt_lang). Since you are using it for summarization for the Dutch language the src_lang and tgt_lang will be the same.\r\n> This should work:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"facebook/mbart-large-cc25\", src_lang='nl_XX', tgt_lang='nl_XX')\r\n> \r\n> with tokenizer.as_target_tokenizer():\r\n> print(tokenizer([\"Hello, this one sentence\", \"This is another sentence.\"]))\r\n> ```\r\n> \r\n> Note: Please cross-check the Dutch language code\r\n\r\nHi @vishal-burman, \r\n\r\nThis seems to work. Thank you so much! "
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
Hi, I am trying to fine-tune a dutch summarization algorithm. I used the [following ](https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb) example notebook provided by huggingface.co.
To prepare the targets for the model, we need to tokenize them inside the as_target_tokenizer context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets. This is achieved by running the following code:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
with tokenizer.as_target_tokenizer():
print(tokenizer(["Hello, this one sentence", "This is another sentence."]))
```
However, I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-0fc6af9091da> in <module>()
----> 1 with tokenizer.as_target_tokenizer():
2 print(tokenizer(["Hello, this one sentence", "This is another sentence."]))
3
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in convert_ids_to_tokens(self, ids, skip_special_tokens)
293 tokens = []
294 for index in ids:
--> 295 index = int(index)
296 if skip_special_tokens and index in self.all_special_ids:
297 continue
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
How can I work around this TypeError? I am not a professional and this is actually the first time submitting any question at all.
Thanks in advance :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11746/comments | https://api.github.com/repos/huggingface/transformers/issues/11746/events | https://github.com/huggingface/transformers/pull/11746 | 893,360,318 | MDExOlB1bGxSZXF1ZXN0NjQ1ODg3NTA4 | 11,746 | Use new evaluation loop in TrainerQA | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
When writing the new evaluation loop, the code of the special `Trainer` or question answering was not updated, this PR fixes that.
Fixes #11721 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11746",
"html_url": "https://github.com/huggingface/transformers/pull/11746",
"diff_url": "https://github.com/huggingface/transformers/pull/11746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11746.patch",
"merged_at": 1621260613000
} |
https://api.github.com/repos/huggingface/transformers/issues/11745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11745/comments | https://api.github.com/repos/huggingface/transformers/issues/11745/events | https://github.com/huggingface/transformers/pull/11745 | 893,298,416 | MDExOlB1bGxSZXF1ZXN0NjQ1ODM1Njk1 | 11,745 | [Flax MLM] Refactor run mlm with optax | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11745",
"html_url": "https://github.com/huggingface/transformers/pull/11745",
"diff_url": "https://github.com/huggingface/transformers/pull/11745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11745.patch",
"merged_at": 1621422058000
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.