url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/12244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12244/comments | https://api.github.com/repos/huggingface/transformers/issues/12244/events | https://github.com/huggingface/transformers/issues/12244 | 924,570,528 | MDU6SXNzdWU5MjQ1NzA1Mjg= | 12,244 | RoFormerTokenizerFast has a wrong result when setting "return_offsets_mapping=True" | {
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@JaheimLee \r\n**uncomment** 取消注释 L44-L53\r\n##### this code slice normalized_string is too slow (6s) but test_alignement_methods can pass\r\n\r\nhttps://github.com/huggingface/transformers/blob/e43e11260ff3c0a1b3cb0f4f39782d71a51c0191/src/transformers/models/roformer/tokenization_utils.py#L43-L53\r\n\r\nIf we use this code. `offset_mapping` is true but it will take a lot of processing time.\r\n\r\n------------------------------------------------------------------------------------------------------------\r\nand **comment** 注释 L56-L63\r\n##### this code test_alignement_methods can't pass but fast (300ms)\r\nhttps://github.com/huggingface/transformers/blob/e43e11260ff3c0a1b3cb0f4f39782d71a51c0191/src/transformers/models/roformer/tokenization_utils.py#L55-L63\r\n\r\nIf we use this code. `offset_mapping` is wrong but it will take very little processing time.\r\n\r\n------------------------------------------------------------------------------------------------------------\r\n\r\nIf you use `char level` model , recommend you to use BertTokenizer. (the speed is very fast)\r\n\r\nAnd if you use `word level` model like `roformer_chinese_base`, recommend you to use RoFormerTokenizer. (if you don't care `speed` and want to get true `offset_mapping`, you should **uncomment** L44-L53 and **comment** L56-L63 in transformers/src/transformers/models/roformer/tokenization_utils.py)\r\n\r\n\r\n"
] | 1,623 | 1,625 | 1,625 | NONE | null | I use roformer_chinese_char_base model, so there is no word-level problem. RoFormerTokenizerFast has this bug, but BertTokenizerFast doesn't. Here is the code:
```python
In [1]: from transformers import RoFormerTokenizerFast, BertTokenizerFast
In [2]: path = '/data/pretrained_models/roformer_chinese_char_base'
In [3]: tokenizer_roformer = RoFormerTokenizerFast.from_pretrained(path, add_special_tokens=False, do_lower_case=True)
In [4]: tokenizer_bert = BertTokenizerFast.from_pretrained(path, add_special_tokens=False, do_lower_case=True)
In [5]: text = '收到真的很喜欢,真的是大爱,平常上个网打个游戏,查个东西都非常的好,网速也很快,真是淘到宝了'
In [6]: tokenizer_bert(text, return_offsets_mapping=True, add_special_tokens=False)
Out[6]: {'input_ids': [2684, 691, 4395, 4334, 2100, 1134, 3223, 7675, 4395, 4334, 2798, 1465, 3898, 7675, 1975, 1960, 198, 223, 5079, 2388, 223, 3602, 2349, 7675, 2982, 223, 214, 6017, 6657, 7160, 1960, 4334, 1506, 7675, 5079, 6552, 260, 2100, 2148, 7675, 4395, 2798, 3554, 691, 1698, 270], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16), (16, 17), (17, 18), (18, 19), (19, 20), (20, 21), (21, 22), (22, 23), (23, 24), (24, 25), (25, 26), (26, 27), (27, 28), (28, 29), (29, 30), (30, 31), (31, 32), (32, 33), (33, 34), (34, 35), (35, 36), (36, 37), (37, 38), (38, 39), (39, 40), (40, 41), (41, 42), (42, 43), (43, 44), (44, 45), (45, 46)]}
In [7]: tokenizer_roformer(text, return_offsets_mapping=True, add_special_tokens=False)
Out[7]: {'input_ids': [2684, 691, 4395, 4334, 2100, 1134, 3223, 7675, 4395, 4334, 2798, 1465, 3898, 7675, 1975, 1960, 198, 223, 5079, 2388, 223, 3602, 2349, 7675, 2982, 223, 214, 6017, 6657, 7160, 1960, 4334, 1506, 7675, 5079, 6552, 260, 2100, 2148, 7675, 4395, 2798, 3554, 691, 1698, 270], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)]}
```
As you can see, "offset_mapping" is wrong at Out[7].
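(Editorial illustration, not part of the original report: a correct `offset_mapping` maps every token back to a character span of `text`, so with the BERT output above each token's surface form can be recovered, which the constant `(0, 1)` spans in Out[7] cannot do. The sketch below reuses `tokenizer_bert` and `text` from the snippet above.)
```python
# with a correct offset_mapping, each (start, end) pair slices out one token's characters
enc = tokenizer_bert(text, return_offsets_mapping=True, add_special_tokens=False)
for start, end in enc["offset_mapping"]:
    print(text[start:end])  # one original character per char-level token
```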
@JunnYu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12244/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12243/comments | https://api.github.com/repos/huggingface/transformers/issues/12243/events | https://github.com/huggingface/transformers/pull/12243 | 924,550,286 | MDExOlB1bGxSZXF1ZXN0NjczMTczNjA3 | 12,243 | GPT-J | {
"login": "StellaAthena",
"id": 15899312,
"node_id": "MDQ6VXNlcjE1ODk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StellaAthena",
"html_url": "https://github.com/StellaAthena",
"followers_url": "https://api.github.com/users/StellaAthena/followers",
"following_url": "https://api.github.com/users/StellaAthena/following{/other_user}",
"gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions",
"organizations_url": "https://api.github.com/users/StellaAthena/orgs",
"repos_url": "https://api.github.com/users/StellaAthena/repos",
"events_url": "https://api.github.com/users/StellaAthena/events{/privacy}",
"received_events_url": "https://api.github.com/users/StellaAthena/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The main thing I'm uncertain about is how to handle unimplemented functionality. GPT-J uses the same tokenizer as GPT-2, so I removed the tokenizer definition. Is that correct, or no? Relatredly, there were many types of modeling that GPT-J was not designed for, and @finetuneanon's PR just deleted the boilerplate for them. Is this correct?",
"> Also, sorry to ask this again, but could we not modify generation in this PR, since it seems it's not related to GPT-J.\r\n\r\nDamn. It looks like I messed something up.... this was supposed to not include @finetuneanon's commits. I might close this and create a replacement PR with the correct commit history.",
"Mmm, I was wondering how this has been going. I would love to try a stable version of this!",
"Hey @sualehasif \r\n\r\nA stable version will be available in a week, stay tuned! ",
"> Damn. It looks like I messed something up.... this was supposed to not include @finetuneanon's commits. I might close this and create a replacement PR with the correct commit history.\r\n\r\n@StellaAthena any idea when would you be adding a new PR? We are also running some experiments so maybe we could help.",
"@mittalpatel \r\n\r\nI'm taking over the PR. But feel free to post your findings :) ",
"In #12106 @finetuneanon reports the results of some evaluations of the ported model on EleutherAI’s evaluation harness. The numbers were a little lower than what we had found using the original implementation, but both he and I felt this was likely due to FP16. I can now confirm that the ported model achieves the same performance as the original model when evaluated in FP32. The absolute difference in performance on lambada, HellaSwag, PiQA, and Winogrande are all less than 0.5% when done in FP32",
"Cool, that's good to know.",
"@patil-suraj can you mark this as a draft, as it is not ready to merge in its current state?",
"> Hey @sualehasif\r\n> \r\n> A stable version will be available in a week, stay tuned!\r\n\r\nHi, @patil-suraj thanks so much for working on this. Is there any progress on integration to huggingface transformers?",
"Just chiming in here: All of the .py files with dashes will not be importable :) So I'd suggest changing `gpt-j` to `gptj` or `gpt_j` in the .py file path names.",
"Any updates on this and any help required?",
"@patil-suraj What is the status of this?\r\nI would really like to use this model, and I don't feel like messing around with forks to get this to work.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I would still love to see this happen.",
"> I would still love to see this happen.\r\n\r\nThis is going to happen any day now, see #13022"
] | 1,623 | 1,630 | 1,630 | CONTRIBUTOR | null | **This is a work-in-progress focused on reconciling styles and may break without warning. If you want to use GPT-J with the HF interface, you can do that by installing transformers from [here](https://github.com/finetuneanon/transformers/tree/gpt-j). The purpose of this PR is to make progress on converting that repo to the style HF prefers.**
# What does this PR do?
This is my attempt to reconcile #12106 with the HF style guidelines as described by @sgugger. The original PR was created by @finetuneanon and @kurumuz.
This implementation has not been thoroughly tested yet, but I wanted to get something out as a starting point for continuing the conversation before too much momentum is lost. I need to reread HF documentation a bit more to figure out the things that are wrong, or hopefully one of you lovely people can help me out.
For comparison, a frozen version of the code in the original PR can be found [here](https://github.com/finetuneanon/transformers/tree/c0dcc7fad45e9ac07cdff525cbe7fb0ff76a1304).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link](https://github.com/huggingface/transformers/pull/12106)
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12243/reactions",
"total_count": 30,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 18,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12243/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12243",
"html_url": "https://github.com/huggingface/transformers/pull/12243",
"diff_url": "https://github.com/huggingface/transformers/pull/12243.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12243.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12242/comments | https://api.github.com/repos/huggingface/transformers/issues/12242/events | https://github.com/huggingface/transformers/issues/12242 | 924,499,599 | MDU6SXNzdWU5MjQ0OTk1OTk= | 12,242 | Can't load tokenizer for 'imxly/t5-pegasus'. | {
"login": "xiaojinglu",
"id": 12478408,
"node_id": "MDQ6VXNlcjEyNDc4NDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/12478408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaojinglu",
"html_url": "https://github.com/xiaojinglu",
"followers_url": "https://api.github.com/users/xiaojinglu/followers",
"following_url": "https://api.github.com/users/xiaojinglu/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaojinglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaojinglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaojinglu/subscriptions",
"organizations_url": "https://api.github.com/users/xiaojinglu/orgs",
"repos_url": "https://api.github.com/users/xiaojinglu/repos",
"events_url": "https://api.github.com/users/xiaojinglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaojinglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tokenizer specified for that model is `T5Tokenizer`, which is a sentencepiece-based tokenizer. However, the tokenizer file is `vocab.txt`, which is a BERT-like (WordPiece) file.\r\n\r\nThe T5 tokenizer expects as `spiece.model` file generated by the SentencePiece library, or a `tokenizer.json` file generated by the Tokenizers library.",
"> The tokenizer specified for that model is `T5Tokenizer`, which is a sentencepiece-based tokenizer. However, the tokenizer file is `vocab.txt`, which is a BERT-like (WordPiece) file.\r\n> \r\n> The T5 tokenizer expects as `spiece.model` file generated by the SentencePiece library, or a `tokenizer.json` file generated by the Tokenizers library.\r\n\r\nso is it the fault of the uploaded model ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Can't load tokenizer for './models/t5-pegasus'\r\n\r\n下载后使用也存在这个问题"
] | 1,623 | 1,665 | 1,627 | NONE | null | When I use t5-pegasus on huggingface.co, I get the following error. What's the problem?
<img width="1589" alt="截屏2021-06-18 上午11 10 34" src="https://user-images.githubusercontent.com/12478408/122500741-e76d4f00-d025-11eb-839c-96f259c89af0.png">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12242/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12241/comments | https://api.github.com/repos/huggingface/transformers/issues/12241/events | https://github.com/huggingface/transformers/issues/12241 | 924,487,431 | MDU6SXNzdWU5MjQ0ODc0MzE= | 12,241 | Modify BERT encoder layers? | {
"login": "calusbr",
"id": 25322394,
"node_id": "MDQ6VXNlcjI1MzIyMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/25322394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calusbr",
"html_url": "https://github.com/calusbr",
"followers_url": "https://api.github.com/users/calusbr/followers",
"following_url": "https://api.github.com/users/calusbr/following{/other_user}",
"gists_url": "https://api.github.com/users/calusbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calusbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calusbr/subscriptions",
"organizations_url": "https://api.github.com/users/calusbr/orgs",
"repos_url": "https://api.github.com/users/calusbr/repos",
"events_url": "https://api.github.com/users/calusbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/calusbr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,623 | 1,625 | null | NONE | null | Hello, I would like to modify the encoder layers of the BERT model to insert FC and ReLU layers.
This idea allows you to reproduce the use of [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507).
How can I use an `nn.Module` class to handle the encoder outputs?
Example:
```
import torch.nn as nn
from transformers import BertModel
class CustomBERTModel(nn.Module):
    def __init__(self):
        super(CustomBERTModel, self).__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # add your additional layers here, for example a dropout layer followed by a linear classification head
        self.dropout = nn.Dropout(0.3)
        self.out = nn.Linear(768, 2)

    def forward(self, ids, mask, token_type_ids):
        # in transformers v4+ the model returns a ModelOutput object by default,
        # so we read the hidden states off it instead of tuple-unpacking
        outputs = self.bert(
            ids,
            attention_mask=mask,
            token_type_ids=token_type_ids
        )
        sequence_output = outputs.last_hidden_state
        # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)
        sequence_output = self.dropout(sequence_output)
        # next, we apply the linear layer. The linear layer (which applies a linear transformation)
        # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to
        # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token
        # so the logits are of shape (batch_size, sequence_length, 2)
        logits = self.out(sequence_output)
        return logits
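# --- Editorial sketch, not part of the original question ---
# One way to realize the Squeeze-and-Excitation idea on top of the encoder
# output is to gate the hidden channels with an FC -> ReLU -> FC -> sigmoid
# bottleneck. The SEBlock name, the mean-pool "squeeze" over the sequence
# dimension, and the reduction=16 ratio are illustrative assumptions here.
import torch

class SEBlock(nn.Module):
    def __init__(self, hidden_size=768, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, hidden_size // reduction)
        self.fc2 = nn.Linear(hidden_size // reduction, hidden_size)

    def forward(self, hidden_states):
        # hidden_states: (batch_size, seq_len, hidden_size)
        pooled = hidden_states.mean(dim=1)  # squeeze over the sequence dimension
        gates = torch.sigmoid(self.fc2(torch.relu(self.fc1(pooled))))  # excitation
        return hidden_states * gates.unsqueeze(1)  # rescale each token's channels

# An instance of SEBlock could then be applied to sequence_output inside
# CustomBERTModel.forward, before the dropout and classification head.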
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12241/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12240/comments | https://api.github.com/repos/huggingface/transformers/issues/12240/events | https://github.com/huggingface/transformers/pull/12240 | 924,441,685 | MDExOlB1bGxSZXF1ZXN0NjczMDgyNzIx | 12,240 | Depreciate pythonic Mish and support PyTorch 1.9 version of Mish | {
"login": "digantamisra98",
"id": 34192716,
"node_id": "MDQ6VXNlcjM0MTkyNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/34192716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/digantamisra98",
"html_url": "https://github.com/digantamisra98",
"followers_url": "https://api.github.com/users/digantamisra98/followers",
"following_url": "https://api.github.com/users/digantamisra98/following{/other_user}",
"gists_url": "https://api.github.com/users/digantamisra98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/digantamisra98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/digantamisra98/subscriptions",
"organizations_url": "https://api.github.com/users/digantamisra98/orgs",
"repos_url": "https://api.github.com/users/digantamisra98/repos",
"events_url": "https://api.github.com/users/digantamisra98/events{/privacy}",
"received_events_url": "https://api.github.com/users/digantamisra98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Done"
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR removes the old pure pythonic version of [Mish](https://arxiv.org/abs/1908.08681) and now enables support for the [PyTorch 1.9 Mish version](https://pytorch.org/docs/stable/generated/torch.nn.Mish.html#torch.nn.Mish). It also removes isolated references of the function where it is not used.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12240/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12240",
"html_url": "https://github.com/huggingface/transformers/pull/12240",
"diff_url": "https://github.com/huggingface/transformers/pull/12240.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12240.patch",
"merged_at": 1624022026000
} |
https://api.github.com/repos/huggingface/transformers/issues/12239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12239/comments | https://api.github.com/repos/huggingface/transformers/issues/12239/events | https://github.com/huggingface/transformers/pull/12239 | 924,412,382 | MDExOlB1bGxSZXF1ZXN0NjczMDU3ODk2 | 12,239 | [t5 doc] make the example work out of the box | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR expands the training example to include the correct model type for the example to work, e.g. with `T5Model` this example will break.
Fixes: https://github.com/huggingface/transformers/issues/12238
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12239/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12239",
"html_url": "https://github.com/huggingface/transformers/pull/12239",
"diff_url": "https://github.com/huggingface/transformers/pull/12239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12239.patch",
"merged_at": 1624035620000
} |
https://api.github.com/repos/huggingface/transformers/issues/12238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12238/comments | https://api.github.com/repos/huggingface/transformers/issues/12238/events | https://github.com/huggingface/transformers/issues/12238 | 924,408,103 | MDU6SXNzdWU5MjQ0MDgxMDM= | 12,238 | [doc] t5 incomplete example | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"A possible approach proposed here: https://github.com/huggingface/transformers/pull/12239"
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | https://huggingface.co/transformers/model_doc/t5.html#training has:
```
input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```
which is broken unless the right model type is used. And when none is specified, it typically presumes `AutoModel`, which gives: `TypeError: forward() got an unexpected keyword argument 'labels'`
So one probably needs to use an explicit `T5ForConditionalGeneration`, and then it works, as in the sketch below.
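For reference, a version of the example that runs end to end (mirroring the fix proposed in #12239) might look like this; the `t5-small` checkpoint and the instantiation lines are added here for illustration:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids

# works because T5ForConditionalGeneration, unlike the bare T5Model, accepts `labels`
loss = model(input_ids=input_ids, labels=labels).loss
```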
@patrickvonplaten, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12238/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12237/comments | https://api.github.com/repos/huggingface/transformers/issues/12237/events | https://github.com/huggingface/transformers/issues/12237 | 924,405,169 | MDU6SXNzdWU5MjQ0MDUxNjk= | 12,237 | BART fine-tuning doesn't work and produces a fixed output for each input | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, \r\n\r\nfollowing up on this @patrickvonplaten and @patil-suraj ",
"Hey @sajastu,\r\n\r\nIt's pretty difficult for us to debug the script - from a first look, the hyperparameter settings look good to me. \r\nAn effective batch size of 8 (4 * 2) seems rather small to me, but you can see from your loss curves whether 8 is enough I guess.\r\n\r\nAlso note that the x-sum dataset has some rather special distribution which is not really the same as reddit data IMO. X-sum is extreme summarization and has very dense sentences as summaries. Not sure if this works well with reddit.",
"He @patrickvonplaten, \r\n\r\nThanks for your response!\r\n\r\nThe problem that I'm facing is that: when I'm running the generation phase of `facebook/bart-large-xsum` (i.e., without fine-tuning), I'm getting comparably high scores (22.43/ 7.21 / 17.65); however and interestingly, when I finetune it for a few training steps (let's say 10 training steps), and then run the fine-tuned model on the same test set, the scores get much much lower (15.32 / 2.35 / 9.78). This in fact doesn't make sense to me. Theoretically, I expect the scores to stay near to the main model, if not surpassing it, especially when it has been trained for a very few steps... \r\n\r\nDo you have any thoughts on this? is this behaviour expectable? \r\n\r\nAlso, do you think that the model is overfitting, or get stuck in a local minimum that it's producing the same one output regardless of the input that it gets?",
"I struggle at the same point - the output of the generate-method in a fine-tuned BART seems to be independent of the input.\r\n\r\nInterestingly, this holds only for the generate method. If I call the fine-tuned model directly, as with\r\n`tokenizer.batch_decode(torch.argmax(model(input_ids = input_ids)[0], axis=-1))`\r\nthe output is perfectly related to the input, hence, it differs from input to input. Therefore, I assume there is a bug in the BART.generate()-method, or to be more precise with my assumption, in the specific `modeling_tf_bart.prepare_inputs_for_generation()`. I tried to verify my assumption ( I guess fine-tuning freezes somehow the past-/-cache-value which disconnects the output from the input), but I don't find the point which triggers this special generate-method-behavouir.",
"Hi @phhei,\r\n\r\nI think the code is **probably** correct. Or if any flaw, it must exist in the tokenization module, since I'm not getting this \"fixed\" output on other datasets that I've been using to fine-tune BART. For my special case here, I changed the dataset (i.e., reddit_tifu), ran the same code, and finally able to get it working.\r\n\r\n@patrickvonplaten might be of some help here.",
"Hi @sajastu,\r\n\r\nthanks for your reply. However, if the tokenization-module would cause this behavior, then ` tokenizer.batch_decode(torch.argmax(model(input_ids = input_ids)[0], axis=-1))` (in which input_ids is generated by the tokenizer.encode-method - the same variable I use for the BART.generate(input_ids)-method) would also output always the same. I already investigated the raw tensor output of both approaches, and there is the same: the generate(input_ids)-method always produces the same tensor, `torch.argmax(model(input_ids = input_ids)[0], axis=-1)` depended on the input_ids.\r\n\r\nI'm asking myself why changing the dataset (without anything else) would solve this issue. In my case, I have a non-huggingface-dataset, preprocessed by tokenizer-calls, so a bug in a huggingface-dataset is therefore not the point, too.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,628 | 1,628 | NONE | null | I'm getting stuck fine-tuning the BART model on the reddit-tifu dataset. When I use a pre-trained BART model, for example `bart-large-xsum`, without fine-tuning, it works fine and produces reasonably sensible output for each input; but once I start fine-tuning, it starts to predict irrelevant text for each given input, as if it had overfit to the training data. Overfitting doesn't seem plausible to me, though, as the dataset has over 30k training samples. I'm wondering if there's a problem with my bash script or in the fine-tuning scripts, since I've been using the instructions at https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization. Following is my bash script for fine-tuning the `bart-large-xsum` model.
```
DS_BASE_DIR=/home/code-base/user_space/packages/summarization_datasets/reddit_tifu/
python -m torch.distributed.launch --nproc_per_node=4 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path facebook/bart-large-xsum \
--do_train \
--do_eval \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/val.json \
--output_dir /home/code-base/user_space/saved_models/bart/reddit-1024-tuned \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 \
--overwrite_output_dir \
--predict_with_generate \
--num_train_epochs 15 \
--text_column text \
--summary_column summary \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--adam_beta2 0.98 \
--warmup_steps 5000
```
I have used these hyperparams to match the performance reported in https://arxiv.org/pdf/2008.03156v1.pdf
Outputs, after 1 training epoch:
input:
> so this happened when i was in like third grade but it continued to bother me throughout high school. i had actually forgotten about this till i read one of the other posts on here. the original fuck up happened when as i said we were playing football in the backyard. his backyard was surrounded by a metal fence. we had decided to use the top of a hill right before the fence should be plenty of leeway before the fence right? wrong. i was running in for a touchdown had just gotten past my friend for the touchdown when he jumped and tangled up my legs. i ended up sliding down the hill and fell tooth first into his fence. somehow even though 2/3rds of my tooth was in the fence i managed to avoid all nerves and felt no pain. i came up laughing so hard i was crying which i think made it worse because my friend goes dude your tooth is missing. which of course made me laugh even harder. his mom hears the commotion and comes out sees my missing tooth and me crying and starts freaking out. she partially blamed herself because she's the one that sent us out because before we were just inside playing video games. my dad comes to pick me up she apologizes profusely and i still didn't think it was a big deal. this was on a saturday so we eventually get the dentist to come in on sunday, that place was awesome, to fix the tooth. since i'm so young they only put a temporary cap on. now i also played hockey, soccer and later lacrosse. of course the temporary cap didn't last all that long and came off. this happened several times and there were hockey games i'd start with the cap on lose it halfway through and would confuse everyone. i always had fun with this but it was getting old, and expensive, so eventually the dentist put on a permanent cap. haven't had a problem since. if you guys want i'll see if i can find the young picture of me without the tooth. edit: found it
fine-tuned bart prediction:
> tried to impress a girl, ended up getting kicked out of the house by her dad and her mom for being late to a party.
input:
> hi reddit. typical disclaimer, this didn't actually happen today. it happened a couple months ago, but it's still impacting me today. my kids are typical kids, they don't pick up their stuff and they get scolded for it. i was getting pretty sick of seeing their pokémon cards lying all over the place because to me it looked like all of the money that came out of my pocket getting slowly turned into trash. my wife on the other hand went crazy because of the mess. one night it all came to a head. after weeks of ineffectually threatening to take the stupid cards away if they left them all over the floor, and my wife demanding that they clean the room before bedtime, she lost it when going in to tuck them in. i got tired of hearing it, so i went in, saw all of the expensive pokémon cards strewn about and lost it too. i immediately started grabbing up all the cards and piling them into boxes then left the room with both arms full. i went stomping angrily through the living room to put them away in the front bedroom that i use for storage. that's when the f u happened. earlier that evening, my older child had noticed my younger child smearing chapstick all over a section of wood laminate flooring...
fine-tuned bart prediction:
> tried to impress a girl, ended up getting kicked out of the house by her dad and her mom. i'm a dumbass.
## Environment info
- `transformers` version: 4.7.0 dev0
- Platform: Linux Ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): --
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@patrickvonplaten, @patil-suraj, @sgugger
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
reddit_tifu dataset
## To reproduce
Steps to reproduce the behavior:
1. Running the above script which is taken from the official example
2. After a few training steps, the model learns to predict a specific fixed output for each given input text.
## Expected behavior
After fine-tuning for a few steps/epochs, I expect the model to learn to generate at least different outputs for varying input texts.
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12237/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12236/comments | https://api.github.com/repos/huggingface/transformers/issues/12236/events | https://github.com/huggingface/transformers/pull/12236 | 924,367,728 | MDExOlB1bGxSZXF1ZXN0NjczMDE4Njgw | 12,236 | [Flax] Add FlaxMBart | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj Thank you a lot for your suggestions. I fixed the order of attention and normalization layers and some other minor bugs. Also added some additional copy statements.\r\n\r\nI also changed the `shift_tokens_right` method as this one looks to be different for the MBart models as they don't have a single `decoder_start_token_id` in contrast to other Bart-like models. => This difference of having no `decoder_start_token_id`, however, currently leads to some issues within the `generate` method. (I'll try to have a look what can be done here)\r\n",
"@stancld I pushed a couple of commits to add the `layer_norm` in encoder and decoder. Now, all slow tests are passing.\r\n@patrickvonplaten could you please take a final look?"
] | 1,623 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a Flax implementation of MBart.
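(Editorial sketch relating to the `shift_tokens_right` discussion in the comments: MBart has no single `decoder_start_token_id` because the target-language code sits at the last non-pad position of the labels, so the shift wraps that token to position 0. A minimal NumPy version, assuming right-padded inputs:)
```python
import numpy as np

def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int) -> np.ndarray:
    shifted = np.full_like(input_ids, pad_token_id)
    shifted[:, 1:] = input_ids[:, :-1]
    # the language code is the last non-pad token of each label sequence
    last_non_pad = (input_ids != pad_token_id).sum(axis=1) - 1
    shifted[:, 0] = input_ids[np.arange(input_ids.shape[0]), last_non_pad]
    return shifted
```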
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12236/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12236/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12236",
"html_url": "https://github.com/huggingface/transformers/pull/12236",
"diff_url": "https://github.com/huggingface/transformers/pull/12236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12236.patch",
"merged_at": 1625640638000
} |
https://api.github.com/repos/huggingface/transformers/issues/12235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12235/comments | https://api.github.com/repos/huggingface/transformers/issues/12235/events | https://github.com/huggingface/transformers/issues/12235 | 924,305,889 | MDU6SXNzdWU5MjQzMDU4ODk= | 12,235 | can predict_with_generate (do_eval) work with sharded_ddp fairscale in 4.6.1+? | {
"login": "gyin94",
"id": 67664443,
"node_id": "MDQ6VXNlcjY3NjY0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyin94",
"html_url": "https://github.com/gyin94",
"followers_url": "https://api.github.com/users/gyin94/followers",
"following_url": "https://api.github.com/users/gyin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyin94/subscriptions",
"organizations_url": "https://api.github.com/users/gyin94/orgs",
"repos_url": "https://api.github.com/users/gyin94/repos",
"events_url": "https://api.github.com/users/gyin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyin94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can I ask whether we have any progress on the model prediction (text generation) during the training with fairscale? Thanks/ cc @stas00 ",
"most likely you'd want to ask @sgugger as I he did the fairscale integration. \r\n\r\nyou can ask me about the deepspeed integration if you try that instead.",
"`sharded_ddp` does not work for evaluation, as is mentioned in the documentation. I have mentioned that on the fairscale repository but did not get any update for the authors (same for the blocking aprt of Zero offload via fairscale) so I suggest you use DeepSpeed instead, where we have a much better support from the team at Microsoft.",
"and if fairscale solves the problem on their side and the work resumes in this direction, the key to making generate work might have to include the enabling `synced_gpus` here for fairscale (for zero3-like fairscale features that is):\r\n\r\nhttps://github.com/huggingface/transformers/blob/fb65f65ea6175036f0cc8318145853e9c833f914/src/transformers/trainer_seq2seq.py#L164",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,628 | 1,628 | NONE | null | In 4.5.0, sharded_ddp won't work with predict_with_generate in seq2seq or clm model training during the eval step. I wonder whether it can work in the latest version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12235/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12234/comments | https://api.github.com/repos/huggingface/transformers/issues/12234/events | https://github.com/huggingface/transformers/issues/12234 | 924,247,702 | MDU6SXNzdWU5MjQyNDc3MDI= | 12,234 | Reconstructing Tokens from Bert Embedding? | {
"login": "patdflynn",
"id": 44388720,
"node_id": "MDQ6VXNlcjQ0Mzg4NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/44388720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patdflynn",
"html_url": "https://github.com/patdflynn",
"followers_url": "https://api.github.com/users/patdflynn/followers",
"following_url": "https://api.github.com/users/patdflynn/following{/other_user}",
"gists_url": "https://api.github.com/users/patdflynn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patdflynn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patdflynn/subscriptions",
"organizations_url": "https://api.github.com/users/patdflynn/orgs",
"repos_url": "https://api.github.com/users/patdflynn/repos",
"events_url": "https://api.github.com/users/patdflynn/events{/privacy}",
"received_events_url": "https://api.github.com/users/patdflynn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,623 | 1,624 | 1,624 | NONE | null | Sorry if this was posted before, but I couldn't find it after a few searches.
My goal is to take a sentence, run it through BERT, perturb the contextualized embeddings from the output of BERT, and reconstruct the sentence text.
I'm currently using 'bert-base-uncased' as my tokenizer and model, and a perturbed output torch tensor with each token embedding of size 768. How do I reconstruct the sentence text?
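(Editorial sketch, not an answer from the thread: a common heuristic is to map each perturbed 768-d vector back to the nearest row of the model's input embedding matrix. This is only approximate, since contextualized outputs do not live exactly in the input-embedding space, and the dot-product similarity below is an assumption; cosine similarity would work similarly.)
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

def nearest_tokens(perturbed):
    # perturbed: (seq_len, 768) tensor of perturbed contextualized embeddings
    with torch.no_grad():
        emb = model.get_input_embeddings().weight  # (vocab_size, 768)
        sims = perturbed @ emb.T                   # (seq_len, vocab_size)
        ids = sims.argmax(dim=-1)
    return tokenizer.convert_ids_to_tokens(ids.tolist())
```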
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12234/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12233/comments | https://api.github.com/repos/huggingface/transformers/issues/12233/events | https://github.com/huggingface/transformers/pull/12233 | 924,230,962 | MDExOlB1bGxSZXF1ZXN0NjcyODk4NzA3 | 12,233 | Add FlaxBigBird QuestionAnswering script | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten, this PR is ready for review & merge (tested all the code after porting here).\r\n\r\nFailing test is unrelated to this PR.",
"Awesome merging for the sprint - we'll fix bugs on the go as it's under `research_projects` :-)"
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a flax-bigbird QA script for the `natural-questions` dataset.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12233/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12233",
"html_url": "https://github.com/huggingface/transformers/pull/12233",
"diff_url": "https://github.com/huggingface/transformers/pull/12233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12233.patch",
"merged_at": 1624640749000
} |
https://api.github.com/repos/huggingface/transformers/issues/12232 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12232/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12232/comments | https://api.github.com/repos/huggingface/transformers/issues/12232/events | https://github.com/huggingface/transformers/issues/12232 | 924,210,763 | MDU6SXNzdWU5MjQyMTA3NjM= | 12,232 | RobertaForMaskedLM.from_pretrained throwing some weights not initialized error when loading same model type | {
"login": "dhuruvasaditya",
"id": 18754465,
"node_id": "MDQ6VXNlcjE4NzU0NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18754465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhuruvasaditya",
"html_url": "https://github.com/dhuruvasaditya",
"followers_url": "https://api.github.com/users/dhuruvasaditya/followers",
"following_url": "https://api.github.com/users/dhuruvasaditya/following{/other_user}",
"gists_url": "https://api.github.com/users/dhuruvasaditya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhuruvasaditya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhuruvasaditya/subscriptions",
"organizations_url": "https://api.github.com/users/dhuruvasaditya/orgs",
"repos_url": "https://api.github.com/users/dhuruvasaditya/repos",
"events_url": "https://api.github.com/users/dhuruvasaditya/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhuruvasaditya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
I am using a pretrained RobertaForMaskedLM . When I try to load the model I get the following error:
Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at mldmm/GlassBERTa and are newly initialized: ['lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.decoder.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The config of the model is as follows:
```
{
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dim": 96,
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "roberta",
"n_heads": 3,
"n_layers": 3,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 541
}
```
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoModelForMaskedLM,AutoTokenizer
tok = AutoTokenizer.from_pretrained('mldmm/GlassBERTa')
mod = AutoModelForMaskedLM.from_pretrained('mldmm/GlassBERTa')
```
or
```
from transformers import RobertaForMaskedLM,AutoTokenizer
tok = AutoTokenizer.from_pretrained('mldmm/GlassBERTa')
mod = RobertaForMaskedLM.from_pretrained('mldmm/GlassBERTa')
```
The same goes for many other hosted pre-trained models ('beomi/kcbert-base', 'hfl/chinese-bert-wwm-ext').
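(Editorial sketch: one way to check whether the checkpoint itself is missing the `lm_head` weights; the inspection approach below is an assumption about how to debug this, not something stated in the issue.)
```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id='mldmm/GlassBERTa', filename='pytorch_model.bin')
state_dict = torch.load(path, map_location='cpu')
# an empty list here would mean the LM head was never saved with the checkpoint
print([name for name in state_dict if 'lm_head' in name])
```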
## Expected behavior
Since I'm loading the same architecture, I expect a clean import without any errors; the newly initialized layers may lead to random output and can mess up the output of the fill-mask pipeline. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12232/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12231/comments | https://api.github.com/repos/huggingface/transformers/issues/12231/events | https://github.com/huggingface/transformers/issues/12231 | 924,171,779 | MDU6SXNzdWU5MjQxNzE3Nzk= | 12,231 | Batch inference runtime slows down for inputs with different length sentences | {
"login": "alexdauenhauer",
"id": 11903445,
"node_id": "MDQ6VXNlcjExOTAzNDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11903445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexdauenhauer",
"html_url": "https://github.com/alexdauenhauer",
"followers_url": "https://api.github.com/users/alexdauenhauer/followers",
"following_url": "https://api.github.com/users/alexdauenhauer/following{/other_user}",
"gists_url": "https://api.github.com/users/alexdauenhauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexdauenhauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexdauenhauer/subscriptions",
"organizations_url": "https://api.github.com/users/alexdauenhauer/orgs",
"repos_url": "https://api.github.com/users/alexdauenhauer/repos",
"events_url": "https://api.github.com/users/alexdauenhauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexdauenhauer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @NielsRogge as he might have an idea of what's going on with LUKE",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This has not been resolved as far as I know. Please do not close this issue",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"There seems to be something odd happening indeed. Will investigate this.\r\n\r\nAlso cc'ing @ikuyamada.",
"This may be related to the asynchronous execution on GPU. When adding `torch.cuda.synchronize(device)` after calling `model(**batch)`, the runtime was approximately consistent across batches on my local machine.\r\n\r\n```python\r\nfor i, batch in enumerate(tokenized_inputs):\r\n with torch.no_grad():\r\n start = time.time()\r\n batch.to(device)\r\n outputs = model(**batch)\r\n torch.cuda.synchronize(device)\r\n print(f\"runtime batch {i}: \", time.time() - start)\r\n```",
"thanks for the suggestion @ikuyamada ! I'll give it a shot",
"I tested with `torch.cuda.synchronize(device)` this isn't really the solution I want. I agree that it did seem to resolve the runtime issue, but it increased all runtimes to the longest runtime (now in the second example batch 0 and batch 1 both have execution time of ~0.1 s). This is the opposite of what I would hope to accomplish. The runtime of executing the first batch without `synchronize` is `0.03 s` so I am still not understanding why calling the exact same data a second time would result in a longer runtime. If this is because of async execution, can you please explain to me why?",
"@alexdauenhauer If I understand torch correctly, torch returns the result without completing the computation. When you use the result (e.g., `print(outputs)`), the *synchronization* happens and the result is computed. Therefore, the following code should give similar results to the code above.\r\n\r\n```python\r\nfor i, batch in enumerate(tokenized_inputs):\r\n with torch.no_grad():\r\n start = time.time()\r\n batch.to(device)\r\n outputs = model(**batch)\r\n print(outputs[0][0])\r\n print(f\"runtime batch {i}: \", time.time() - start)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,632 | 1,632 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten (sorry if wrong tags, this is for Luke model, but it is not listed)
## Information
Model I am using (Bert, XLNet ...): LukeForEntityPairClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. generate batched inputs for the LukeTokenizer with **identical** sentences in each batch (i.e. no padding required)
2. tokenize each batch by passing the batch to the tokenizer
3. run inference on each batch on GPU and notice that runtime is the same for each batch
4. generate batched inputs for the LukeTokenizer with sentences of **different length** in each batch (i.e. padding is required)
5. tokenize each batch by passing the batch to the tokenizer with `padding=True`
6. run inference on each batch on GPU and notice that runtime increases substantially for subsequent batches after first batch
```python
import torch
from transformers import LukeForEntityPairClassification, LukeTokenizer
import time
text1 = "Beyoncé lives in Los Angeles."
entity_spans1 = [(0, 7), (17, 28)]
text2 = "Kevin Love has urged the Cleveland Cavaliers to fight to regain their form following LeBron James' move to the Los Angeles Lakers."
entity_spans2 = [(85, 97), (111, 129)]
# experiment 1 - sentence length is identical across the full batch
text = [[text1] * 10, [text2] * 10]
entity_spans = [[entity_spans1] * 10, [entity_spans2] * 10]
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenized_inputs = []
for text_batch, span_batch in zip(text, entity_spans):
inputs = tokenizer(text_batch, entity_spans=span_batch, return_tensors="pt", padding=True, truncation=True)
tokenized_inputs.append(inputs)
device = torch.device('cuda')
model.to(device)
model.eval()
for i, batch in enumerate(tokenized_inputs):
with torch.no_grad():
start = time.time()
batch.to(device)
outputs = model(**batch)
print(f"runtime batch {i}: ", time.time() - start)
# experiment 2 - sentence length alternates in length across the batch
text = [[text1, text2] * 10] * 2
entity_spans = [[entity_spans1, entity_spans2] * 10] * 2
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenized_inputs = []
for text_batch, span_batch in zip(text, entity_spans):
inputs = tokenizer(text_batch, entity_spans=span_batch, return_tensors="pt", padding=True, truncation=True)
tokenized_inputs.append(inputs)
device = torch.device('cuda')
model.to(device)
model.eval()
for i, batch in enumerate(tokenized_inputs):
with torch.no_grad():
start = time.time()
batch.to(device)
outputs = model(**batch)
print(f"runtime batch {i}: ", time.time() - start)
# results - Tesla T4
runtime batch 0: 0.028860092163085938
runtime batch 1: 0.03273129463195801
runtime batch 0: 0.028328895568847656
runtime batch 1: 0.09934639930725098
```
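As discussed in the comments above, CUDA kernels are launched asynchronously, so the loop above mostly measures launch time rather than execution time. A synchronized variant of the same timing loop (a sketch reusing the names from the script above) looks like this:
```python
for i, batch in enumerate(tokenized_inputs):
    with torch.no_grad():
        batch.to(device)
        torch.cuda.synchronize(device)  # drain pending work before starting the timer
        start = time.time()
        outputs = model(**batch)
        torch.cuda.synchronize(device)  # wait for this batch's kernels to actually finish
        print(f"runtime batch {i}: ", time.time() - start)
```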
## Expected behavior
I expect the runtime to be the same for an identical batch of inputs
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12231/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12231/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12230/comments | https://api.github.com/repos/huggingface/transformers/issues/12230/events | https://github.com/huggingface/transformers/pull/12230 | 924,139,176 | MDExOlB1bGxSZXF1ZXN0NjcyODE5NjA5 | 12,230 | Flax summarization script | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Adds a Flax summarization example. Beam search `generate` works like a charm on TPU, really fast!
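Concretely, the eval-time generation boils down to roughly the following (a sketch; the checkpoint name and generation parameters are illustrative and not taken from the script):
```python
# Rough sketch of jit-compiled beam-search generation with a Flax seq2seq model.
import jax
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")

def generate_step(input_ids, attention_mask):
    # num_beams/max_length are Python constants, so they are static under jit
    out = model.generate(input_ids, attention_mask=attention_mask, num_beams=4, max_length=64)
    return out.sequences

p_generate = jax.jit(generate_step)  # compiled once; subsequent calls run fast on TPU

inputs = tokenizer(["some long article to summarize ..."], return_tensors="np",
                   padding="max_length", max_length=512, truncation=True)
summaries = p_generate(inputs["input_ids"], inputs["attention_mask"])
print(tokenizer.batch_decode(summaries, skip_special_tokens=True))
```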
This makes it very easy and fast to use `generate` in the eval loop. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12230/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12230",
"html_url": "https://github.com/huggingface/transformers/pull/12230",
"diff_url": "https://github.com/huggingface/transformers/pull/12230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12230.patch",
"merged_at": 1624443571000
} |
https://api.github.com/repos/huggingface/transformers/issues/12229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12229/comments | https://api.github.com/repos/huggingface/transformers/issues/12229/events | https://github.com/huggingface/transformers/pull/12229 | 924,040,674 | MDExOlB1bGxSZXF1ZXN0NjcyNzMyODY1 | 12,229 | Add link to the course | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you 🤗 & @sgugger for doing this. \r\nJust so if anyone is looking for the direct link to videos: https://huggingface.co/course/"
] | 1,623 | 1,624 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR adds a link to the Hugging Face course on the first page of the documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12229/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12229",
"html_url": "https://github.com/huggingface/transformers/pull/12229",
"diff_url": "https://github.com/huggingface/transformers/pull/12229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12229.patch",
"merged_at": 1623942893000
} |
https://api.github.com/repos/huggingface/transformers/issues/12228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12228/comments | https://api.github.com/repos/huggingface/transformers/issues/12228/events | https://github.com/huggingface/transformers/pull/12228 | 924,023,465 | MDExOlB1bGxSZXF1ZXN0NjcyNzE4MDM4 | 12,228 | [Flax] FlaxAutoModelForSeq2SeqLM | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR adds `FlaxAutoModelForSeq2SeqLM`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12228/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12228",
"html_url": "https://github.com/huggingface/transformers/pull/12228",
"diff_url": "https://github.com/huggingface/transformers/pull/12228.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12228.patch",
"merged_at": 1624002610000
} |
https://api.github.com/repos/huggingface/transformers/issues/12227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12227/comments | https://api.github.com/repos/huggingface/transformers/issues/12227/events | https://github.com/huggingface/transformers/pull/12227 | 923,848,320 | MDExOlB1bGxSZXF1ZXN0NjcyNTY2NTkw | 12,227 | [Blenderbot] Fix docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,626 | 1,626 | MEMBER | null | # What does this PR do?
Fixes the docs.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12227/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12227",
"html_url": "https://github.com/huggingface/transformers/pull/12227",
"diff_url": "https://github.com/huggingface/transformers/pull/12227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12227.patch",
"merged_at": 1626182251000
} |
https://api.github.com/repos/huggingface/transformers/issues/12226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12226/comments | https://api.github.com/repos/huggingface/transformers/issues/12226/events | https://github.com/huggingface/transformers/pull/12226 | 923,839,009 | MDExOlB1bGxSZXF1ZXN0NjcyNTU4NDMx | 12,226 | update desc for map in all examples | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Closes #11797. I've added the remaining `desc` arguments for `summarization`, `token-classification`, `translation`, and `language-modeling`, and updated `requirements.txt` as well. I wasn't sure what to put in `desc` in some places, so I've added a `# not sure if it's right` comment there. Please let me know if those descriptions look good or whether I should replace them with something else.
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12226/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12226",
"html_url": "https://github.com/huggingface/transformers/pull/12226",
"diff_url": "https://github.com/huggingface/transformers/pull/12226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12226.patch",
"merged_at": 1623958651000
} |
https://api.github.com/repos/huggingface/transformers/issues/12225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12225/comments | https://api.github.com/repos/huggingface/transformers/issues/12225/events | https://github.com/huggingface/transformers/issues/12225 | 923,758,495 | MDU6SXNzdWU5MjM3NTg0OTU= | 12,225 | Pegasus pretraining in fp16 results in NaN loss | {
"login": "kolakows",
"id": 34172905,
"node_id": "MDQ6VXNlcjM0MTcyOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolakows",
"html_url": "https://github.com/kolakows",
"followers_url": "https://api.github.com/users/kolakows/followers",
"following_url": "https://api.github.com/users/kolakows/following{/other_user}",
"gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolakows/subscriptions",
"organizations_url": "https://api.github.com/users/kolakows/orgs",
"repos_url": "https://api.github.com/users/kolakows/repos",
"events_url": "https://api.github.com/users/kolakows/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolakows/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah, not sure to what extent it is feasible to prevent this as Pegasus was pretrained in `bfloat16` cc @stas00 ",
"But I'm pretraining a freshly initialized model, so I think the problem shouldn't be with the `bfloat16` casting",
"That's interesting. We have primarily debugged bf16-pretrained models that almost all had this issue as Patrick says.\r\n\r\nSo this means the model's design is somehow not fp16-friendly.\r\n\r\nCould you take a last checkpoint that was still good and run it with `DebugUnderflowOverflow`\r\nhttps://huggingface.co/transformers/debugging.html#underflow-and-overflow-detection\r\nand report back the failing trace - which will show us where the under/over-flow occurs in the model.\r\n",
"I will debug it, thanks for the link on how to do it, but probably will have the results in like ~2 weeks time, because now I'm waiting for the results of training without mixed precision. ",
"I've run training from checkpoint with debugging, like below:\r\n\r\n```\r\nDebugUnderflowOverflow(model)\r\ntrainer = Trainer(**args)\r\ntrainer.train(resume_from_checkpoint=checkpoint_path)\r\n```\r\n\r\nAnd NaN's happened after around 2 days of training. I've redirected all stdout to file, but the problem is that there wasn't any output from the DebugUnderflowOverflow, no near first nans or other place in file. Console also didn't show anything.\r\n\r\n```\r\nLogging step 78200\r\n{'loss': 3.6594, 'learning_rate': 0.00028089413749472796, 'epoch': 0.54}\r\nLogging step 78220\r\n{'loss': nan, 'learning_rate': 0.0002806832560101223, 'epoch': 0.54}\r\n```\r\nAssuming that I've used DebugUnderflowOverflow correctly, do you have any ideas what might be the source of these nans?\r\n\r\nDisclaimer about the experiment: \r\nLast checkpoint I had was just before the end of the first epoch, the next one was after NaN's started, so I took the last one, but because we have 'dynamic' tokenization it would take long time just to get to the previous point. So I've used option ignore_data_skip which didn't forward the dataset to the checkpoint's data point but just started training on the whole dataset again. I think it shouldn't matter for the purpose of debugging NaNs because in the first run model has seen whole training dataset without throwing NaN's.",
"You can validating that the tracing works with:\r\nhttps://huggingface.co/transformers/debugging.html#specific-batch-absolute-mix-and-max-value-tracing\r\n\r\nThis will just report all min/max values of the desired batches - e.g. batch 0, so that you know it's configured correctly and outputs the data it would if there were to be NaNs.\r\n\r\nLet's validate that it works first and if it does, then hopefully a trace of one batch could shed some light. If it's really long probably make an attachment to your comment.\r\n\r\ne.g. it's possible that the weights are all not-NaNs, but the loss still somehow gets pushed into over/underflow.",
"Turns out I didn't attach the debugger properly for the first time ^^. A way to validate helped, thanks.\r\n\r\nHere are all frames printed by the debugger after detecting inf.\r\n[overflow_debug.txt](https://github.com/huggingface/transformers/files/6777740/overflow_debug.txt)\r\n\r\nNot sure where things go wrong.",
"The last frame is:\r\n```\r\n model.encoder.layers.10 PegasusEncoderLayer\r\n2.44e-04 6.86e+04 input[0]\r\n0.00e+00 3.40e+38 input[1]\r\n9.77e-04 inf output[0]\r\n```\r\nThe weird thing is that, input[1] is probably attention_mask and not sure why and where some of its values are set to inf. I think in the encoder layer it should be 0 or 1, indicating padding masking? ",
"Other thing I don't fully understand is that\r\n\r\n```\r\n model.encoder.layers.10.fc2 Linear\r\n1.52e-08 3.28e+00 weight\r\n3.00e-04 1.71e+00 bias\r\n0.00e+00 1.80e+02 input[0]\r\n0.00e+00 6.06e+04 output\r\n model.encoder.layers.10 PegasusEncoderLayer\r\n2.44e-04 6.86e+04 input[0]\r\n0.00e+00 3.40e+38 input[1]\r\n9.77e-04 inf output[0]\r\n```\r\n\r\nit looks like second layer of feed forward layer returns output that is still acceptable in fp16, but then the whole layer returns inf. So I assume that the overflow occurred somewhere between this line (4.5.1 version) https://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/models/pegasus/modeling_pegasus.py#L337 and the return. \r\n\r\nBut there is a check to clamp any possible overflows.",
"I'd say the next step is to inject the `detect_overflow` between the suspect lines of code, as shown at the very end of:\r\nhttps://huggingface.co/transformers/debugging.html#underflow-and-overflow-detection\r\n\r\nas shown in this example:\r\n```\r\nfrom debug_utils import detect_overflow\r\n\r\nclass T5LayerFF(nn.Module):\r\n [...]\r\n def forward(self, hidden_states):\r\n forwarded_states = self.layer_norm(hidden_states)\r\n detect_overflow(forwarded_states, \"after layer_norm\")\r\n forwarded_states = self.DenseReluDense(forwarded_states)\r\n detect_overflow(forwarded_states, \"after DenseReluDense\")\r\n return hidden_states + self.dropout(forwarded_states)\r\n```\r\n\r\nand then you will know exactly where things overflow.\r\n\r\nAnd once identified you can either turn off the `autocast` off for that line of code, or to change the operation to always cast to fp32, as in `some_torch_op(...., dtype=torch.float32)` if it's a torch op that is.\r\n\r\nFor `autocast` turning off example please see https://github.com/huggingface/transformers/pull/10956/files",
"So I ran some more tests with `detect_overflow`.\r\n\r\nTurns out that scaling up inside `F.dropout` pushes already high values from output of 2nd linear layer (which is fp16) into inf. \r\n\r\nThe next unexpected thing is that, the inf check\r\nhttps://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/models/pegasus/modeling_pegasus.py#L340\r\nshould be moved a few lines of code up. \r\n\r\nAs it is now, during the dtype check, `hidden_states` is already promoted to fp32 after the residual add. In residual add `residual` is fp32 and `hidden_states` is fp16 with possible overflows that get carried to fp32 result.\r\n\r\nMoving the check up will patch the overflows I've been seeing. \r\n\r\nI also think about adding a second check before the first residual add in the encoder, as some values are rather high (2.2e4). And then I'll keep my fingers crossed that nothing overflows in the decoder as I haven't looked into the scale of values there. \r\n",
"I'm glad to hear that you can now easily tell where things overflow, @kolakows.\r\n\r\nPlease remember that the original code was trained in a different dtype regime (bf16 or fp32/tf32) and so the designers of the model haven't had to deal with fp16 and that's why changes are needed to be applied to the original port. This same story happens to pretty much all models of this kind (i.e. not designed to be trained with fp16 in mind).\r\n\r\nI trust you will be able to tweak the code to overcome this.\r\n\r\nYou can approach this in 3 ways \r\n1. explicit upcasting as you suggested\r\n2. turning off `autocast` for the duration of the \"sensitive\" few lines of code.\r\n3. yet another approach is to change the loss function to punish the high weights and encourage the model to use weights in a safe fp16 range, e.g. for t5\r\nhttps://github.com/huggingface/transformers/pull/10956#issuecomment-820712267 - which may or may not work here and of course need to think how to add the extra component in a sensible way.\r\n\r\nThen you can PR the changes and hopefully others will enjoy the fruit of your hard labour. Thank you!",
"Thank you for guiding me on how to debug the model and pointing out possible fixes. It took me some time to wrap my head around fp16. I think now I have a clear understanding on how to approach it. \r\n\r\nFor now I made simple patches and will be running some more training and see how it goes. If I get some nice results, I'll post some summary here and do a PR.",
"A while back I also made a short study comparing bf16 and fp16, so it might be useful too to understand the limitations of fp16:\r\nhttps://github.com/stas00/ml-ways/blob/master/numbers/bfloat16-vs-float16-study.ipynb"
] | 1,623 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using: pegasus
The problem arises when using:
* [ ] my own modified scripts:
```
config = PegasusConfig(**pegasus_config_kwargs)
model = PegasusForConditionalGeneration(config=config)
```
and then using Trainer with fp16 on.
The trainer args I'm using:
```json
{
"logging_strategy": "steps",
"logging_steps": 20,
"save_strategy": "steps",
"save_steps": 5000,
"num_train_epochs": 2,
"lr_scheduler_type": "linear",
"warmup_steps": 10000,
"learning_rate": 0.001,
"dataloader_num_workers": 8,
"per_device_train_batch_size": 16,
"gradient_accumulation_steps": 16,
"group_by_length": true,
"adafactor": true,
"fp16": true
}
```
The tasks I am working on is:
* [ ] my own task or dataset
## To reproduce
I was trying to pretrain Pegasus in fp16 from scratch using a modified script. Training is much faster, around a 40% speedup, but after almost 3 days, when training was about 10% into the second epoch, the loss became NaN. Debugging the place where the overflow occurs is probably possible, but will be troublesome. Do you know what the problem could be, or whether someone is already working on fp16 issues with Pegasus?
I've seen, for example, that it can be a problem when using pretrained checkpoints (https://discuss.huggingface.co/t/finetuning-for-fp16-compatibility/977), but shouldn't it work when initializing the model from a config, like below?
```
config = PegasusConfig(**pegasus_config_kwargs)
model = PegasusForConditionalGeneration(config=config)
```
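The clamp-style patch discussed in the comments above looks roughly like this (a sketch; the exact placement inside `PegasusEncoderLayer.forward` is an assumption based on the thread):
```python
# Sketch: clamp the fp16 feed-forward output *before* the residual add
# promotes it to fp32, so an fp16 overflow cannot be carried through.
if hidden_states.dtype == torch.float16 and (
    torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
):
    clamp_value = torch.finfo(hidden_states.dtype).max - 1000
    hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
hidden_states = residual + hidden_states
```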
Training without fp16 works fine. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12225/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12224/comments | https://api.github.com/repos/huggingface/transformers/issues/12224/events | https://github.com/huggingface/transformers/pull/12224 | 923,732,017 | MDExOlB1bGxSZXF1ZXN0NjcyNDY0Mzcz | 12,224 | Support for torch 1.9.0 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Shouldn't the fx tests be skipped correspondingly? I see the CI logs show that they all passed with 1.9.0 - how is that possible?",
"The `is_torch_fx_available` returns `False` as the versions aren't compatible. The tests for torch.fx require `is_torch_fx_available` to be `True` in order to run!\r\n\r\nYes, switching back to > 1.9.0 once the issue is fixed works for me.",
"OK, but the tests were reported as passed and not skipped. So another todo for down the road is to add a skip rule, so that we don't get a misleading report and have a skipped test appearing as passed. Don't have to do it now.",
"They should have a decorator for that, rather than the in-test check. Would be better reported as skipped, indeed!"
] | 1,623 | 1,623 | 1,623 | MEMBER | null | This PR adds support for torch 1.9.0. It upgrades the CPU CI to use torch 1.9.0, and the GPU CI to use PyTorch's 1.9.0 docker image to run tests.
As discussed with @michaelbenayoun, this puts a hard requirement on having a specific torch version for torch fx to be run. The idea is that:
- The torch fx support in `transformers` is currently experimental, and will be updated *without* backwards compatibility requirements
- To that end, it should always support the latest PyTorch version and not the earlier ones.
- However PyTorch 1.9.0 will not be supported due to https://github.com/pytorch/pytorch/pull/59569
- To that end, we set up a specific version requirement on `torch` in order to offer torch FX support.
Running on torch 1.8.0 and torch 1.8.1, as well as the various torch 1.8.1-cu111 and other 1.8.x versions works correctly.
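Concretely, the gate is along these lines (a simplified sketch of the check, not the exact committed code):
```python
# Simplified sketch of the torch.fx availability gate described above:
# only the torch 1.8.x line enables the experimental fx support.
import torch
from packaging import version

TORCH_FX_REQUIRED_VERSION = version.parse("1.8")

def is_torch_fx_available() -> bool:
    # base_version strips local suffixes such as "+cu111"
    installed = version.parse(version.parse(torch.__version__).base_version)
    return (installed.major, installed.minor) == (
        TORCH_FX_REQUIRED_VERSION.major,
        TORCH_FX_REQUIRED_VERSION.minor,
    )
```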
Running on torch < 1.8 or torch > 1.8 returns:
```
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/lysandre/.config/JetBrains/PyCharm2021.1/scratches/torchfx.py", line 6, in <module>
traced_model = symbolic_trace(
File "/home/lysandre/transformers/src/transformers/modeling_fx_utils.py", line 374, in symbolic_trace
tracer = HFTracer(batch_size=batch_size, sequence_length=sequence_length, num_choices=num_choices)
File "/home/lysandre/transformers/src/transformers/modeling_fx_utils.py", line 152, in __init__
raise ImportError(
ImportError: Found an incompatible version of torch. Found version 1.9.0, but only version 1.8 is supported.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12224/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12224",
"html_url": "https://github.com/huggingface/transformers/pull/12224",
"diff_url": "https://github.com/huggingface/transformers/pull/12224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12224.patch",
"merged_at": 1623943741000
} |
https://api.github.com/repos/huggingface/transformers/issues/12223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12223/comments | https://api.github.com/repos/huggingface/transformers/issues/12223/events | https://github.com/huggingface/transformers/issues/12223 | 923,724,892 | MDU6SXNzdWU5MjM3MjQ4OTI= | 12,223 | Argument `never_split` not working on `AutoTokenizer` | {
"login": "udeepam",
"id": 33596302,
"node_id": "MDQ6VXNlcjMzNTk2MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33596302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/udeepam",
"html_url": "https://github.com/udeepam",
"followers_url": "https://api.github.com/users/udeepam/followers",
"following_url": "https://api.github.com/users/udeepam/following{/other_user}",
"gists_url": "https://api.github.com/users/udeepam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/udeepam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/udeepam/subscriptions",
"organizations_url": "https://api.github.com/users/udeepam/orgs",
"repos_url": "https://api.github.com/users/udeepam/repos",
"events_url": "https://api.github.com/users/udeepam/events{/privacy}",
"received_events_url": "https://api.github.com/users/udeepam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah, I believe the fast tokenizers do not have the `never_split` option. In order to achieve this I would add the tokens to the vocabulary instead cc @n1t0 is there another way to handle this?\r\n\r\n```py\r\n>>> tokenizer.tokenize(\"lol that's funny\")\r\n['lo', '##l', 'that', \"'\", 's', 'funny']\r\n>>> tokenizer.add_tokens([\"lol\"])\r\n1\r\n>>> tokenizer.tokenize(\"lol that's funny\")\r\n['lol', 'that', \"'\", 's', 'funny']\r\n```",
"Thanks! The suggestion works for the token `lol`. However another token that I do not want to be split is `...` and the suggestion does not work for this as shown below.\r\n```python\r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased')\r\n>>> tokenizer.tokenize(\"... that's funny\")\r\n['.', '.', '.', 'that', \"'\", 's', 'funny']\r\n>>> tokenizer.add_tokens([\"...\"])\r\n0\r\n>>> tokenizer.tokenize(\"... that's funny\")\r\n['.', '.', '.', 'that', \r\n```\r\n\r\nHowever, again it does work using the `BertTokenizer` and the `never_split` argument e.g.\r\n```python\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased', never_split={'...'})\r\n>>> tokenizer.tokenize(\"... That's funny\")\r\n['...', 'that', \"'\", 's', 'funny']\r\n```\r\n\r\nIs there another workaround?",
"Hi @udeepam, I'm not sure to understand the goal of doing this.\r\n\r\n```python\r\n>>> tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', use_fast=False, never_split={'lol'})\r\n>>> tokenizer.tokenize(\"lol That's funny\")\r\n['lol', 'that', \"'\", 's', 'funny']\r\n>>> tokens = tokenizer.encode(\"lol That's funny\")\r\n>>> tokens\r\n[101, 100, 2008, 1005, 1055, 6057, 102]\r\n>>> tokenizer.convert_ids_to_tokens(tokens)\r\n['[CLS]', '[UNK]', 'that', \"'\", 's', 'funny', '[SEP]']\r\n```\r\n\r\nAs you can see, the tokenizer doesn't split the `lol` token, but it doesn't know it. So it ends up being an `[UNK]` token. If it knew it, it wouldn't have split it in the first place. Is it the behavior you expect to get?\r\n\r\nUnfortunately, I don't see any other workaround than what @LysandreJik proposed in the first place.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', never_split={'lol'})
tokenizer.tokenize("lol That's funny")
"""
['lo', '##l', 'that', "'", 's', 'funny']
"""
```
## Expected behavior
The expected output should be
```python
['lol', 'that', "'", 's', 'funny']
```
I know that when using `BertTokenizer`, the `never_split` argument works, e.g.:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased', never_split={'lol'})
tokenizer.tokenize("lol That's funny")
"""
['lol', 'that', "'", 's', 'funny']
"""
```
But I want to use the `AutoTokenizer` for another model, `nghuyong/ernie-2.0-en`, and it doesn't work there either.
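A workaround that does honor `never_split` through `AutoTokenizer` is to force the slow Python tokenizer, as shown in the comments above (note the caveat from that discussion: the kept token still encodes to `[UNK]` unless it is also added to the vocabulary):
```python
from transformers import AutoTokenizer

# use_fast=False selects the Python implementation, which supports never_split
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased", use_fast=False, never_split={"lol"})
print(tokenizer.tokenize("lol That's funny"))  # ['lol', 'that', "'", 's', 'funny']
```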
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12223/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12222/comments | https://api.github.com/repos/huggingface/transformers/issues/12222/events | https://github.com/huggingface/transformers/pull/12222 | 923,662,650 | MDExOlB1bGxSZXF1ZXN0NjcyNDAzNTgz | 12,222 | [WIP] enabling `inference_mode` for pipelines for potentially improved perf. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Narsil Curious if you already observed improved performance through this mode? ",
"So small, not really worth it right now. (a few percent tops)\r\n\r\nMain roadblock is that the context manager does not exist in torch 1.7 which is still supported by transformers. (So enabling it would mean adding more logic in transformers to basically use inference_mode when available else, no_grad).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,628 | 1,628 | CONTRIBUTOR | null | # What does this PR do?
This won't work on torch==1.7.1 but does on >=1.8.1 (LTS).
The question is whether we should enable this with a compatibility layer, or simply do nothing.
I think we need a bit of benchmarking to assess the value of this change first.
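A minimal sketch of such a compatibility layer (assuming we simply fall back to `no_grad` on torch versions without `inference_mode`; this is not the PR's actual diff):
```python
import torch

def inference_context():
    # Prefer torch.inference_mode where this torch build exposes it;
    # otherwise fall back to torch.no_grad so older versions keep working.
    if hasattr(torch, "inference_mode"):
        return torch.inference_mode()
    return torch.no_grad()

# usage inside a pipeline's forward pass (model and inputs are placeholders):
# with inference_context():
#     outputs = model(**inputs)
```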
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12222/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12222",
"html_url": "https://github.com/huggingface/transformers/pull/12222",
"diff_url": "https://github.com/huggingface/transformers/pull/12222.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12222.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12221/comments | https://api.github.com/repos/huggingface/transformers/issues/12221/events | https://github.com/huggingface/transformers/issues/12221 | 923,655,450 | MDU6SXNzdWU5MjM2NTU0NTA= | 12,221 | Tokenizer encoding skips � character | {
"login": "seahrh",
"id": 4428622,
"node_id": "MDQ6VXNlcjQ0Mjg2MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4428622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seahrh",
"html_url": "https://github.com/seahrh",
"followers_url": "https://api.github.com/users/seahrh/followers",
"following_url": "https://api.github.com/users/seahrh/following{/other_user}",
"gists_url": "https://api.github.com/users/seahrh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seahrh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seahrh/subscriptions",
"organizations_url": "https://api.github.com/users/seahrh/orgs",
"repos_url": "https://api.github.com/users/seahrh/repos",
"events_url": "https://api.github.com/users/seahrh/events{/privacy}",
"received_events_url": "https://api.github.com/users/seahrh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @n1t0 has an idea!",
"This is totally expected behavior. This tokenizer uses the same cleanup steps that were used in BERT, and this character is specifically removed.\r\n\r\nCf here on line 492:\r\nhttps://github.com/huggingface/transformers/blob/32dbb2d/src/transformers/models/bert/tokenization_bert.py#L487-L498"
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Electra
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
c = "foo � bar"
print(f"c[4:5]={c[4:5]}")
e = tokenizer(c, return_offsets_mapping=True)
print(repr(e))
"""
{'input_ids': [101, 29379, 3347, 102], 'token_type_ids': [0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 3), (6, 9), (0, 0)]}
"""
i = e.char_to_token(4)
print(f"i={repr(i)}") # i=None
```
## Expected behavior
Problem: the � character was not encoded by the tokenizer.
The � character should be encoded as some token, e.g. <UNK> or similar.
Said character appears in the SquadV2 dataset with ID `5acd29f507355d001abf3774`:
```
Question
What is the glyph that Apple's Last Resort font displays?
Context
Rendering software which cannot process a Unicode character appropriately often displays it as an open rectangle, or the Unicode "replacement character" (U+FFFD, �), to indicate the position of the unrecognized character. Some systems have made attempts to provide more information about such characters. The Apple's Last Resort font will display a substitute glyph indicating the Unicode range of the character, and the SIL International's Unicode Fallback font will display a box showing the hexadecimal scalar value of the character.
Answer
�
```
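Update: the comments on this issue point at the BERT-style text cleanup as the cause. Below is a minimal sketch that confirms this and shows one hedged workaround (substituting the unk token changes character offsets, which would then need remapping; `_clean_text` is a private helper and is used here only for illustration):
```python
from transformers import AutoTokenizer
from transformers.models.bert.tokenization_bert import BasicTokenizer

# Confirm that U+FFFD is dropped by the BERT-style text cleanup
print(BasicTokenizer()._clean_text("foo \ufffd bar"))  # -> 'foo  bar'

# Hedged workaround sketch: substitute the unk token before encoding
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
patched = "foo \ufffd bar".replace("\ufffd", tokenizer.unk_token)
print(tokenizer(patched, return_offsets_mapping=True))
```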
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12221/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12220/comments | https://api.github.com/repos/huggingface/transformers/issues/12220/events | https://github.com/huggingface/transformers/issues/12220 | 923,635,462 | MDU6SXNzdWU5MjM2MzU0NjI= | 12,220 | [Trainer.py] tr_loss in trainer with distributed training | {
"login": "logoutAgain",
"id": 23735761,
"node_id": "MDQ6VXNlcjIzNzM1NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23735761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logoutAgain",
"html_url": "https://github.com/logoutAgain",
"followers_url": "https://api.github.com/users/logoutAgain/followers",
"following_url": "https://api.github.com/users/logoutAgain/following{/other_user}",
"gists_url": "https://api.github.com/users/logoutAgain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logoutAgain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logoutAgain/subscriptions",
"organizations_url": "https://api.github.com/users/logoutAgain/orgs",
"repos_url": "https://api.github.com/users/logoutAgain/repos",
"events_url": "https://api.github.com/users/logoutAgain/events{/privacy}",
"received_events_url": "https://api.github.com/users/logoutAgain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Since it's averaged over all the training mini-batches, it should be a good representation of the real training loss. I'd personally avoid any complexity and add a new reduce operation here, since a user can always evaluate on the training set to get the \"real\" training loss if they absolutely need to. Does that make sense?",
"Thank you for your reply. I got it. "
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4, not very sure
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): //
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed training with single node with multi-gpu
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using: trainer.py
## To reproduce
Steps to reproduce the behavior:
1. python -m torch.distributed.launch --nproc_per_node=2 xxx
2. observe the tr_loss
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I'm not sure if it's a bug or a misunderstanding. In `trainer.py`, the `tr_loss` printed during distributed training is only the loss computed on rank 0. Do we need to reduce (e.g. average) the `tr_loss` across ranks?
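For context, here is a minimal sketch of what such a reduction could look like (plain `torch.distributed`, not actual `Trainer` code):
```python
import torch
import torch.distributed as dist

def reduce_mean(loss: torch.Tensor) -> torch.Tensor:
    # Average a scalar loss tensor over all ranks before logging it.
    reduced = loss.detach().clone()
    dist.all_reduce(reduced, op=dist.ReduceOp.SUM)
    reduced /= dist.get_world_size()
    return reduced
```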

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12219/comments | https://api.github.com/repos/huggingface/transformers/issues/12219/events | https://github.com/huggingface/transformers/pull/12219 | 923,570,044 | MDExOlB1bGxSZXF1ZXN0NjcyMzIyMjg5 | 12,219 | Enabling users to provide their own `stopping_criteria` + `logits_processor` to `generate`. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten (Not urgent, get some rest :))",
"Sorry for the late reply here @Narsil - I'm happy with the PR I think :-) If we could add a test that would be great",
"@patrickvonplaten Should I merge this ?",
"I think we shouldn't check anything. If you defined something we pass it `as-is` IMO. It's a poweuser feature, the doc specifically mentions this:\r\n\r\nhttps://github.com/huggingface/transformers/pull/12219/files#diff-b7601d397d5d60326ce61a9c91beaa2afa026014141052b32b07e1d044fbbe17R801",
"But also happy to drop the PR, the issue didn't seem to generate that much traction.\r\nIf we're scared to introduce new range of bugs, hard to understand stuff, maybe let's just drop it.",
"I think it would be nice to merge the PR, but it just doesn't make much sense to me that a default, always-defined value like `max_length=20` would overwrite something that's passed via the `logits_processor`. So instead of dropping the PR we can just ensure that passed `logits_processor` and `stopping_criteria` that are passed have priority which is intuitive and sensible to me. ",
"So, you think, we should\r\n\r\n```python\r\nif logits_processor is None:\r\n logist_processort = self._get_logits_process(...)\r\n```\r\ninstead ?\r\n\r\nMake sense.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Leaving it as closed for now - reopening in case the community expresses interest in this PR again...",
"Thanks a lot for taking this over @lvwerra ! Let me know if you need any help with the remaining tests",
"Superseeded by https://github.com/huggingface/transformers/pull/14779#issuecomment-997914237"
] | 1,623 | 1,640 | 1,640 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12118
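For context, here is a rough sketch of the usage this PR aims to enable: building the processors/criteria yourself and passing them straight to `generate()`. The model and values below are placeholders, not part of the PR.
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
    StoppingCriteriaList,
    MaxLengthCriteria,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids

# User-built objects handed directly to generate(), instead of being
# reconstructed internally from max_length / min_length kwargs.
outputs = model.generate(
    input_ids,
    logits_processor=LogitsProcessorList(
        [MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id)]
    ),
    stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=20)]),
)
print(tokenizer.decode(outputs[0]))
```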
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12219",
"html_url": "https://github.com/huggingface/transformers/pull/12219",
"diff_url": "https://github.com/huggingface/transformers/pull/12219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12219.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12218/comments | https://api.github.com/repos/huggingface/transformers/issues/12218/events | https://github.com/huggingface/transformers/issues/12218 | 923,510,936 | MDU6SXNzdWU5MjM1MTA5MzY= | 12,218 | T5 model seq2seq text generation using word embeddings instead of token_ids does not work | {
"login": "jerry3chen",
"id": 32173077,
"node_id": "MDQ6VXNlcjMyMTczMDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/32173077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerry3chen",
"html_url": "https://github.com/jerry3chen",
"followers_url": "https://api.github.com/users/jerry3chen/followers",
"following_url": "https://api.github.com/users/jerry3chen/following{/other_user}",
"gists_url": "https://api.github.com/users/jerry3chen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerry3chen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerry3chen/subscriptions",
"organizations_url": "https://api.github.com/users/jerry3chen/orgs",
"repos_url": "https://api.github.com/users/jerry3chen/repos",
"events_url": "https://api.github.com/users/jerry3chen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerry3chen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @jerry3chen, \r\n\r\nCan you post a fully reproducible code snippet so that I can take a look? :-)",
"Hi @patrickvonplaten,\r\n\r\nI will post some more detailed codes. But this is downstream task so it is probably not ideal to have all of the code.\r\nI will just post down all of the parts that involve the t5model.\r\n\r\nHere is where I initialized the t5 model\r\n`\r\nenc2 = MT5ForConditionalGeneration.from_pretrained('google/mt5-small')\r\n`\r\n\r\nThen is it passed to a bigger model:\r\n`\r\nmodel = Gat2Seq(enc,enc2,vocab.word2id('<pad>'),vocab.word2id('</s>'))\r\n`\r\n`\r\nclass Gat2Seq(nn.Module):\r\n def __init__(self, encoder, encoder2, pad_idx, eos_idx, teacher_forcing = 0.5):\r\n super().__init__()\r\n self.encoder = encoder\r\n self.encoder2 = encoder2\r\n`\r\nDuring training, I have:\r\n`context = self.encoder(graph, art_lengths)\r\noutputs = self.encoder2(inputs_embeds=context, attention_mask=input_mask, labels=padded_labels)`\r\nWhere context is the shape of [8, 50, 512] coming from previous encoder(8 is the batch size, 50 is the sentence max length, 512 is the embedding size default from mt5tokenizer). padded_labels has shape of [8, 20](8 is the batch size, 20 is the maximum target sequence length). It is batch of target sentence token_ids that I want the model to generate. I wanted the t5model to treated the context as embedded tokens and does it's own encode/decode for text generation.\r\nThe training step works fine and I am able to see reasonable decrease in outputs.loss.\r\n\r\nFinally when I have some trained models, I ran this time to generate text:\r\n`\r\noutputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1)\r\n`\r\nWhere context here is exact the same as the one used in training.\r\n\r\nHowever, I will get the following error when program hits the generation line:\r\n\r\n> File \"pred.py\", line 452, in <module>\r\n> main()\r\n> File \"pred.py\", line 448, in main\r\n> setup_predicting(model, data_loader, hps, vocab, f.split('/')[-1] + '_model_output.txt')\r\n> File \"pred.py\", line 64, in setup_predicting\r\n> run_predicting(model, data_loader, hps, vocab, save_f)\r\n> File \"pred.py\", line 118, in run_predicting\r\n> raise e\r\n> File \"pred.py\", line 106, in run_predicting\r\n> outputs = model.forward(G,lengths,labels,predicting=True) # [n_snodes, 2]\r\n> File \"/scratch/jerryc/jerryc/gat2seq/HeterSumGraph-master-mod-att-TV-char/HiGraphMod.py\", line 470, in forward\r\n> outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1)\r\n> File \"/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n> return func(*args, **kwargs)\r\n> File \"/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py\", line 913, in generate\r\n> input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id\r\n> File \"/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py\", line 422, in _prepare_decoder_input_ids_for_generation\r\n> torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id\r\n> AttributeError: 'NoneType' object has no attribute 'shape'\r\n\r\nHope this is enough for you to diagnose the issue.\r\nThanks,\r\nJerry",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"hello, I face the same problem. Could you give me any suggestions?",
"Hey @jerry3chen, @yuto3o,\r\n\r\nCould you please provide a complete, but **minimal** reproducible code snippet, so that I can easily reproduce the bug?\r\n\r\nSmall non-executeable code snippets are not enough to efficiently debug the problem.\r\n\r\nThanks!",
"@patrickvonplaten @yuto3o @jerry3chen \r\n\r\nHello, I also face the same problem. \r\nHowever, I found that the error doesn't occur if I pass `decoder_input_ids` consisting of `pad_token_id` to the `generate`.\r\nThe minimal reproducible code snippets are as follows. \r\n\r\nMy environment\r\n```\r\ntransformers 4.12.0\r\ntorch 1.8.0\r\n```\r\n\r\n**reproducible code for the error**\r\n\r\n```py\r\nfrom transformers import (\r\n T5ForConditionalGeneration,\r\n T5Tokenizer,\r\n)\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"sonoisa/t5-base-japanese\")\r\ntokenizer = T5Tokenizer.from_pretrained(\"sonoisa/t5-base-japanese\", is_fast=True)\r\n\r\n# the example sentence is \"It's sunny today\" in English\r\ntokenized_inputs = tokenizer([\"今日は良い天気です\"], return_tensors='pt') \r\n\r\n# create input embedding instead of passing input_ids\r\ninputs_embeds = model.get_input_embeddings()(tokenized_inputs[\"input_ids\"])\r\n\r\noutput_ids = model.generate(\r\n inputs_embeds=inputs_embeds,\r\n attention_mask=tokenized_inputs[\"attention_mask\"]\r\n)\r\n```\r\n\r\n> ---------------------------------------------------------------------------\r\n> AttributeError Traceback (most recent call last)\r\n> <ipython-input-32-e369f62c37b6> in <module>\r\n> 1 inputs_embeds = model.get_input_embeddings()(tokenized_inputs[\"input_ids\"])\r\n> ----> 2 output_ids = model.generate(\r\n> 3 inputs_embeds=inputs_embeds,\r\n> 4 attention_mask=tokenized_inputs[\"attention_mask\"]\r\n> 5 )\r\n> \r\n> ~/anaconda3/envs/aitd/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n> 25 def decorate_context(*args, **kwargs):\r\n> 26 with self.__class__():\r\n> ---> 27 return func(*args, **kwargs)\r\n> 28 return cast(F, decorate_context)\r\n> 29 \r\n> \r\n> ~/anaconda3/envs/aitd/lib/python3.8/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)\r\n> 911 input_ids = model_kwargs.pop(\"decoder_input_ids\")\r\n> 912 else:\r\n> --> 913 input_ids = self._prepare_decoder_input_ids_for_generation(\r\n> 914 input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id\r\n> 915 )\r\n> \r\n> ~/anaconda3/envs/aitd/lib/python3.8/site-packages/transformers/generation_utils.py in _prepare_decoder_input_ids_for_generation(self, input_ids, decoder_start_token_id, bos_token_id)\r\n> 422 decoder_start_token_id = self._get_decoder_start_token_id(decoder_start_token_id, bos_token_id)\r\n> 423 decoder_input_ids = (\r\n> --> 424 torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id\r\n> 425 )\r\n> 426 return decoder_input_ids\r\n> \r\n> AttributeError: 'NoneType' object has no attribute 'shape'\r\n> \r\n\r\n\r\n**How to fix it**\r\n```py\r\nfrom transformers import (\r\n T5ForConditionalGeneration,\r\n T5Tokenizer,\r\n)\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"sonoisa/t5-base-japanese\")\r\ntokenizer = 
T5Tokenizer.from_pretrained(\"sonoisa/t5-base-japanese\", is_fast=True)\r\n\r\ntokenized_inputs = tokenizer([\"今日は良い天気です\"], return_tensors='pt') # It's sunny today\r\ninputs_embeds = model.get_input_embeddings()(tokenized_inputs[\"input_ids\"])\r\n\r\n# **NOTE**: pad_token_id is used as decoder_start_token_id\r\ndummy_decoder_input_ids = torch.tensor([[tokenizer.pad_token_id]]) \r\n\r\noutput_ids = model.generate(\r\n inputs_embeds=inputs_embeds,\r\n attention_mask=tokenized_inputs[\"attention_mask\"],\r\n decoder_input_ids=dummy_decoder_input_ids\r\n)\r\n```\r\n\r\n> #output_ids\r\n> tensor([[ 0, 32099, 876, 4, 5, 2262, 32098, 876, 4, 2262,\r\n> 1]])\r\n\r\n\r\n**When I pass `input_ids` to `generate`**\r\n\r\nI can get the same result when I pass `input_ids`.\r\n\r\n```py\r\nfrom transformers import (\r\n T5ForConditionalGeneration,\r\n T5Tokenizer,\r\n)\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"sonoisa/t5-base-japanese\")\r\ntokenizer = T5Tokenizer.from_pretrained(\"sonoisa/t5-base-japanese\", is_fast=True)\r\n\r\ntokenized_inputs = tokenizer([\"今日は良い天気です\"], return_tensors='pt') # It's sunny today\r\n\r\noutput_ids = model.generate(\r\n input_ids=tokenized_inputs[\"input_ids\"],\r\n attention_mask=tokenized_inputs[\"attention_mask\"]\r\n)\r\n```\r\n\r\n> #output_ids\r\n> tensor([[ 0, 32099, 876, 4, 5, 2262, 32098, 876, 4, 2262,\r\n> 1]])",
"@ichiroex,\r\n\r\nThanks for the nicely reproducible code snippet - this is indeed a bug and should be fixed.",
"PR to fix this: #14443 ",
"@patrickvonplaten Thank you!!"
] | 1,623 | 1,637 | 1,627 | NONE | null | Hi there,
I trained an MT5ForConditionalGeneration model. During training, I used my own embeddings for encoding (but the default embeddings for decoding). However, when I try to generate output using the generate function, I get an error message. I will post the code and the error message below:
Here is the code for model training:
`outputs = self.encoder2(inputs_embeds=context, attention_mask=input_mask, labels=padded_labels)`
Here, `context` is similar to a batch of token_ids, except that the tokens are already embedded. The labels are the target sequence token_ids. Training works fine without any issues.
And here is the line where I try to generate with the model:
`outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1)`
And once the program hits the above line, I will get the following error message:
> outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1)
> File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
> return func(*args, **kwargs)
> File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 913, in generate
> input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id
> File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 422, in _prepare_decoder_input_ids_for_generation
> torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id
> AttributeError: 'NoneType' object has no attribute 'shape'
It seems the model is not handling this case properly.
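Update: below is a minimal sketch of the workaround from the comments: seed the decoder with a pad token so `generate()` does not have to infer a batch size from the missing `input_ids`. Here `model`, `context` and `input_mask` stand in for `self.encoder2` and the tensors above.
```python
import torch

# `model`, `context`, `input_mask` stand in for self.encoder2 and the tensors above
pad_id = model.config.pad_token_id
dummy_decoder_input_ids = torch.full((context.shape[0], 1), pad_id, dtype=torch.long)
outputs = model.generate(
    inputs_embeds=context,
    attention_mask=input_mask,
    decoder_input_ids=dummy_decoder_input_ids,
)
```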
Any help would be appreciated.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12217/comments | https://api.github.com/repos/huggingface/transformers/issues/12217/events | https://github.com/huggingface/transformers/pull/12217 | 923,384,401 | MDExOlB1bGxSZXF1ZXN0NjcyMTU1Njc5 | 12,217 | fix pt-1.9.0 `add_` deprecation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can we had an import error in AdaFactor to error if the version is les than 1.5 then? It seems the code is only there.",
"Sure, I'm just not sure where we are at `transformers`-wise with minimal pt version, so it might be simpler to require pt-1.5+, but the suggestion you made works too for now.\r\n\r\nWould it help to add `()` for `alpha` as described in the last section of OP? ",
"Yes, I missed that part. Adding parenthesis is fine!",
"@sgugger, it's in AdamW too - it's just whoever coded it hasn't checked back-compat (they didn't know), i.e. search for `add_` - so I think we either need a wrapper or cut off at pt-1.5.0 project-wise.\r\n\r\nFound this now, as I was adding `()` for clarity. See the new diff.",
"as discussed on slack for now adding:\r\n```\r\nrequire_version(\"torch>=1.5.0\") # add_ with alpha\r\n```\r\nfor AdamW and Adafactor.\r\n"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | This PR fixes a new pt-1.9.0 `add_` deprecation in several places.
The deprecation warnings:
```
UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:1025.)
exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update)
```
The new API is at https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html
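The migration is just moving `alpha` into a keyword argument. A tiny sketch with throwaway tensors:
```python
import torch

exp_avg, update, beta1 = torch.zeros(3), torch.ones(3), 0.9
exp_avg.mul_(beta1).add_(1.0 - beta1, update)        # deprecated form, triggers the warning above
exp_avg.mul_(beta1).add_(update, alpha=1.0 - beta1)  # new keyword form
```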
## Backward compatibility alert
I tracked this API down: it needs pt-1.5.0 or higher.
Requesting an easier way to do this kind of process: https://github.com/pytorch/pytorch/issues/60149
I still have no idea which minimal pytorch version `transformers` is meant to support. Merging this PR will push it at least to `torch>=1.5.0`. Last I [checked](https://github.com/huggingface/transformers/pull/7985) some 8 months ago we barely supported `torch>=1.4.0`.
If you're OK with `torch>=1.5.0` then we should revive and update [this](https://github.com/huggingface/transformers/pull/7985) or make a new PR or fix it here.
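One way to make that floor explicit is a version guard in the affected optimizers (sketch only):
```python
from transformers.utils.versions import require_version

require_version("torch>=1.5.0")  # `add_` with keyword `alpha` needs pt-1.5+
```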
## Readability
Unfortunately, since we have to use the named arg now, the autoformatter makes the code less readable by forcing whitespace into the expression.
I wrote these as:
```
exp_avg.mul_(group["beta1"]).add_(update, alpha=1-group["beta1"])
```
to make it clear that it's an expression, but the autoformatter turned it into:
```
exp_avg.mul_(group["beta1"]).add_(update, alpha=1 - group["beta1"])
```
Now it looks like `alpha` is 1. grrrr. Perhaps `()` are needed for improved readability, i.e.:
```
exp_avg.mul_(group["beta1"]).add_(update, alpha=(1 - group["beta1"]))
```
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12217",
"html_url": "https://github.com/huggingface/transformers/pull/12217",
"diff_url": "https://github.com/huggingface/transformers/pull/12217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12217.patch",
"merged_at": 1623945240000
} |
https://api.github.com/repos/huggingface/transformers/issues/12216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12216/comments | https://api.github.com/repos/huggingface/transformers/issues/12216/events | https://github.com/huggingface/transformers/pull/12216 | 923,244,296 | MDExOlB1bGxSZXF1ZXN0NjcyMDI5NjYw | 12,216 | Fix blenderbot checkpoint convert codes. | {
"login": "hyunwoongko",
"id": 38183241,
"node_id": "MDQ6VXNlcjM4MTgzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38183241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyunwoongko",
"html_url": "https://github.com/hyunwoongko",
"followers_url": "https://api.github.com/users/hyunwoongko/followers",
"following_url": "https://api.github.com/users/hyunwoongko/following{/other_user}",
"gists_url": "https://api.github.com/users/hyunwoongko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyunwoongko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyunwoongko/subscriptions",
"organizations_url": "https://api.github.com/users/hyunwoongko/orgs",
"repos_url": "https://api.github.com/users/hyunwoongko/repos",
"events_url": "https://api.github.com/users/hyunwoongko/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyunwoongko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | #12203 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12216/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12216",
"html_url": "https://github.com/huggingface/transformers/pull/12216",
"diff_url": "https://github.com/huggingface/transformers/pull/12216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12216.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12215/comments | https://api.github.com/repos/huggingface/transformers/issues/12215/events | https://github.com/huggingface/transformers/issues/12215 | 923,218,997 | MDU6SXNzdWU5MjMyMTg5OTc= | 12,215 | Missing PredictionHeadTransform for BertGenerationDecoder | {
"login": "j-min",
"id": 18069263,
"node_id": "MDQ6VXNlcjE4MDY5MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/18069263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-min",
"html_url": "https://github.com/j-min",
"followers_url": "https://api.github.com/users/j-min/followers",
"following_url": "https://api.github.com/users/j-min/following{/other_user}",
"gists_url": "https://api.github.com/users/j-min/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-min/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-min/subscriptions",
"organizations_url": "https://api.github.com/users/j-min/orgs",
"repos_url": "https://api.github.com/users/j-min/repos",
"events_url": "https://api.github.com/users/j-min/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-min/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"SImilarly, `token_type_embeddings` is also missing for [BertGenerationEmbeddings](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert_generation/modeling_bert_generation.py#L133).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @j-min,\r\n\r\n`BertForGeneration` was added so that the checkpoints of https://huggingface.co/blog/warm-starting-encoder-decoder can be used in Transformers. Those models don't really need `token_type_ids` since they are generation models and also the lm head is different\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,631 | 1,631 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert, BertForGeneration
It seems the [`BertPredictionHeadTransform`](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert/modeling_bert.py#L645) layer (dense+layer norm) is not used in [BertGenerationDecoder](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert_generation/modeling_bert_generation.py#L430), while it is used in [the original BERT](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert/modeling_bert.py#L657). Is this expected?
## To reproduce
Steps to reproduce the behavior:
```python3
from transformers import BertForPreTraining, BertGenerationDecoder
bert = BertForPreTraining.from_pretrained('bert-base-uncased')
bert
>>> ....
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(cls): BertPreTrainingHeads(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=30522, bias=True)
)
(seq_relationship): Linear(in_features=768, out_features=2, bias=True)
)
)
bertdecoder = BertGenerationDecoder.from_pretrained('bert-base-uncased', is_decoder=True)
bertdecoder
>>> ....
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(lm_head): BertGenerationOnlyLMHead(
(decoder): Linear(in_features=768, out_features=30522, bias=True)
)
)
```
## Expected behavior
BertGenerationDecoder has the same transform layer before the final LM head.
```python3
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
```
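Update: in case it helps anyone in the meantime, here is a rough sketch of grafting the transform on manually. It reuses the real `BertPredictionHeadTransform` module, but rewiring `lm_head` like this is purely illustrative and can break weight tying and `save_pretrained` round-trips:
```python
import torch.nn as nn
from transformers.models.bert.modeling_bert import BertPredictionHeadTransform

transform = BertPredictionHeadTransform(bertdecoder.config)
# Insert dense + LayerNorm in front of the existing output projection
bertdecoder.lm_head = nn.Sequential(transform, bertdecoder.lm_head.decoder)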
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12214/comments | https://api.github.com/repos/huggingface/transformers/issues/12214/events | https://github.com/huggingface/transformers/issues/12214 | 923,179,660 | MDU6SXNzdWU5MjMxNzk2NjA= | 12,214 | Getting 404 Client Error when loading BaptisteDoyen/camembert-base-xnli | {
"login": "mothguib",
"id": 17726728,
"node_id": "MDQ6VXNlcjE3NzI2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17726728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mothguib",
"html_url": "https://github.com/mothguib",
"followers_url": "https://api.github.com/users/mothguib/followers",
"following_url": "https://api.github.com/users/mothguib/following{/other_user}",
"gists_url": "https://api.github.com/users/mothguib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mothguib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mothguib/subscriptions",
"organizations_url": "https://api.github.com/users/mothguib/orgs",
"repos_url": "https://api.github.com/users/mothguib/repos",
"events_url": "https://api.github.com/users/mothguib/events{/privacy}",
"received_events_url": "https://api.github.com/users/mothguib/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's a typo in your model identifier:\r\n```diff\r\n- BaptisteDoyen/camembert-base-xlni\r\n+ BaptisteDoyen/camembert-base-xnli\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.11.16-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The problem arises when using:
* [ ] the official example scripts: https://huggingface.co/BaptisteDoyen/camembert-base-xlni
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="BaptisteDoyen/camembert-base-xnli")
```
returns:
```
404 Client Error: Not Found for url: https://huggingface.co/BaptisteDoyen/camembert-base-xnli/resolve/main/config.json
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12213/comments | https://api.github.com/repos/huggingface/transformers/issues/12213/events | https://github.com/huggingface/transformers/issues/12213 | 923,178,522 | MDU6SXNzdWU5MjMxNzg1MjI= | 12,213 | [Question] When pretraining a language model, can I choose to mask specific words? | {
"login": "wenting-zhao",
"id": 8762524,
"node_id": "MDQ6VXNlcjg3NjI1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8762524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenting-zhao",
"html_url": "https://github.com/wenting-zhao",
"followers_url": "https://api.github.com/users/wenting-zhao/followers",
"following_url": "https://api.github.com/users/wenting-zhao/following{/other_user}",
"gists_url": "https://api.github.com/users/wenting-zhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wenting-zhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wenting-zhao/subscriptions",
"organizations_url": "https://api.github.com/users/wenting-zhao/orgs",
"repos_url": "https://api.github.com/users/wenting-zhao/repos",
"events_url": "https://api.github.com/users/wenting-zhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/wenting-zhao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | Hi there,
I apologize if this has been answered anywhere. I need to pretrain a language model with some specific words masked, and I was wondering whether this is currently supported. Since language models are trained in an unsupervised way, I saw in the examples that the provided datasets don't need any labels. However, I was wondering whether it would be possible to create my own (sentence_with_masks, masked_words) pairs. If the library doesn't currently support this, could anyone point me to the file where I should make my modifications?
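For concreteness, here is a rough sketch of the kind of targeted masking I have in mind (illustrative only, not an official API); the random masking itself seems to live in `DataCollatorForLanguageModeling` (`src/transformers/data/data_collator.py`):
```python
import torch

def mask_specific_tokens(input_ids, target_ids, mask_token_id):
    """Mask only positions whose token id is in `target_ids`; ignore the rest in the loss."""
    labels = input_ids.clone()
    to_mask = torch.zeros_like(input_ids, dtype=torch.bool)
    for tid in target_ids:
        to_mask |= input_ids == tid
    labels[~to_mask] = -100            # -100 positions are skipped by the LM loss
    masked = input_ids.clone()
    masked[to_mask] = mask_token_id    # replace the chosen words with the mask token
    return masked, labels
```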
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12213/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12212/comments | https://api.github.com/repos/huggingface/transformers/issues/12212/events | https://github.com/huggingface/transformers/issues/12212 | 923,126,639 | MDU6SXNzdWU5MjMxMjY2Mzk= | 12,212 | Clearer indication for overridden method in generation | {
"login": "ktangri",
"id": 22266659,
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktangri",
"html_url": "https://github.com/ktangri",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktangri/subscriptions",
"organizations_url": "https://api.github.com/users/ktangri/orgs",
"repos_url": "https://api.github.com/users/ktangri/repos",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktangri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also putting this in the \"Fix generation docs\" task basket",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,631 | null | NONE | null | The expectation for the `prepare_inputs_for_generation` function to be overridden can be made clearer by changing https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L369-L374
to raise a `NotImplementedError` that provides the information mentioned in the function's comment.
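Concretely, one possible shape of that change (a sketch of the proposal, not current behavior; today the method just returns `{"input_ids": input_ids}`):
```python
def prepare_inputs_for_generation(self, input_ids, **kwargs):
    raise NotImplementedError(
        f"{self.__class__.__name__} must override `prepare_inputs_for_generation` "
        "to be usable with `.generate()`."
    )
```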
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12212/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12212/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12211/comments | https://api.github.com/repos/huggingface/transformers/issues/12211/events | https://github.com/huggingface/transformers/pull/12211 | 923,123,256 | MDExOlB1bGxSZXF1ZXN0NjcxOTE5NzA5 | 12,211 | [WIP] tweak model repo saving | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This was incorporated by @sgugger and @LysandreJik in another PR"
] | 1,623 | 1,627 | 1,627 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12211/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12211",
"html_url": "https://github.com/huggingface/transformers/pull/12211",
"diff_url": "https://github.com/huggingface/transformers/pull/12211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12211.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/12210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12210/comments | https://api.github.com/repos/huggingface/transformers/issues/12210/events | https://github.com/huggingface/transformers/issues/12210 | 923,106,608 | MDU6SXNzdWU5MjMxMDY2MDg= | 12,210 | Better documentation for generation parameter defaults | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Fine by me! I think it can just be stated at the beginning before each arg is documented.",
"Putting this is the \"Improve generation task basket\" so that this is handled once the generation docs are improved as well",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,631 | null | MEMBER | null | # Generation default params documentation
It's very hard to follow how the generation parameters are set when running generation. When looking at the official function: https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L644 all parameters default to `None`, but they are then later overwritten by the config's default parameters, *e.g.* here: https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L878. This is very hard to trace or follow. We should at least put a warning or note that clearly states that all generation parameters (and in fact all forward parameters) **always** default to the config.
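As a rough illustration of the pattern being described (a simplified sketch of the fallback logic, not the verbatim library source):
```python
# Simplified sketch of the config-fallback behavior inside `generate()`.
def generate(self, input_ids=None, max_length=None, num_beams=None, **kwargs):
    # Any argument left as None silently falls back to the model config,
    # which is what makes the effective defaults hard to trace.
    max_length = max_length if max_length is not None else self.config.max_length
    num_beams = num_beams if num_beams is not None else self.config.num_beams
    ...
```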
What do you think @LysandreJik @patil-suraj @sgugger ?
If you agree, I'll open a PR for it :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12210/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12210/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12209/comments | https://api.github.com/repos/huggingface/transformers/issues/12209/events | https://github.com/huggingface/transformers/issues/12209 | 923,058,019 | MDU6SXNzdWU5MjMwNTgwMTk= | 12,209 | The kernel appears to have died. It will restart automatically. from transformers import pipeline | {
"login": "splurring",
"id": 86021446,
"node_id": "MDQ6VXNlcjg2MDIxNDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/86021446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/splurring",
"html_url": "https://github.com/splurring",
"followers_url": "https://api.github.com/users/splurring/followers",
"following_url": "https://api.github.com/users/splurring/following{/other_user}",
"gists_url": "https://api.github.com/users/splurring/gists{/gist_id}",
"starred_url": "https://api.github.com/users/splurring/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/splurring/subscriptions",
"organizations_url": "https://api.github.com/users/splurring/orgs",
"repos_url": "https://api.github.com/users/splurring/repos",
"events_url": "https://api.github.com/users/splurring/events{/privacy}",
"received_events_url": "https://api.github.com/users/splurring/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you share a colab with a reproducible code example so that we may take a look? Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | I am working in a Jupyter Notebook.
With the code:
`from transformers import pipeline`
I get:
"The kernel appears to have died. It will restart automatically."
Can someone explain what I need to do to fix this? I have already installed TensorFlow and Transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12209/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12208/comments | https://api.github.com/repos/huggingface/transformers/issues/12208/events | https://github.com/huggingface/transformers/pull/12208 | 923,048,579 | MDExOlB1bGxSZXF1ZXN0NjcxODUzNDcy | 12,208 | AutoTokenizer: infer the class from the tokenizer config if possible | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR adds the functionality to load a tokenizer with `AutoTokenizer.from_pretrained` after saving it locally (without saving the model config in the same folder).
To do this, the proper tokenizer class is saved in `tokenizer_config.json` and the `AutoTokenizer.from_pretrained` method will first look in this file before defaulting to the model config (like before).
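A minimal usage sketch of the behavior this enables (model name and paths are placeholders):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./my-tokenizer")  # now also records the tokenizer class in tokenizer_config.json

# Reloading works even though no model config.json was saved alongside the tokenizer.
reloaded = AutoTokenizer.from_pretrained("./my-tokenizer")
```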
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12208/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12208",
"html_url": "https://github.com/huggingface/transformers/pull/12208",
"diff_url": "https://github.com/huggingface/transformers/pull/12208.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12208.patch",
"merged_at": 1623947962000
} |
https://api.github.com/repos/huggingface/transformers/issues/12207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12207/comments | https://api.github.com/repos/huggingface/transformers/issues/12207/events | https://github.com/huggingface/transformers/pull/12207 | 923,036,791 | MDExOlB1bGxSZXF1ZXN0NjcxODQyODY4 | 12,207 | Pipeline update & tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | Image classification models that have fewer than 5 labels currently cannot run with the pipeline defaults, as the pipeline uses a top_k of 5 by default. This PR caps top_k so that it maxes out at the number of labels of the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12207/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/12207/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12207",
"html_url": "https://github.com/huggingface/transformers/pull/12207",
"diff_url": "https://github.com/huggingface/transformers/pull/12207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12207.patch",
"merged_at": 1623915676000
} |
https://api.github.com/repos/huggingface/transformers/issues/12206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12206/comments | https://api.github.com/repos/huggingface/transformers/issues/12206/events | https://github.com/huggingface/transformers/pull/12206 | 923,022,712 | MDExOlB1bGxSZXF1ZXN0NjcxODMwMzYy | 12,206 | Add TFHubertModel | {
"login": "will-rice",
"id": 25072137,
"node_id": "MDQ6VXNlcjI1MDcyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-rice",
"html_url": "https://github.com/will-rice",
"followers_url": "https://api.github.com/users/will-rice/followers",
"following_url": "https://api.github.com/users/will-rice/following{/other_user}",
"gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-rice/subscriptions",
"organizations_url": "https://api.github.com/users/will-rice/orgs",
"repos_url": "https://api.github.com/users/will-rice/repos",
"events_url": "https://api.github.com/users/will-rice/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-rice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @will-rice,\r\n\r\nWow that was quick! :D Can you remove the [WIP] whenever the PR is ready for review? :-)",
"One thing that's different from the PyTorch version is I couldn't use the copy comments because I added the type to the config arguments in Wav2Vec2. If I retained the copy comments it would overwrite HubertConfig with Wav2Vec2Config. Which makes sense, but I wondered if there was a way to fix this so I could keep the copy comments, but ignore the config type.",
"I added back the WIP based on the TFWav2Vec2 [bugs](https://github.com/huggingface/transformers/issues/12264#issuecomment-864611327). I will update this with the fixes when those are corrected. ",
"@patrickvonplaten I believe this one is ready for review now. I updated it with the wav2vec2 bug fixes.",
"@patrickvonplaten I can definitely add the copy comments. The issue I ran into was due to Wav2Vec2Config typing in TFWav2Vec2 so the copy script overwrites the TFHubertConfig. I didn't look in depth at the copy code, but I was thinking that we could allow the copy to ignore typing.",
"Removing the config typing from TFWav2Vec2 would work though and that's how it is in PyTorch.",
"> Removing the config typing from TFWav2Vec2 would work though and that's how it is in PyTorch.\r\n\r\nAh you can add something like `with Wav2Vec2->Hubert` which should correctly replace the class name when copying",
"Left a comment here: https://github.com/huggingface/transformers/pull/12206/files?file-filters%5B%5D=.py#r667074787 :-) That's how it should work well with the configs",
"> Left a comment here: https://github.com/huggingface/transformers/pull/12206/files?file-filters%5B%5D=.py#r667074787 :-) That's how it should work well with the configs\n\n🤦♂️ \n\nThanks! I will update it now."
] | 1,623 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the TFHubertModel.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@sgugger
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12206/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12206/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12206",
"html_url": "https://github.com/huggingface/transformers/pull/12206",
"diff_url": "https://github.com/huggingface/transformers/pull/12206.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12206.patch",
"merged_at": 1625853326000
} |
https://api.github.com/repos/huggingface/transformers/issues/12205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12205/comments | https://api.github.com/repos/huggingface/transformers/issues/12205/events | https://github.com/huggingface/transformers/pull/12205 | 922,902,435 | MDExOlB1bGxSZXF1ZXN0NjcxNzIyNzI5 | 12,205 | [Docs] fixed broken link | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks again!"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12200
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12205",
"html_url": "https://github.com/huggingface/transformers/pull/12205",
"diff_url": "https://github.com/huggingface/transformers/pull/12205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12205.patch",
"merged_at": 1623870893000
} |
https://api.github.com/repos/huggingface/transformers/issues/12204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12204/comments | https://api.github.com/repos/huggingface/transformers/issues/12204/events | https://github.com/huggingface/transformers/pull/12204 | 922,863,707 | MDExOlB1bGxSZXF1ZXN0NjcxNjg4MTIx | 12,204 | (#12203) Fix blenderbot checkpoint convert code. | {
"login": "hyunwoongko",
"id": 38183241,
"node_id": "MDQ6VXNlcjM4MTgzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38183241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyunwoongko",
"html_url": "https://github.com/hyunwoongko",
"followers_url": "https://api.github.com/users/hyunwoongko/followers",
"following_url": "https://api.github.com/users/hyunwoongko/following{/other_user}",
"gists_url": "https://api.github.com/users/hyunwoongko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyunwoongko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyunwoongko/subscriptions",
"organizations_url": "https://api.github.com/users/hyunwoongko/orgs",
"repos_url": "https://api.github.com/users/hyunwoongko/repos",
"events_url": "https://api.github.com/users/hyunwoongko/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyunwoongko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/issues/12203 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12204/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12204",
"html_url": "https://github.com/huggingface/transformers/pull/12204",
"diff_url": "https://github.com/huggingface/transformers/pull/12204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12204.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12203/comments | https://api.github.com/repos/huggingface/transformers/issues/12203/events | https://github.com/huggingface/transformers/issues/12203 | 922,862,638 | MDU6SXNzdWU5MjI4NjI2Mzg= | 12,203 | blenderbot checkpoint convert script has a bug. | {
"login": "hyunwoongko",
"id": 38183241,
"node_id": "MDQ6VXNlcjM4MTgzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38183241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyunwoongko",
"html_url": "https://github.com/hyunwoongko",
"followers_url": "https://api.github.com/users/hyunwoongko/followers",
"following_url": "https://api.github.com/users/hyunwoongko/following{/other_user}",
"gists_url": "https://api.github.com/users/hyunwoongko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyunwoongko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyunwoongko/subscriptions",
"organizations_url": "https://api.github.com/users/hyunwoongko/orgs",
"repos_url": "https://api.github.com/users/hyunwoongko/repos",
"events_url": "https://api.github.com/users/hyunwoongko/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyunwoongko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Now I have almost fixed the bug. I will PR soon."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | - error

- the original ParlAI checkpoint

- So we should fix the code as shown below (a hypothetical usage note is included as comments at the end of the block):
```python
def rename_layernorm_keys(sd):
keys = [
"encoder.norm_embeddings.weight",
"encoder.norm_embeddings.bias",
"decoder.norm_embeddings.weight",
"decoder.norm_embeddings.bias",
]
for k in keys:
v = sd.pop(k)
new_k = "model." + k.replace("norm_embeddings", "layer_norm")
assert new_k not in sd
sd[new_k] = v
IGNORE_KEYS = ["START"]
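# Hypothetical usage sketch (assumption, not part of the proposed patch):
#   sd = torch.load(parlai_checkpoint_path, map_location="cpu")["model"]
#   rename_layernorm_keys(sd)
#   model.load_state_dict(sd, strict=False)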
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12203/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12203/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12202/comments | https://api.github.com/repos/huggingface/transformers/issues/12202/events | https://github.com/huggingface/transformers/issues/12202 | 922,709,925 | MDU6SXNzdWU5MjI3MDk5MjU= | 12,202 | Training in google colab with TPU using TFTrainer fails with | {
"login": "YanDavKMS",
"id": 52955700,
"node_id": "MDQ6VXNlcjUyOTU1NzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/52955700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YanDavKMS",
"html_url": "https://github.com/YanDavKMS",
"followers_url": "https://api.github.com/users/YanDavKMS/followers",
"following_url": "https://api.github.com/users/YanDavKMS/following{/other_user}",
"gists_url": "https://api.github.com/users/YanDavKMS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YanDavKMS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YanDavKMS/subscriptions",
"organizations_url": "https://api.github.com/users/YanDavKMS/orgs",
"repos_url": "https://api.github.com/users/YanDavKMS/repos",
"events_url": "https://api.github.com/users/YanDavKMS/events{/privacy}",
"received_events_url": "https://api.github.com/users/YanDavKMS/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! We're trying to move away from using TFTrainer for TensorFlow and instead train models with the native Keras API. We have a full example using the Keras approach here: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification\r\n\r\nTraining on TPU with this example works correctly, but there are some issues with Keras predictions on TPU that we're actively working on. If you encounter these (the output object contains None fields that should contain values), you can try moving any `predict` calls out of the `strategy.scope()`, or saving the model and doing the predictions on a GPU or CPU instance instead.",
"Is there any chance this will be fixed?\r\nTF/Trainer has many things that are useful and easier to use.",
"Unfortunately, we're probably going to be moving away from TFTrainer entirely - it's actually likely to be deprecated in the very near future! We will, however, be making ongoing adjustments to our models and data preprocessing to ensure people's workflows remain smooth!",
"Sounds good. Thank you very much!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> Hi! We're trying to move away from using TFTrainer for TensorFlow and instead train models with the native Keras API. We have a full example using the Keras approach here: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification\r\n> \r\n> Training on TPU with this example works correctly, but there are some issues with Keras predictions on TPU that we're actively working on. If you encounter these (the output object contains None fields that should contain values), you can try moving any `predict` calls out of the `strategy.scope()`, or saving the model and doing the predictions on a GPU or CPU instance instead.\r\n\r\n`predict` works slowly outside of `strategy.scope()`. Is there any other way to make `predict` working with TPU ? I tried to create custom loop for prediction using `tf.function` - it doesn't work with TPU.",
"Not easily, unfortunately. This is a known issue at our end and we're hoping to implement a fix, but in the meantime you can try exporting your trained model to a GPU instance and running `predict()` there."
] | 1,623 | 1,630 | 1,629 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Using TPU
- Using distributed or parallel set-up in script?: I assume Yes, under the hood
### Who can help
- trainer: @sgugger @Rocketknight1
## Information
Model I am using (Albert):
The problem arises when using:
* [ ] my own modified scripts
The tasks I am working on is:
* [ ] my own task or dataset
## To reproduce
I'm trying to train classification model on TPU using TFTrainer, it fails with the following error:
> Trying to run metric.update_state in replica context when the metric was not created in TPUStrategy scope. Make sure the keras Metric is created in TPUstrategy scope.
I tried training without eval and it finishes without an error but the model is not really trained and results are poor.
Also tried to train with eval and without compute_metrics but the same error is thrown.
```
from transformers import TFTrainer, TFTrainingArguments
from transformers import TFAutoModelForSequenceClassification
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'precision': precision,
'recall': recall,
'f1': f1
}
training_args = TFTrainingArguments(
tpu_num_cores=8,
output_dir=output_dir, # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=3, # batch size per device during training
per_device_eval_batch_size=3, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir=logging_dir, # directory for storing logs
logging_steps=10,
evaluation_strategy="steps",
eval_steps=500,
save_steps=3000,
load_best_model_at_end=True,
metric_for_best_model="f1",
learning_rate=1e-5
)
with training_args.strategy.scope():
model = TFAutoModelForSequenceClassification.from_pretrained(modelName,
num_labels=len(label_dict),
output_attentions=False,
output_hidden_states=False)
trainer = TFTrainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
compute_metrics=compute_metrics,
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
)
trainer.train()
```
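For reference, a minimal sketch of the Keras-native route suggested in the comments above (dataset names are placeholders, and the loss/metric wiring is an assumption for illustration):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Model, optimizer, loss and metrics are all created inside the TPU scope,
    # which avoids the "metric was not created in TPUStrategy scope" error.
    model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=len(label_dict))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# train_tf_dataset / val_tf_dataset are placeholder tf.data.Dataset objects of (features, labels).
model.fit(train_tf_dataset, validation_data=val_tf_dataset, epochs=3)
```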
## Expected behavior
I would expect to train successfully on TPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12201/comments | https://api.github.com/repos/huggingface/transformers/issues/12201/events | https://github.com/huggingface/transformers/issues/12201 | 922,684,254 | MDU6SXNzdWU5MjI2ODQyNTQ= | 12,201 | ValueError: char_to_token() is not available when using Python based tokenizers ; XLNetTokenizer and encodings.char_to_token bug ; | {
"login": "akar5h",
"id": 19966604,
"node_id": "MDQ6VXNlcjE5OTY2NjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/19966604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akar5h",
"html_url": "https://github.com/akar5h",
"followers_url": "https://api.github.com/users/akar5h/followers",
"following_url": "https://api.github.com/users/akar5h/following{/other_user}",
"gists_url": "https://api.github.com/users/akar5h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akar5h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akar5h/subscriptions",
"organizations_url": "https://api.github.com/users/akar5h/orgs",
"repos_url": "https://api.github.com/users/akar5h/repos",
"events_url": "https://api.github.com/users/akar5h/events{/privacy}",
"received_events_url": "https://api.github.com/users/akar5h/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Why are you not using the fast tokenizer? The error message tells you that the feature `char_to_token` is not available for the slow (i.e. python) tokenizers because nobody has implemented it yet.",
"@cronoik , I ran the same with Fast Tokenizer (XLNetTokenizerFast) on \"xlnet-base-cased\" , although char_to_token() was available this time , there seems to be some problem with XLNetTokenizerFast . \r\n``` \r\n start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))\r\n end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))\r\n```\r\nWhile debugging this snippet from the above code, I observed using **XLNetTokenizerFast on \"xlnet-base-cased\"** that `encodings.char_to_token(i, answers[i]['answer_start'])`\r\nis None for most of the cases . (90%) . The output is None , hence encoding[\"start_position\"] and \"end_position\" have erronous values . \r\nand just changing the model, i.e, using **AutoTokenizer on \"roberta-base\"** , Unlike above I saw these valuse to be finite and not None . And I was further able to fine tune the model . \r\n\r\nDo you have some insight on this ? \r\n\r\n",
"There is to 95% nothing wrong with the tokenizer. You are just using it the wrong way. Please give us an example that leads to None. The `char_to_token` returns none when you ask for a whitespace position and you use a tokenizer that does not support whitespace.",
"Sure, Try these 2 code snippets , \r\n\r\nwith XLNetTokenizerFast: \r\n```\r\nimport json\r\nfrom pathlib import Path\r\nfrom transformers import XLNetTokenizerFast, XLNetForQuestionAnsweringSimple\r\n# from transformers import BigBirdTokenizerFast, BigBirdForQuestionAnswering\r\n\r\nimport torch\r\n\r\ndef read_squad(path):\r\n path = Path(path)\r\n with open(path, 'rb') as f:\r\n squad_dict = json.load(f)\r\n\r\n contexts = []\r\n questions = []\r\n answers = []\r\n for group in squad_dict['data']:\r\n for passage in group['paragraphs']:\r\n context = passage['context']\r\n for qa in passage['qas']:\r\n question = qa['question']\r\n for answer in qa['answers']:\r\n contexts.append(context)\r\n questions.append(question)\r\n answers.append(answer)\r\n\r\n return contexts, questions, answers\r\n\r\n\r\ntrain_contexts, train_questions, train_answers = read_squad('train-v2.0.json')\r\nval_contexts, val_questions, val_answers = read_squad('dev-v2.0.json')\r\n\r\n\r\ndef add_end_idx(answers, contexts):\r\n for answer, context in zip(answers, contexts):\r\n gold_text = answer['text']\r\n start_idx = answer['answer_start']\r\n end_idx = start_idx + len(gold_text)\r\n\r\n # sometimes squad answers are off by a character or two – fix this\r\n if context[start_idx:end_idx] == gold_text:\r\n answer['answer_end'] = end_idx\r\n elif context[start_idx - 1:end_idx - 1] == gold_text:\r\n answer['answer_start'] = start_idx - 1\r\n answer['answer_end'] = end_idx - 1 # When the gold label is off by one character\r\n elif context[start_idx - 2:end_idx - 2] == gold_text:\r\n answer['answer_start'] = start_idx - 2\r\n answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters\r\n\r\n\r\nadd_end_idx(train_answers, train_contexts)\r\nadd_end_idx(val_answers, val_contexts)\r\n\r\ndevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\r\n\r\nmodel_name = \"xlnet-base-cased\"\r\ntokenizer = XLNetTokenizerFast.from_pretrained(model_name)\r\nmodel = XLNetForQuestionAnsweringSimple.from_pretrained(model_name)\r\nmodel.to(device)\r\n\r\ntrain_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512)\r\nval_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512)\r\n\r\n\r\ndef add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n for i in range(len(answers)):\r\n start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))\r\n end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))\r\n\r\n # if None, the answer passage has been truncated\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length\r\n if end_positions[-1] is None:\r\n end_positions[-1] = tokenizer.model_max_length\r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n\r\n\r\nadd_token_positions(train_encodings, train_answers)\r\nadd_token_positions(val_encodings, val_answers)\r\n```\r\n\r\nWith roberta-base:\r\n```\r\nimport json\r\nfrom pathlib import Path\r\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\n# from transformers import BigBirdTokenizerFast, BigBirdForQuestionAnswering\r\n\r\nimport torch\r\n\r\ndef read_squad(path):\r\n path = Path(path)\r\n with open(path, 'rb') as f:\r\n squad_dict = json.load(f)\r\n\r\n contexts = []\r\n questions = []\r\n answers = []\r\n for group in squad_dict['data']:\r\n for passage in 
group['paragraphs']:\r\n context = passage['context']\r\n for qa in passage['qas']:\r\n question = qa['question']\r\n for answer in qa['answers']:\r\n contexts.append(context)\r\n questions.append(question)\r\n answers.append(answer)\r\n\r\n return contexts, questions, answers\r\n\r\n\r\ntrain_contexts, train_questions, train_answers = read_squad('train-v2.0.json')\r\nval_contexts, val_questions, val_answers = read_squad('dev-v2.0.json')\r\n\r\n\r\ndef add_end_idx(answers, contexts):\r\n for answer, context in zip(answers, contexts):\r\n gold_text = answer['text']\r\n start_idx = answer['answer_start']\r\n end_idx = start_idx + len(gold_text)\r\n\r\n # sometimes squad answers are off by a character or two – fix this\r\n if context[start_idx:end_idx] == gold_text:\r\n answer['answer_end'] = end_idx\r\n elif context[start_idx - 1:end_idx - 1] == gold_text:\r\n answer['answer_start'] = start_idx - 1\r\n answer['answer_end'] = end_idx - 1 # When the gold label is off by one character\r\n elif context[start_idx - 2:end_idx - 2] == gold_text:\r\n answer['answer_start'] = start_idx - 2\r\n answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters\r\n\r\n\r\nadd_end_idx(train_answers, train_contexts)\r\nadd_end_idx(val_answers, val_contexts)\r\n\r\ndevice = torch.device('cpu')\r\n\r\nmodel_name = \"roberta-base\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_name)\r\nmodel.to(device)\r\n\r\ntrain_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512)\r\nval_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512)\r\n\r\n\r\ndef add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n for i in range(len(answers)):\r\n start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))\r\n end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))\r\n\r\n # if None, the answer passage has been truncated\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length\r\n if end_positions[-1] is None:\r\n end_positions[-1] = tokenizer.model_max_length\r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n\r\n\r\nadd_token_positions(train_encodings, train_answers)\r\nadd_token_positions(val_encodings, val_answers)\r\n```\r\n\r\nIn the two snippets above, check the value of \"start_positions\" , \"end_positions\" variables in \"add_token_positions\" function after its final iteration , and compare them . Its tokenizer.model_max_length for most cases in XLNet one. \r\nNow why is it that in tokenizer (specifically , encodings.char_to_token(i, answers[i]['answer_start'])) its returning finite values in roberta , and with other tokenizer its None . All that was changed was Tokenizer \r\n[ encodings.char_to_token(i, answers[i]['answer_start']) is None for xlnet model and , not for roberta . \r\n\r\n",
"Please give us an example of your text that produces None. You have already shown us your code.\r\n",
"[train-v2.0.txt](https://github.com/huggingface/transformers/files/6671676/train-v2.0.txt)\r\nConsider this slice of Squad2.0 dataset , roughly 65 contexts and their qas . (change file from .txt to json ) \r\nI'm working on the complete Squad2.0 dataset , but this json will reproduce the issue.",
"much help ",
"Can someone help me understand the purpose \"add_token_position\" function? I've read multiple articles and watched videos and they all mention \"we need to add the token position\" but I honestly don't understand that explanation. For example, if we try and fine-tune a bert-base-uncased the start_position for train_context[0] is 67 and the end_position is 70 (subtracting -1 to account for space). I'm fairly certain these numbers represent indices but indices of what and in what list? Thanks for your help. ",
"Any update @cronoik on why XLnet tokenizer is returning None because it still is returning the same.",
"Sorry, I found the reaction of @akar5h very unfriendly and decided to ignore this issue I'll look into it later."
] | 1,623 | 1,653 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- transformers version: 4.6.1
- Platform: Windows
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1, GPU enabled
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): XLNet, "xlnet-base-cased"
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
my own modified script, but the issue can be reproduced as given below.
`encodings.char_to_token(i, answers[i]['answer_start'])`
The error I get is:
`ValueError: char_to_token() is not available when using Python based tokenizers`
- This issue is very similar to #9326
The tasks I am working on is:
* [SQUAD ] an official GLUE/SQUaD task: (give the name)
* A self-curated QA dataset in SQUaD format
Steps to reproduce the behavior:
Run the code snippet given below:
```
import json
from pathlib import Path
from transformers import XLNetTokenizer, XLNetForQuestionAnsweringSimple
import torch
def read_squad(path):
path = Path(path)
with open(path, 'rb') as f:
squad_dict = json.load(f)
contexts = []
questions = []
answers = []
for group in squad_dict['data']:
for passage in group['paragraphs']:
context = passage['context']
for qa in passage['qas']:
question = qa['question']
for answer in qa['answers']:
contexts.append(context)
questions.append(question)
answers.append(answer)
return contexts, questions, answers
train_contexts, train_questions, train_answers = read_squad('train-v2.0.json')
val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json')
def add_end_idx(answers, contexts):
for answer, context in zip(answers, contexts):
gold_text = answer['text']
start_idx = answer['answer_start']
end_idx = start_idx + len(gold_text)
# sometimes squad answers are off by a character or two – fix this
if context[start_idx:end_idx] == gold_text:
answer['answer_end'] = end_idx
elif context[start_idx - 1:end_idx - 1] == gold_text:
answer['answer_start'] = start_idx - 1
answer['answer_end'] = end_idx - 1 # When the gold label is off by one character
elif context[start_idx - 2:end_idx - 2] == gold_text:
answer['answer_start'] = start_idx - 2
answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters
add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model_name = "xlnet-base-cased"
tokenizer = XLNetTokenizer.from_pretrained(model_name)
model = XLNetForQuestionAnsweringSimple.from_pretrained(model_name)
model.to(device)
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512)
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512)
def add_token_positions(encodings, answers):
start_positions = []
end_positions = []
for i in range(len(answers)):
start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
# if None, the answer passage has been truncated
if start_positions[-1] is None:
start_positions[-1] = tokenizer.model_max_length
if end_positions[-1] is None:
end_positions[-1] = tokenizer.model_max_length
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
```
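As the error message indicates, `char_to_token` is only implemented for the Rust-backed ("fast") tokenizers; a minimal sketch of the switch (reusing the variables from the script above):
```python
from transformers import XLNetTokenizerFast

fast_tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
enc = fast_tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length=512)
enc.char_to_token(0, train_answers[0]["answer_start"])  # available on fast tokenizers
```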

<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
- `encodings.char_to_token(i, answers[i]['answer_start'])` should return a valid token index
- `char_to_token` should not be None in this case, just as it is not for other tokenizers
`ValueError: char_to_token() is not available when using Python based tokenizers`
`encodings._encoding` seems to be None
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12201/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12200/comments | https://api.github.com/repos/huggingface/transformers/issues/12200/events | https://github.com/huggingface/transformers/issues/12200 | 922,675,185 | MDU6SXNzdWU5MjI2NzUxODU= | 12,200 | [Docs] Broken Link in the Benchmarks.rst | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Don't hesitate to submit a PR with a fix!"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Issue info
In the documentation, on the [Benchmarks page](https://huggingface.co/transformers/benchmarks.html), the last link is broken due to the reordering of the examples folders.
It currently reads:
```
With the new `benchmark` tools, it is easier than ever to share your benchmark results with the community
:prefix_link:`here <examples/benchmarking/README.md>`.
```
should be changed to
```
With the new `benchmark` tools, it is easier than ever to share your benchmark results with the community
- :prefix_link:`PyTorch benchmarking results <examples/pytorch/benchmarking/README.md>`.
- :prefix_link:`TensorFlow benchmarking results <examples/tensorflow/benchmarking/README.md>`.
```
Alternatively, a separate documentation page could be created for sharing model benchmarking results. Please let me know if I can help, or if this is already covered by another ongoing effort.
## Who can help?
@patrickvonplaten
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12199/comments | https://api.github.com/repos/huggingface/transformers/issues/12199/events | https://github.com/huggingface/transformers/pull/12199 | 922,582,895 | MDExOlB1bGxSZXF1ZXN0NjcxNDQwMjYx | 12,199 | [WIP] TensorFlow variant of DataCollatorForLanguageModeling. | {
"login": "aromans",
"id": 14765123,
"node_id": "MDQ6VXNlcjE0NzY1MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14765123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aromans",
"html_url": "https://github.com/aromans",
"followers_url": "https://api.github.com/users/aromans/followers",
"following_url": "https://api.github.com/users/aromans/following{/other_user}",
"gists_url": "https://api.github.com/users/aromans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aromans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aromans/subscriptions",
"organizations_url": "https://api.github.com/users/aromans/orgs",
"repos_url": "https://api.github.com/users/aromans/repos",
"events_url": "https://api.github.com/users/aromans/events{/privacy}",
"received_events_url": "https://api.github.com/users/aromans/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks a lot for your PR!\r\n\r\nBefore I review more in detail, could you provide an example of use of this API? Data-collators are very PyTorch-ic so I want to make sure this is something that can actually be used in TensorFlow without too many contorsions.",
 Thanks a lot for your PR!">
"> Thanks a lot for your PR!\r\n> \r\n> Before I review more in detail, could you provide an example of use of this API? Data-collators are very PyTorch-ic so I want to make sure this is something that can actually be used in TensorFlow without too many contortions.\r\n\r\nAbsolutely! We are currently in the process of pretraining Bert with a custom dataset in a domain-specific language. We are going to make use of the TFBertForPreTraining Model to achieve this as well as a custom-trained Tokenizer. (https://huggingface.co/transformers/model_doc/bert.html#tfbertforpretraining)\r\nSpecifically we started with the collator for language modeling to make our training data consistent with MLM and NSP tasks. The collator provided that functionality along with batching, but only for PyTorch. \r\nWe wanted to provide the functionality that existed for PyTorch for TensorFlow users, and plan on completing the entire API for TensorFlow support if desired.\r\nIf you need specific implementation details we are willing to expand further. ",
"Do you have an example of data preprocessing a bit similar to the [run_mlm](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) script we have in PyTorch? That would be helpful to see this TF data collator in action.",
"> Do you have an example of data preprocessing a bit similar to the [run_mlm](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) script we have in PyTorch? That would be helpful to see this TF data collator in action.\r\n\r\nWe are going to move this PR into a WIP so we can address your question. ",
"In answer to your question @sgugger, our objective is to integrate the collator with TFTrainer. Currently PyTorch users enjoy this functionality but TensorFlow users do not have the built-in functionality that deserves to be there (unless we are mistaken, and if so apologize). Our idea is to implement the following change in TFTrainer/get_train_tfdataset:\r\n\r\n```\r\nif tf_collate_fn is None:\r\n ds = (\r\n self.train_dataset.repeat()\r\n .shuffle(self.num_train_examples, seed=self.args.seed)\r\n .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\nelse\r\n ds = (\r\n self.train_dataset.repeat()\r\n .shuffle(self.num_train_examples, seed=self.args.seed)\r\n .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)\r\n .map(tf_collate_fn)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\n```\r\n\r\nor we could implement the dataset conversion in the collator:\r\n\r\n```\r\nif not tf_collate_fn is None:\r\n ds = tf_collate_fn(ds)\r\nelse:\r\n ds = (\r\n self.train_dataset.repeat()\r\n .shuffle(self.num_train_examples, seed=self.args.seed)\r\n .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\n```\r\nThis would provide an avenue for TensorFlow users to train any models requiring collator functionality in TFTrainer.\r\n\r\nAny advice or alternative solutions are welcome! ",
"We plan to drop the TFTrainer pretty soon to the profit of using Keras, but this could still be useful as we will still rely on the datasets.\r\nI think the best API would be to apply it to a TensorFlow dataset but @Rocketknight1 might have other views.",
"Our intention is to drop TFTrainer to do training through Keras instead, and as a result in TF we want the input to come from tf.data.Dataset objects rather than custom collators.\r\n\r\nA lot of things like multi-GPU or TPU training in Keras expect tf.data.Dataset input, and will coerce the input into a Dataset if you don't supply it as one.",
"@Rocketknight1 Understood. So providing a collator that could be passed to Dataset.map is the way to go if we want the option. Or are you saying that such an operation should be performed before TFTrainer?\n\nI just want to clarify before we continue with a PR. ",
"We want to avoid TFTrainer entirely in future, so yeah - any kind of custom collator should return a Dataset, or should work through Dataset.map(). This is something we're in the process of updating through our library - there's still a lot of usages of TFTrainer that I'm cleaning up over time!",
"Thank you for your quick response! \n\nWe will continue with the PR going down the .map route. Even though TFTrainer is depreciating, some may still find it beneficial in the meantime. \n\nCheers!",
"@LysandreJik @Rocketknight1 @sgugger\r\n\r\n@sdwalker62 and I have made our working commit for the data_tf_collator.py functioning with tf.data.Dataset. We had quite a few commits within our test-branch that has slightly cluttered the PR, so if you want us to make another PR to help focus in on the code that matters most let us know.\r\nOtherwise, the two scripts to primarily look at are data_tf_collator.py and test_data_tf_collator.py. \r\n\r\nLet us know if you have any questions. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey! This isn't something I want to go stale, but I lost track of it when I saw you were still adding commits! Are you happy with it as-is, and ready for a review?",
"That is no problem! And we are ready for a review at your convenience. ",
"Hi! I'm reviewing now. This is actually quite timely - we're planning a general revamp of all the data collators to support both Tensorflow and JAX, as well as support for our Dataset objects to automatically convert to `tf.data.Dataset`, which will almost certainly include the new data collation functions as part of the `tf.data` pipeline.\r\n\r\nThe downside is that we haven't decided how exactly to structure the code yet, so we might ask you to move or rename this class, but hopefully we can use almost all of the code here as part of the revamp!",
"> Hi! I'm reviewing now. This is actually quite timely - we're planning a general revamp of all the data collators to support both Tensorflow and JAX, as well as support for our Dataset objects to automatically convert to `tf.data.Dataset`, which will almost certainly include the new data collation functions as part of the `tf.data` pipeline.\r\n> \r\n> The downside is that we haven't decided how exactly to structure the code yet, so we might ask you to move or rename this class, but hopefully we can use almost all of the code here as part of the revamp!\r\n\r\nThat is perfect, we are glad we could help out! We will happily move/rename/or restructure the code in any way that best suits your revamp and the rest of your codebase :smile: \r\n",
"So I've been thinking this over a bit more - my guess is that `tokenizer.pad` probably cannot/shouldn't be compiled with tf.function. It's effectively a totally arbitrary function, and every new model we add might have a different one, so we couldn't make any guarantee that AutoGraph will play nicely with it, even though in testing it seemed to work for me on a few common cases. For the same reasons, we shouldn't try to reimplement `tokenizer.pad` like you did with `tf_pad_tokens`, because at any moment a model could come along that would require a fresh rewrite of that. \r\n\r\nGiven that we need to call a block of arbitrary Python code, that means we can't guarantee that the collation function will be compilable with `tf.function` or `Dataset.map()`, although we could still use it in a `tf.data` pipeline by either using it when the data is loaded with `from_generator`, or wrapping it in `py_function` to allow it to be used in `Dataset.map()`.\r\n\r\nI think we should go for the following:\r\n\r\n1. The function should take input as either tf.Tensor or nested (possibly variable-length) lists. It could optionally accept `np.ndarray` or `tf.ragged.RaggedTensor` too.\r\n2. No `tf.function` anywhere - code is pure Python\r\n3. We can possibly have some kind of 'master' function that takes an argument like `return_tensors` and will call the framework-specific collators based on the argument value, but this is something we can implement later.\r\n\r\nThat's a lot of changes, though I'm hopeful we could keep a lot of your code here as-is. Do you think it makes sense, or do you have any objections to any of it?",
"In the meantime, I'm going to be working on this too - I'll take a different `DataCollator` class and try to write a TF equivalent of it tomorrow. If I run into any issues there I'll let you know.",
"Hey, I've rewritten a few of the classes in our preferred style, but left the language modelling ones alone for now, you can see them here: https://github.com/huggingface/transformers/pull/13105\r\n\r\nWe'd like to push ahead with this fairly soon, so if you'd like, you can try adjusting this PR to a similar style. If not, we can close this PR and I'll add the rest to my PR tomorrow. Either way, thank you for the contribution - whether or not we use the code directly, this PR was helpful in drawing our attention to the problem and to possible approaches for writing data collators that support frameworks besides Torch!",
"> Hey, I've rewritten a few of the classes in our preferred style, but left the language modelling ones alone for now, you can see them here: #13105\r\n> \r\n> We'd like to push ahead with this fairly soon, so if you'd like, you can try adjusting this PR to a similar style. If not, we can close this PR and I'll add the rest to my PR tomorrow. Either way, thank you for the contribution - whether or not we use the code directly, this PR was helpful in drawing our attention to the problem and to possible approaches for writing data collators that support frameworks besides Torch!\r\n\r\nThis afternoon we started finalizing and adding some of those changes you've suggested in another branch. Once done, we will also adjust the code to match your preferred style shown in your new PR. We can merge those changes into the this PR here and you can feel free to just use this code in your PR or as a starting point for your revisions. Either way, no hard feelings, and we are glad we could help out in any way!",
"I'm happy for you to submit your code, and I'll avoid any classes you're touching when I make my own PR! Which ones would you like to handle?",
"Hey! We'd like to push to get this in soon, so we can proceed with a general overhaul of our TF data pipelines. At the same time, I know you're contributing code for free, and the rush is mostly caused by my own disorganization, so I don't want to force deadlines on you or anything! \r\n\r\nWe'd like to move on and merge everything by Monday, so if you want to add any code today or this weekend, I'll grab it at that point and pull it into my PR. If not, then don't worry - what you've added up to now will already be quite helpful for the final PR, and we'll make sure that both of you get correct author/contributor credits for it regardless!",
"Hey there! 😃 We just made some code changes to integrate more closely with your style and had all of our tests pass. We are finishing up lunch and then will go through a final review before updating the PR. ",
"@sdwalker62 and I just pushed up our revisions based on your review and recent PR. We changed the name of the file to TFDataCollatorForMaskedLanguageModeling. Hopefully, this helps with your upcoming merge this Monday! Let us know if you need anything else, and we look forward to contributing to more things in the future! :smile: ",
"Thank you! We're just finishing off an upstream PR to `Datasets`, at which point I'll be merging your code into the other DataCollator PR and getting the rest of the team to review it.",
"Hey, just to update you: The code has been incorporated into my local copy, and I'm working on adding some other methods we need before I push it all to the other PR. I'll tag you as soon as that commit is in!",
"Code is all in at #13105. I'm very likely to steal some of the test code from this PR too once we incorporate tests for all the classes, so I'll make sure you're acknowledged as contributors for that too!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,632 | 1,632 | CONTRIBUTOR | null | Co-authored-by: Dalton Walker <[email protected]>
# What does this PR do?
We didn't see any support for TensorFlow within the DataCollatorForLanguageModeling data class. Integrating directly with TensorFlow seems useful for TensorFlow users and avoids the necessity for tensor conversion.
This PR adds a TFDataCollatorForLanguageModeling data class that integrates directly with TensorFlow tensors and paves the way for further TFDataCollator conversions.
(Reopened PR #12179)
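As a rough sketch of the intended usage (the function name and token ids here are hypothetical, not this PR's final API), such a collator can be applied inside a `tf.data` pipeline:
```python
import tensorflow as tf

def tf_mlm_collate(input_ids, mask_token_id=103, mlm_probability=0.15):
    # Hypothetical sketch: mask ~15% of tokens for MLM (special tokens are
    # not excluded here, for brevity) and build the matching labels.
    labels = tf.identity(input_ids)
    masked = tf.random.uniform(tf.shape(input_ids)) < mlm_probability
    input_ids = tf.where(masked, tf.fill(tf.shape(input_ids), mask_token_id), input_ids)
    # Unmasked positions get label -100 so the MLM loss ignores them.
    labels = tf.where(masked, labels, tf.fill(tf.shape(labels), -100))
    return {"input_ids": input_ids, "labels": labels}

# Example: plug the collator into a tf.data pipeline via Dataset.map.
ds = (
    tf.data.Dataset.from_tensor_slices(tf.constant([[101, 2023, 2003, 7953, 102]]))
    .batch(1)
    .map(tf_mlm_collate)
)
```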
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @Rocketknight1 @sgugger
Anyone in the community is free to review the PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12199",
"html_url": "https://github.com/huggingface/transformers/pull/12199",
"diff_url": "https://github.com/huggingface/transformers/pull/12199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12199.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12198/comments | https://api.github.com/repos/huggingface/transformers/issues/12198/events | https://github.com/huggingface/transformers/pull/12198 | 922,568,134 | MDExOlB1bGxSZXF1ZXN0NjcxNDI3MzEx | 12,198 | Enabling AutoTokenizer for HubertConfig. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR enables `AutoTokenizer` to work with `HubertConfig`, so Hubert checkpoints can be loaded through the auto classes.
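A minimal sketch of the behavior this enables (the checkpoint name is illustrative; it assumes a fine-tuned Hubert repo on the Hub that ships CTC tokenizer files):
```python
from transformers import AutoTokenizer

# Assumption: this Hubert checkpoint's repo contains tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("facebook/hubert-large-ls960-ft")
print(type(tokenizer).__name__)  # expected: Wav2Vec2CTCTokenizer
```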
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12198",
"html_url": "https://github.com/huggingface/transformers/pull/12198",
"diff_url": "https://github.com/huggingface/transformers/pull/12198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12198.patch",
"merged_at": 1623853726000
} |
https://api.github.com/repos/huggingface/transformers/issues/12197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12197/comments | https://api.github.com/repos/huggingface/transformers/issues/12197/events | https://github.com/huggingface/transformers/issues/12197 | 922,558,528 | MDU6SXNzdWU5MjI1NTg1Mjg= | 12,197 | XLM-RoBERTa MLM Trainer not saving 'sentencepiece.bpe.model' file | {
"login": "gabrieltardochi",
"id": 60230715,
"node_id": "MDQ6VXNlcjYwMjMwNzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/60230715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabrieltardochi",
"html_url": "https://github.com/gabrieltardochi",
"followers_url": "https://api.github.com/users/gabrieltardochi/followers",
"following_url": "https://api.github.com/users/gabrieltardochi/following{/other_user}",
"gists_url": "https://api.github.com/users/gabrieltardochi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabrieltardochi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabrieltardochi/subscriptions",
"organizations_url": "https://api.github.com/users/gabrieltardochi/orgs",
"repos_url": "https://api.github.com/users/gabrieltardochi/repos",
"events_url": "https://api.github.com/users/gabrieltardochi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabrieltardochi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Without seeing your training script, it's impossible to diagnose what went wrong. I just tried a `tokenizer.save_pretrained(...)` with this model and I get all the files.",
"Hey @sgugger, thanks for the quick reply! I was making a very stupid mistake(typo) and haven't noticed it until now.\r\nI was using 'roberta-base' instead of 'xlm-roberta-base', that is why there was no 'sentencepiece.bpe.model' file when saving it.\r\nSorry for taking your time!"
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info (Colab)
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using: xlm-roberta-base
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Example: [RoBERTa/BERT/DistilBERT and masked language modeling, using HuggingFace Trainer with your own train file](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Dataset: csv file with only one column named "text", containing one sentence per row.
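(For reference, a minimal way to create such a file — the rows below are illustrative, not the actual dataset:)
```python
import pandas as pd

# Illustrative sentences; the real dataset has one sentence per row.
pd.DataFrame({"text": ["First sentence.", "Second sentence."]}).to_csv(
    "custom_train_dset.csv", index=False
)
```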
## To reproduce
Steps to reproduce the behavior:
1. Follow the instructions displayed in this [pytorch language-modeling examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) page (RoBERTa/BERT/DistilBERT and masked language modeling, using HuggingFace Trainer with your own train file).
Used command:
`!python train.py --model_name_or_path xlm-roberta-base --train_file custom_train_dset.csv --save_steps 300000 --line_by_line --do_train --output_dir xlm-roberta-base-mlm-tuned-example`
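(As a sanity check independent of the Trainer — the output directory is illustrative — saving the tokenizer directly should write the SentencePiece file:)
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.save_pretrained("xlm-roberta-base-mlm-tuned-example")
# For XLM-R this should produce sentencepiece.bpe.model alongside
# tokenizer.json and the special-tokens files.
```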
## Expected behavior
I did the very same thing, but with less data (same custom dataset, fewer rows) two days ago (2021/06/14) and I got the desired output:

Now, this is the output that I am getting (**wrong**):

@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12197/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12196/comments | https://api.github.com/repos/huggingface/transformers/issues/12196/events | https://github.com/huggingface/transformers/issues/12196 | 922,525,627 | MDU6SXNzdWU5MjI1MjU2Mjc= | 12,196 | Where I can find official pretrained weights of SOP in Albert and NSP in Bert? | {
"login": "s4sarath",
"id": 10637096,
"node_id": "MDQ6VXNlcjEwNjM3MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/10637096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s4sarath",
"html_url": "https://github.com/s4sarath",
"followers_url": "https://api.github.com/users/s4sarath/followers",
"following_url": "https://api.github.com/users/s4sarath/following{/other_user}",
"gists_url": "https://api.github.com/users/s4sarath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s4sarath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s4sarath/subscriptions",
"organizations_url": "https://api.github.com/users/s4sarath/orgs",
"repos_url": "https://api.github.com/users/s4sarath/repos",
"events_url": "https://api.github.com/users/s4sarath/events{/privacy}",
"received_events_url": "https://api.github.com/users/s4sarath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Is there any update?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik - Can you help me here :)?",
"Hi @s4sarath, sorry for the delayed response. If the checkpoints on the hub do not satisfy you (I see the SOP/NSP layers are indeed lacking), conversion scripts are available for each model:\r\n\r\n- [BERT](https://github.com/huggingface/transformers/tree/master/src/transformers/models/bert), see the `convert_*` scripts\r\n- [ALBERT](https://github.com/huggingface/transformers/blob/master/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py)\r\n\r\nI confirm this successfully exports the full model, including the NSP/SOP weights:\r\n\r\n#### ALBERT\r\n\r\n```bash\r\nwget https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz \r\ntar -xzf albert_base_v2.tar.gz \r\npython convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=albert_base/model.ckpt-best --albert_config_file=albert_base/albert_config.json --pytorch_dump_path=albert_base/pytorch_model.bin\r\ncp albert_base/albert_config.json albert_base/config.json \r\n```\r\n```python\r\n>>> from transformers import TFAlbertForPreTraining\r\n>>> model = TFAlbertForPreTraining.from_pretrained(\"albert_base\", from_pt=True)\r\n[...]\r\nAll the weights of TFAlbertForPreTraining were initialized from the PyTorch model.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFAlbertForPreTraining for predictions without further training.\r\n```\r\n\r\n#### BERT\r\n\r\nSame for BERT:\r\n\r\n```bash\r\nwget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\r\nunzip uncased_L-12_H-768_A-12.zip \r\npython convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=uncased_L-12_H-768_A-12/bert_model.ckpt --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json --pytorch_dump_path=uncased_L-12_H-768_A-12/pytorch_model.bin\r\n```\r\n```python\r\n>>> from transformers import TFBertForPreTraining\r\n>>> bert = TFBertForPreTraining.from_pretrained(\"uncased_L-12_H-768_A-12\", from_pt=True)\r\n[...[\r\nAll the weights of TFBertForPreTraining were initialized from the PyTorch model.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForPreTraining for predictions without further training.\r\n```\r\n\r\nHope that helps.\r\n\r\n\r\n\r\n",
"Thanks Lysandre. No worries.\n\nI kind of did same hack. But was wondering, isit something that is supposed\nto be a part of official model loading.\n\nThanks\nSarath\n\nOn Wed, 11 Aug, 2021, 8:12 pm Lysandre Debut, ***@***.***>\nwrote:\n\n> Hi @s4sarath <https://github.com/s4sarath>, sorry for the delayed\n> response. If the checkpoints on the hub do not satisfy you (I see the\n> SOP/NSP layers are indeed lacking), conversion scripts are available for\n> each model:\n>\n> - BERT\n> <https://github.com/huggingface/transformers/tree/master/src/transformers/models/bert>,\n> see the convert_* scripts\n> - ALBERT\n> <https://github.com/huggingface/transformers/blob/master/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py>\n>\n> I confirm this successfully exports the full model, including the NSP/SOP\n> weights:\n> ALBERT\n>\n> wget https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz\n> tar -xzf albert_base_v2.tar.gz\n> python convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=albert_base/model.ckpt-best --albert_config_file=albert_base/albert_config.json --pytorch_dump_path=albert_base/pytorch_model.bin\n> cp albert_base/albert_config.json albert_base/config.json\n>\n> >>> from transformers import TFAlbertForPreTraining>>> model = TFAlbertForPreTraining.from_pretrained(\"albert_base\", from_pt=True)\n> [...]All the weights of TFAlbertForPreTraining were initialized from the PyTorch model.If your task is similar to the task the model of the checkpoint was trained on, you can already use TFAlbertForPreTraining for predictions without further training.\n>\n> BERT\n>\n> Same for BERT:\n>\n> wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\n> unzip uncased_L-12_H-768_A-12.zip\n> python convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=uncased_L-12_H-768_A-12/bert_model.ckpt --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json --pytorch_dump_path=uncased_L-12_H-768_A-12/pytorch_model.bin\n>\n> >>> from transformers import TFBertForPreTraining>>> bert = TFBertForPreTraining.from_pretrained(\"uncased_L-12_H-768_A-12\", from_pt=True)\n> [...[All the weights of TFBertForPreTraining were initialized from the PyTorch model.If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForPreTraining for predictions without further training.\n>\n> Hope that helps.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12196#issuecomment-896887301>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACRE6KCCQLTEL7JVTE5YAKDT4KD4ZANCNFSM46ZJP3RA>\n> .\n> Triage notifications on the go with GitHub Mobile for iOS\n> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>\n> or Android\n> <https://play.google.com/store/apps/details?id=com.github.android&utm_campaign=notification-email>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,631 | 1,631 | NONE | null | Hi Guys,
I was checking for the pretrained weights (the 2-layer classifier head) of ```SOP``` in ```Albert``` and ```NSP``` in ```Bert```.
It seems these heads are initialized randomly every time.
Can we have the official weights loaded here, or are they not available from the official models?
Can anyone clarify, please?
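For example (a minimal check, using the standard Hub checkpoints), loading the pretraining heads surfaces a warning listing any weights that had to be randomly initialized:
```python
from transformers import AlbertForPreTraining, BertForPreTraining

# If the SOP/NSP classifier weights are absent from a checkpoint,
# from_pretrained logs them as newly (randomly) initialized.
albert = AlbertForPreTraining.from_pretrained("albert-base-v2")
bert = BertForPreTraining.from_pretrained("bert-base-uncased")
```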
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12196/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12195/comments | https://api.github.com/repos/huggingface/transformers/issues/12195/events | https://github.com/huggingface/transformers/issues/12195 | 922,508,786 | MDU6SXNzdWU5MjI1MDg3ODY= | 12,195 | Batched pipeline for NER | {
"login": "dinani65",
"id": 75939454,
"node_id": "MDQ6VXNlcjc1OTM5NDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/75939454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dinani65",
"html_url": "https://github.com/dinani65",
"followers_url": "https://api.github.com/users/dinani65/followers",
"following_url": "https://api.github.com/users/dinani65/following{/other_user}",
"gists_url": "https://api.github.com/users/dinani65/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dinani65/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dinani65/subscriptions",
"organizations_url": "https://api.github.com/users/dinani65/orgs",
"repos_url": "https://api.github.com/users/dinani65/repos",
"events_url": "https://api.github.com/users/dinani65/events{/privacy}",
"received_events_url": "https://api.github.com/users/dinani65/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This has been asked many times before, see #11244. However, the corresponding PR was not merged, see [this comment](https://github.com/huggingface/transformers/pull/11251#pullrequestreview-637488364) for the reason."
] | 1,623 | 1,623 | 1,623 | NONE | null | Hi,
Is there a way to run batches with the NER pipeline rather than just one example at a time?
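For context, the pipeline already accepts a list of texts, but (at the time of this issue) it processes them one by one rather than as a true batch — a minimal sketch:
```python
from transformers import pipeline

ner = pipeline("ner")  # downloads a default NER model
texts = ["Hugging Face is based in New York.", "Sylvain works at Hugging Face."]
results = ner(texts)  # list in, list of entity lists out; not batched internally
```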
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12195/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12194/comments | https://api.github.com/repos/huggingface/transformers/issues/12194/events | https://github.com/huggingface/transformers/issues/12194 | 922,461,868 | MDU6SXNzdWU5MjI0NjE4Njg= | 12,194 | LayoutXLM not loaded | {
"login": "tommasodelorenzo",
"id": 57231812,
"node_id": "MDQ6VXNlcjU3MjMxODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/57231812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommasodelorenzo",
"html_url": "https://github.com/tommasodelorenzo",
"followers_url": "https://api.github.com/users/tommasodelorenzo/followers",
"following_url": "https://api.github.com/users/tommasodelorenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/tommasodelorenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommasodelorenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommasodelorenzo/subscriptions",
"organizations_url": "https://api.github.com/users/tommasodelorenzo/orgs",
"repos_url": "https://api.github.com/users/tommasodelorenzo/repos",
"events_url": "https://api.github.com/users/tommasodelorenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommasodelorenzo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LayoutXLM is not yet supported by the AutoModel API. You can probably plug it into a `LayoutLMTokenizer` and a `LayoutLMForTokenClassification`. ",
"I had already tried that, but did not work.\r\n```\r\nmodel_name=\"microsoft/layoutxlm-base\"\r\ntokenizer = LayoutLMTokenizer.from_pretrained(model_name)\r\n```\r\nGives me the error\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-6-720ed3162350> in <module>\r\n 1 model_name=\"microsoft/layoutxlm-base\"\r\n----> 2 tokenizer = LayoutLMTokenizer.from_pretrained(model_name)\r\n\r\n/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 1717 logger.info(f\"loading file {file_path} from cache at {resolved_vocab_files[file_id]}\")\r\n 1718 \r\n-> 1719 return cls._from_pretrained(\r\n 1720 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs\r\n 1721 )\r\n\r\n/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)\r\n 1789 # Instantiate tokenizer.\r\n 1790 try:\r\n-> 1791 tokenizer = cls(*init_inputs, **init_kwargs)\r\n 1792 except OSError:\r\n 1793 raise OSError(\r\n\r\n/opt/conda/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert.py in __init__(self, vocab_file, do_lower_case, do_basic_tokenize, never_split, unk_token, sep_token, pad_token, cls_token, mask_token, tokenize_chinese_chars, strip_accents, **kwargs)\r\n 191 )\r\n 192 \r\n--> 193 if not os.path.isfile(vocab_file):\r\n 194 raise ValueError(\r\n 195 f\"Can't find a vocabulary file at path '{vocab_file}'. To load the vocabulary from a Google pretrained \"\r\n\r\n/opt/conda/lib/python3.8/genericpath.py in isfile(path)\r\n 28 \"\"\"Test whether a path is a regular file\"\"\"\r\n 29 try:\r\n---> 30 st = os.stat(path)\r\n 31 except (OSError, ValueError):\r\n 32 return False\r\n\r\nTypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\r\n```\r\nWhile, `model = LayoutLMModel.from_pretrained(model_name)` does not throw an error, but it seems not able to correctly initialize weights.\r\n```\r\nYou are using a model of type layoutxlm to instantiate a model of type layoutlm. 
This is not supported for all configurations of models and can yield errors.\r\nSome weights of the model checkpoint at microsoft/layoutxlm-base were not used when initializing LayoutLMModel: ['layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.running_var', 'layoutlmv2.encoder.layer.7.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.running_var', 'layoutlmv2.visual.backbone.fpn_output5.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.weight', 'layoutlmv2.encoder.layer.3.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.9.attention.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.10.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.3.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.running_var', 'layoutlmv2.embeddings.position_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.bias', 'layoutlmv2.embeddings.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.weight', 'layoutlmv2.encoder.layer.5.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.bias', 'layoutlmv2.visual.backbone.fpn_lateral3.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked', 
'layoutlmv2.encoder.layer.6.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.bias', 'layoutlmv2.encoder.layer.9.attention.self.value.weight', 'layoutlmv2.encoder.layer.5.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.running_mean', 'layoutlmv2.pooler.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.running_var', 'layoutlmv2.encoder.layer.9.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.bias', 'layoutlmv2.encoder.layer.0.intermediate.dense.bias', 'layoutlmv2.encoder.layer.10.output.dense.bias', 'layoutlmv2.visual_proj.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.2.attention.output.LayerNorm.weight', 'layoutlmv2.encoder.layer.0.attention.self.query.bias', 'layoutlmv2.encoder.layer.11.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.weight', 'layoutlmv2.encoder.layer.5.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.5.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.running_mean', 'layoutlmv2.embeddings.h_position_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.weight', 'layoutlmv2.encoder.layer.10.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.3.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.weight', 'layoutlmv2.encoder.layer.6.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.running_var', 'layoutlmv2.encoder.layer.9.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.8.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.weight', 'layoutlmv2.encoder.layer.3.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.bias', 'layoutlmv2.encoder.layer.9.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.weight', 'layoutlmv2.visual.pixel_mean', 'layoutlmv2.encoder.layer.9.output.LayerNorm.weight', 'layoutlmv2.encoder.layer.3.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.bias', 'layoutlmv2.visual.backbone.fpn_output2.weight', 'layoutlmv2.encoder.layer.8.attention.self.query.weight', 'layoutlmv2.encoder.layer.2.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.bias', 'layoutlmv2.encoder.layer.0.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.weight', 'layoutlmv2.encoder.layer.11.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.weight', 'layoutlmv2.encoder.layer.9.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.weight', 'layoutlmv2.encoder.layer.6.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.5.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.4.attention.self.query.bias', 'layoutlmv2.encoder.layer.2.attention.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.11.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.bias', 'layoutlmv2.encoder.layer.1.attention.self.key.weight', 'layoutlmv2.encoder.layer.3.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.weight', 'layoutlmv2.encoder.layer.8.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.weight', 'layoutlmv2.encoder.layer.0.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.4.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.fpn_output3.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.running_var', 'layoutlmv2.encoder.layer.6.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.running_mean', 'layoutlmv2.encoder.layer.11.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked', 
'layoutlmv2.encoder.layer.4.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.weight', 'layoutlmv2.encoder.layer.6.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.weight', 'layoutlmv2.visual.backbone.fpn_lateral2.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.bias', 'layoutlmv2.encoder.layer.11.attention.self.value.bias', 'layoutlmv2.visual.backbone.fpn_output3.weight', 'layoutlmv2.encoder.layer.6.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.weight', 'layoutlmv2.encoder.layer.7.attention.self.value.bias', 'layoutlmv2.encoder.layer.2.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.weight', 'layoutlmv2.encoder.layer.7.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.8.attention.self.key.weight', 'layoutlmv2.encoder.layer.10.attention.self.value.weight', 'layoutlmv2.encoder.layer.1.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.bias', 'layoutlmv2.encoder.layer.5.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.running_var', 'layoutlmv2.encoder.layer.3.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.running_var', 'layoutlmv2.encoder.layer.4.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.weight', 'layoutlmv2.encoder.layer.4.attention.self.query.weight', 'layoutlmv2.encoder.layer.8.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.weight', 
'layoutlmv2.encoder.layer.9.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.2.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.weight', 'layoutlmv2.encoder.layer.10.attention.self.key.bias', 'layoutlmv2.encoder.layer.9.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.weight', 'layoutlmv2.encoder.layer.1.intermediate.dense.bias', 'layoutlmv2.embeddings.x_position_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.10.attention.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.0.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.bias', 'layoutlmv2.encoder.layer.0.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.weight', 'layoutlmv2.encoder.layer.4.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.2.attention.self.query.weight', 'layoutlmv2.encoder.layer.8.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.weight', 'layoutlmv2.visual.backbone.fpn_lateral5.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.running_var', 'layoutlmv2.encoder.layer.0.output.dense.weight', 'layoutlmv2.encoder.layer.2.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.weight', 'layoutlmv2.encoder.layer.6.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.running_mean', 'layoutlmv2.encoder.layer.6.attention.self.query.weight', 'layoutlmv2.encoder.layer.3.attention.self.key.weight', 'layoutlmv2.visual_proj.bias', 'layoutlmv2.encoder.layer.10.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.weight', 'layoutlmv2.embeddings.position_ids', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.6.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.weight', 'layoutlmv2.encoder.layer.4.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.bias', 'layoutlmv2.encoder.layer.2.attention.self.key.weight', 'layoutlmv2.encoder.layer.10.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.5.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.bias', 'layoutlmv2.encoder.layer.9.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.9.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.weight', 'layoutlmv2.encoder.layer.0.output.dense.bias', 'layoutlmv2.encoder.layer.1.output.dense.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.11.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.8.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.1.attention.self.key.bias', 'layoutlmv2.encoder.layer.7.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.10.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.running_var', 'layoutlmv2.visual.backbone.fpn_output2.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.weight', 'layoutlmv2.encoder.layer.8.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.running_var', 'layoutlmv2.encoder.layer.0.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.weight', 'layoutlmv2.encoder.layer.6.intermediate.dense.weight', 'layoutlmv2.encoder.layer.8.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.weight', 'layoutlmv2.encoder.layer.6.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.running_var', 'layoutlmv2.encoder.layer.11.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.11.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.weight', 'layoutlmv2.encoder.layer.8.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.weight', 'layoutlmv2.encoder.layer.9.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.weight', 'layoutlmv2.visual.backbone.fpn_lateral5.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.bias', 'layoutlmv2.encoder.layer.11.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.weight', 'layoutlmv2.encoder.layer.7.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.bias', 'layoutlmv2.encoder.layer.9.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.bias', 'layoutlmv2.encoder.layer.7.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.weight', 'layoutlmv2.visual.backbone.fpn_lateral4.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.weight', 'layoutlmv2.visual.backbone.fpn_output4.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.weight', 'layoutlmv2.encoder.layer.7.attention.self.query.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.weight', 'layoutlmv2.encoder.layer.4.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.weight', 'layoutlmv2.encoder.layer.2.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.weight', 'layoutlmv2.encoder.layer.2.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.0.attention.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.10.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.0.attention.output.dense.bias', 'layoutlmv2.pooler.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.weight', 'layoutlmv2.encoder.layer.8.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.running_var', 'layoutlmv2.encoder.layer.11.output.dense.weight', 'layoutlmv2.embeddings.word_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.running_var', 'layoutlmv2.embeddings.y_position_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.running_var', 'layoutlmv2.encoder.layer.5.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.fpn_output4.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual_LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.bias', 'layoutlmv2.encoder.layer.4.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.weight', 'layoutlmv2.visual_LayerNorm.weight', 'layoutlmv2.encoder.layer.11.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.running_mean', 'layoutlmv2.encoder.layer.8.attention.self.value.weight', 'layoutlmv2.encoder.layer.10.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.running_var', 
'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.weight', 'layoutlmv2.encoder.layer.6.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.10.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.7.attention.self.key.bias', 'layoutlmv2.encoder.layer.1.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.weight', 'layoutlmv2.encoder.layer.4.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.running_var', 'layoutlmv2.encoder.layer.9.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.weight', 'layoutlmv2.encoder.layer.7.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.running_mean', 'layoutlmv2.encoder.layer.3.output.dense.weight', 'layoutlmv2.encoder.layer.6.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.1.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.weight', 'layoutlmv2.encoder.layer.4.attention.output.dense.weight', 'layoutlmv2.encoder.layer.8.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.running_var', 'layoutlmv2.encoder.layer.0.attention.output.LayerNorm.weight', 'layoutlmv2.encoder.layer.11.output.LayerNorm.weight', 'layoutlmv2.encoder.layer.5.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.2.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.weight', 'layoutlmv2.encoder.layer.4.intermediate.dense.bias', 'layoutlmv2.encoder.layer.6.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.running_var', 'layoutlmv2.encoder.layer.3.output.LayerNorm.weight', 'layoutlmv2.encoder.layer.7.output.dense.bias', 'layoutlmv2.encoder.layer.10.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.7.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.weight', 'layoutlmv2.encoder.layer.0.attention.self.value.bias', 'layoutlmv2.visual.pixel_std', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.running_var', 'layoutlmv2.encoder.layer.1.output.dense.bias', 'layoutlmv2.encoder.layer.5.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.weight', 'layoutlmv2.embeddings.token_type_embeddings.weight', 'layoutlmv2.encoder.layer.7.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.weight', 'layoutlmv2.encoder.layer.4.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.bias', 'layoutlmv2.encoder.layer.7.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.fpn_lateral3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.weight', 'layoutlmv2.encoder.layer.1.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.running_var', 'layoutlmv2.encoder.layer.5.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.running_var', 'layoutlmv2.encoder.layer.4.attention.self.value.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.1.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.11.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.5.output.dense.bias', 'layoutlmv2.encoder.layer.9.attention.self.query.weight', 'layoutlmv2.visual_segment_embedding', 'layoutlmv2.encoder.layer.1.attention.self.query.weight', 'layoutlmv2.encoder.layer.10.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.11.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked', 
'layoutlmv2.encoder.layer.6.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.2.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.weight', 'layoutlmv2.encoder.layer.2.output.dense.bias', 'layoutlmv2.encoder.layer.5.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.weight', 'layoutlmv2.encoder.layer.1.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.7.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.bias', 'layoutlmv2.embeddings.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.weight', 'layoutlmv2.encoder.layer.0.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.bias', 'layoutlmv2.encoder.layer.0.attention.self.key.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.11.intermediate.dense.bias', 'layoutlmv2.encoder.layer.3.intermediate.dense.bias', 'layoutlmv2.encoder.layer.2.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.bias', 'layoutlmv2.encoder.layer.10.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.bias', 
'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.running_var', 'layoutlmv2.encoder.layer.1.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.bias', 'layoutlmv2.encoder.layer.9.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.running_var', 'layoutlmv2.encoder.layer.8.attention.self.value.bias', 'layoutlmv2.encoder.layer.4.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.weight', 'layoutlmv2.visual.backbone.fpn_output5.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.weight', 'layoutlmv2.encoder.layer.1.intermediate.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.fpn_lateral4.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.running_var', 'layoutlmv2.encoder.layer.4.attention.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.5.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.weight', 'layoutlmv2.encoder.layer.7.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked', 'layoutlmv2.embeddings.w_position_embeddings.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.weight', 'layoutlmv2.encoder.layer.1.attention.self.value.bias', 'layoutlmv2.encoder.layer.11.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.running_mean', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.running_var', 'layoutlmv2.encoder.layer.2.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked', 'layoutlmv2.encoder.layer.8.intermediate.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.weight', 'layoutlmv2.encoder.layer.2.intermediate.dense.weight', 'layoutlmv2.encoder.layer.0.attention.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.weight', 
'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.bias', 'layoutlmv2.encoder.layer.3.attention.self.value.bias', 'layoutlmv2.encoder.layer.5.attention.self.key.weight', 'layoutlmv2.visual.backbone.fpn_lateral2.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.running_mean', 'layoutlmv2.encoder.layer.1.attention.self.query.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.running_var', 'layoutlmv2.encoder.layer.3.attention.output.LayerNorm.bias', 'layoutlmv2.encoder.layer.3.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.weight', 'layoutlmv2.encoder.layer.6.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.running_mean', 'layoutlmv2.encoder.layer.8.attention.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.weight', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.bias', 'layoutlmv2.encoder.layer.7.attention.self.value.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.bias', 'layoutlmv2.encoder.layer.3.output.dense.bias', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.running_var', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.bias', 'layoutlmv2.encoder.layer.10.output.LayerNorm.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.weight', 'layoutlmv2.encoder.layer.3.attention.self.key.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.weight', 'layoutlmv2.encoder.layer.5.attention.output.LayerNorm.bias', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.running_mean']\r\n- This IS expected if you are initializing LayoutLMModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing LayoutLMModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of LayoutLMModel were not initialized from the model checkpoint at microsoft/layoutxlm-base and are newly initialized: ['layoutlm.encoder.layer.4.intermediate.dense.weight', 'layoutlm.encoder.layer.4.intermediate.dense.bias', 'layoutlm.encoder.layer.10.attention.self.query.weight', 'layoutlm.encoder.layer.6.output.LayerNorm.bias', 'layoutlm.encoder.layer.2.attention.self.value.bias', 'layoutlm.encoder.layer.2.intermediate.dense.bias', 'layoutlm.encoder.layer.5.output.dense.bias', 'layoutlm.encoder.layer.10.output.LayerNorm.bias', 'layoutlm.encoder.layer.9.output.dense.weight', 'layoutlm.encoder.layer.0.attention.self.value.weight', 'layoutlm.encoder.layer.7.attention.self.key.bias', 'layoutlm.encoder.layer.2.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.1.attention.self.key.weight', 'layoutlm.encoder.layer.8.output.dense.bias', 'layoutlm.encoder.layer.1.intermediate.dense.weight', 'layoutlm.encoder.layer.6.attention.self.query.bias', 'layoutlm.encoder.layer.11.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.3.attention.self.query.bias', 'layoutlm.encoder.layer.1.attention.self.value.bias', 'layoutlm.encoder.layer.7.output.dense.bias', 'layoutlm.encoder.layer.5.attention.output.dense.bias', 'layoutlm.encoder.layer.9.attention.output.dense.bias', 'layoutlm.encoder.layer.9.attention.self.key.bias', 'layoutlm.encoder.layer.9.output.dense.bias', 'layoutlm.encoder.layer.5.output.LayerNorm.bias', 'layoutlm.encoder.layer.6.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.6.output.dense.bias', 'layoutlm.pooler.dense.weight', 'layoutlm.encoder.layer.1.attention.output.dense.bias', 'layoutlm.encoder.layer.7.output.LayerNorm.bias', 'layoutlm.encoder.layer.2.attention.self.key.bias', 'layoutlm.encoder.layer.1.attention.self.value.weight', 'layoutlm.encoder.layer.3.attention.output.dense.weight', 'layoutlm.encoder.layer.10.intermediate.dense.weight', 'layoutlm.encoder.layer.6.attention.self.query.weight', 'layoutlm.encoder.layer.6.attention.output.dense.weight', 'layoutlm.encoder.layer.7.intermediate.dense.bias', 'layoutlm.encoder.layer.3.intermediate.dense.weight', 'layoutlm.encoder.layer.7.attention.self.value.weight', 'layoutlm.encoder.layer.8.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.6.attention.self.key.weight', 'layoutlm.encoder.layer.1.output.dense.bias', 'layoutlm.encoder.layer.3.attention.self.key.weight', 'layoutlm.encoder.layer.5.output.LayerNorm.weight', 'layoutlm.encoder.layer.1.attention.self.query.bias', 'layoutlm.encoder.layer.11.attention.output.dense.weight', 'layoutlm.encoder.layer.10.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.0.attention.self.query.weight', 'layoutlm.encoder.layer.11.attention.self.key.bias', 'layoutlm.encoder.layer.2.attention.self.query.weight', 'layoutlm.encoder.layer.11.attention.self.query.weight', 'layoutlm.encoder.layer.7.attention.self.key.weight', 'layoutlm.encoder.layer.10.output.dense.weight', 'layoutlm.encoder.layer.0.attention.self.key.bias', 'layoutlm.encoder.layer.7.attention.self.query.bias', 'layoutlm.encoder.layer.7.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.9.intermediate.dense.bias', 'layoutlm.encoder.layer.1.attention.self.key.bias', 
'layoutlm.encoder.layer.1.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.8.attention.self.value.weight', 'layoutlm.encoder.layer.7.attention.self.value.bias', 'layoutlm.encoder.layer.8.attention.self.key.bias', 'layoutlm.encoder.layer.5.attention.self.query.bias', 'layoutlm.encoder.layer.11.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.10.attention.self.key.weight', 'layoutlm.encoder.layer.2.attention.self.value.weight', 'layoutlm.encoder.layer.11.output.LayerNorm.bias', 'layoutlm.encoder.layer.10.attention.self.value.weight', 'layoutlm.encoder.layer.1.intermediate.dense.bias', 'layoutlm.encoder.layer.2.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.4.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.0.output.dense.weight', 'layoutlm.encoder.layer.4.output.LayerNorm.weight', 'layoutlm.encoder.layer.7.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.3.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.10.output.LayerNorm.weight', 'layoutlm.encoder.layer.3.attention.self.value.weight', 'layoutlm.embeddings.x_position_embeddings.weight', 'layoutlm.encoder.layer.11.attention.self.value.weight', 'layoutlm.encoder.layer.5.intermediate.dense.bias', 'layoutlm.encoder.layer.4.attention.self.query.bias', 'layoutlm.embeddings.word_embeddings.weight', 'layoutlm.encoder.layer.7.attention.self.query.weight', 'layoutlm.encoder.layer.6.output.dense.weight', 'layoutlm.encoder.layer.11.output.dense.bias', 'layoutlm.encoder.layer.2.intermediate.dense.weight', 'layoutlm.encoder.layer.8.attention.self.key.weight', 'layoutlm.encoder.layer.5.output.dense.weight', 'layoutlm.encoder.layer.6.attention.self.value.bias', 'layoutlm.encoder.layer.2.output.LayerNorm.weight', 'layoutlm.encoder.layer.9.attention.output.dense.weight', 'layoutlm.encoder.layer.3.output.dense.weight', 'layoutlm.encoder.layer.5.attention.self.value.weight', 'layoutlm.encoder.layer.9.attention.self.value.weight', 'layoutlm.encoder.layer.1.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.8.output.dense.weight', 'layoutlm.encoder.layer.1.output.LayerNorm.bias', 'layoutlm.encoder.layer.6.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.8.attention.self.value.bias', 'layoutlm.encoder.layer.2.attention.self.query.bias', 'layoutlm.encoder.layer.2.output.dense.bias', 'layoutlm.encoder.layer.4.attention.output.LayerNorm.bias', 'layoutlm.embeddings.h_position_embeddings.weight', 'layoutlm.encoder.layer.0.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.11.output.LayerNorm.weight', 'layoutlm.encoder.layer.5.attention.self.key.weight', 'layoutlm.encoder.layer.7.attention.output.dense.bias', 'layoutlm.encoder.layer.6.intermediate.dense.weight', 'layoutlm.embeddings.token_type_embeddings.weight', 'layoutlm.encoder.layer.11.attention.output.dense.bias', 'layoutlm.encoder.layer.9.attention.self.key.weight', 'layoutlm.encoder.layer.8.output.LayerNorm.weight', 'layoutlm.encoder.layer.6.output.LayerNorm.weight', 'layoutlm.encoder.layer.10.attention.output.dense.bias', 'layoutlm.encoder.layer.7.intermediate.dense.weight', 'layoutlm.encoder.layer.9.output.LayerNorm.weight', 'layoutlm.encoder.layer.0.output.LayerNorm.bias', 'layoutlm.encoder.layer.7.output.LayerNorm.weight', 'layoutlm.encoder.layer.8.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.9.attention.self.query.bias', 'layoutlm.encoder.layer.0.output.LayerNorm.weight', 'layoutlm.encoder.layer.3.intermediate.dense.bias', 'layoutlm.encoder.layer.4.attention.self.key.bias', 
'layoutlm.encoder.layer.5.attention.self.query.weight', 'layoutlm.encoder.layer.2.output.dense.weight', 'layoutlm.encoder.layer.5.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.5.intermediate.dense.weight', 'layoutlm.encoder.layer.5.attention.self.key.bias', 'layoutlm.encoder.layer.8.attention.output.dense.bias', 'layoutlm.encoder.layer.10.output.dense.bias', 'layoutlm.encoder.layer.9.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.4.attention.self.value.weight', 'layoutlm.encoder.layer.1.output.dense.weight', 'layoutlm.encoder.layer.4.output.dense.weight', 'layoutlm.encoder.layer.8.attention.output.dense.weight', 'layoutlm.encoder.layer.3.output.LayerNorm.bias', 'layoutlm.encoder.layer.11.intermediate.dense.weight', 'layoutlm.encoder.layer.4.output.LayerNorm.bias', 'layoutlm.encoder.layer.11.output.dense.weight', 'layoutlm.encoder.layer.7.attention.output.dense.weight', 'layoutlm.embeddings.LayerNorm.weight', 'layoutlm.encoder.layer.2.attention.self.key.weight', 'layoutlm.encoder.layer.11.intermediate.dense.bias', 'layoutlm.encoder.layer.2.output.LayerNorm.bias', 'layoutlm.encoder.layer.3.output.dense.bias', 'layoutlm.encoder.layer.3.attention.self.key.bias', 'layoutlm.encoder.layer.8.attention.self.query.bias', 'layoutlm.encoder.layer.1.output.LayerNorm.weight', 'layoutlm.embeddings.w_position_embeddings.weight', 'layoutlm.encoder.layer.9.intermediate.dense.weight', 'layoutlm.encoder.layer.10.attention.self.value.bias', 'layoutlm.encoder.layer.8.attention.self.query.weight', 'layoutlm.encoder.layer.9.attention.self.value.bias', 'layoutlm.encoder.layer.4.attention.output.dense.bias', 'layoutlm.encoder.layer.11.attention.self.query.bias', 'layoutlm.encoder.layer.5.attention.output.dense.weight', 'layoutlm.encoder.layer.0.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.11.attention.self.value.bias', 'layoutlm.encoder.layer.0.output.dense.bias', 'layoutlm.encoder.layer.3.attention.output.dense.bias', 'layoutlm.encoder.layer.9.attention.self.query.weight', 'layoutlm.encoder.layer.3.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.3.attention.self.query.weight', 'layoutlm.embeddings.LayerNorm.bias', 'layoutlm.encoder.layer.10.attention.self.key.bias', 'layoutlm.encoder.layer.2.attention.output.dense.bias', 'layoutlm.encoder.layer.6.intermediate.dense.bias', 'layoutlm.encoder.layer.7.output.dense.weight', 'layoutlm.encoder.layer.10.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.9.attention.output.LayerNorm.bias', 'layoutlm.encoder.layer.0.attention.self.query.bias', 'layoutlm.encoder.layer.2.attention.output.dense.weight', 'layoutlm.embeddings.position_embeddings.weight', 'layoutlm.encoder.layer.10.attention.self.query.bias', 'layoutlm.encoder.layer.4.attention.self.key.weight', 'layoutlm.encoder.layer.5.attention.output.LayerNorm.weight', 'layoutlm.encoder.layer.0.attention.self.key.weight', 'layoutlm.encoder.layer.8.intermediate.dense.weight', 'layoutlm.encoder.layer.0.attention.output.dense.weight', 'layoutlm.encoder.layer.3.attention.self.value.bias', 'layoutlm.encoder.layer.10.intermediate.dense.bias', 'layoutlm.encoder.layer.11.attention.self.key.weight', 'layoutlm.encoder.layer.4.output.dense.bias', 'layoutlm.encoder.layer.3.output.LayerNorm.weight', 'layoutlm.encoder.layer.0.intermediate.dense.bias', 'layoutlm.encoder.layer.5.attention.self.value.bias', 'layoutlm.encoder.layer.0.attention.output.dense.bias', 'layoutlm.pooler.dense.bias', 'layoutlm.encoder.layer.6.attention.self.value.weight', 
'layoutlm.encoder.layer.0.attention.self.value.bias', 'layoutlm.encoder.layer.6.attention.self.key.bias', 'layoutlm.encoder.layer.1.attention.output.dense.weight', 'layoutlm.encoder.layer.4.attention.self.value.bias', 'layoutlm.encoder.layer.6.attention.output.dense.bias', 'layoutlm.embeddings.y_position_embeddings.weight', 'layoutlm.encoder.layer.9.output.LayerNorm.bias', 'layoutlm.encoder.layer.4.attention.output.dense.weight', 'layoutlm.encoder.layer.10.attention.output.dense.weight', 'layoutlm.encoder.layer.1.attention.self.query.weight', 'layoutlm.encoder.layer.8.output.LayerNorm.bias', 'layoutlm.encoder.layer.0.intermediate.dense.weight', 'layoutlm.encoder.layer.4.attention.self.query.weight', 'layoutlm.encoder.layer.8.intermediate.dense.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```",
"Hmm ok, I thought LayoutXLM was equivalent to LayoutLM, but apparently it isn't. I guess one would need to add LayoutXLM to HuggingFace Transformers in order to properly load it. Otherwise, you can use the newly released layoutlmft package by the original authors as explained [here](https://github.com/microsoft/unilm/tree/master/layoutxlm).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | I am trying to use the [layoutxlm model](https://huggingface.co/microsoft/layoutxlm-base), but I get the following error when loading either the tokenizer or the model, with `AutoTokenizer.from_pretrained("microsoft/layoutxlm-base")` and `AutoModelForTokenClassification.from_pretrained("microsoft/layoutxlm-base")` respectively.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-11-381e58ab16b7> in <module>
1 model_name="microsoft/layoutxlm-base"
----> 2 tokenizer = AutoTokenizer.from_pretrained(model_name)
/opt/conda/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
400 kwargs["_from_auto"] = True
401 if not isinstance(config, PretrainedConfig):
--> 402 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
403
404 use_fast = kwargs.pop("use_fast", True)
/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
430 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
431 if "model_type" in config_dict:
--> 432 config_class = CONFIG_MAPPING[config_dict["model_type"]]
433 return config_class.from_dict(config_dict, **kwargs)
434 else:
KeyError: 'layoutxlm'
```
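For reference, here is a minimal sketch of where the lookup fails, based on the traceback above (these are version-specific internals of transformers 4.6.x): `AutoConfig` reads the `model_type` field from the checkpoint's `config.json` and looks it up in `CONFIG_MAPPING`, and `layoutxlm` is not a registered key in this version.
```python
# Sketch of the failing lookup, assuming the transformers 4.6.x internals shown
# in the traceback above. PretrainedConfig.get_config_dict fetches config.json.
from transformers import PretrainedConfig
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

config_dict, _ = PretrainedConfig.get_config_dict("microsoft/layoutxlm-base")
print(config_dict["model_type"])      # "layoutxlm" at the time of this issue
print("layoutxlm" in CONFIG_MAPPING)  # False in 4.6.1, hence the KeyError
```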
Using `transformers 4.6.1` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12194/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12193/comments | https://api.github.com/repos/huggingface/transformers/issues/12193/events | https://github.com/huggingface/transformers/issues/12193 | 922,451,939 | MDU6SXNzdWU5MjI0NTE5Mzk= | 12,193 | Cannot import RobertaPreTrainedModel | {
"login": "dogatekin",
"id": 21290261,
"node_id": "MDQ6VXNlcjIxMjkwMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/21290261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dogatekin",
"html_url": "https://github.com/dogatekin",
"followers_url": "https://api.github.com/users/dogatekin/followers",
"following_url": "https://api.github.com/users/dogatekin/following{/other_user}",
"gists_url": "https://api.github.com/users/dogatekin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dogatekin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dogatekin/subscriptions",
"organizations_url": "https://api.github.com/users/dogatekin/orgs",
"repos_url": "https://api.github.com/users/dogatekin/repos",
"events_url": "https://api.github.com/users/dogatekin/events{/privacy}",
"received_events_url": "https://api.github.com/users/dogatekin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,623 | 1,624 | 1,624 | NONE | null | ## Environment info
I tried with both transformers 3.5.1 and 4.6.1.
### Who can help
Maybe @julien-c since they contributed RoBERTa.
## Information
I want to derive my own class from RobertaPreTrainedModel, but I cannot import that class like I can import e.g. BertPreTrainedModel or AlbertPreTrainedModel. More specifically,
```from transformers import BertPreTrainedModel```
and
```from transformers import AlbertPreTrainedModel```
works, but
```from transformers import RobertaPreTrainedModel```
fails with `ImportError: cannot import name 'RobertaPreTrainedModel'`.
Is this the intended behavior or could it be a bug?
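As a possible workaround, the class appears to be defined in the model's own module even when it is not re-exported at the package root. A sketch (the module path below assumes the transformers 4.x source layout and may differ in older releases):
```python
# Workaround sketch: import from the defining module instead of the package root.
# This module path assumes the transformers 4.x source layout; older releases
# used a flat layout (e.g. transformers.modeling_roberta), so adjust per version.
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel

class MyRobertaModel(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # custom layers and heads would be registered here
```
Newer releases may add the class back to the top-level exports, in which case the plain `from transformers import RobertaPreTrainedModel` would work again.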
## To reproduce
Try `from transformers import RobertaPreTrainedModel`
## Expected behavior
The RobertaPreTrainedModel class should be importable from the top-level package, just as it is for the other model classes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12193/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12192/comments | https://api.github.com/repos/huggingface/transformers/issues/12192/events | https://github.com/huggingface/transformers/pull/12192 | 922,287,278 | MDExOlB1bGxSZXF1ZXN0NjcxMTgyNDU4 | 12,192 | Marian tatoeba conversion update | {
"login": "Traubert",
"id": 2804367,
"node_id": "MDQ6VXNlcjI4MDQzNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2804367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Traubert",
"html_url": "https://github.com/Traubert",
"followers_url": "https://api.github.com/users/Traubert/followers",
"following_url": "https://api.github.com/users/Traubert/following{/other_user}",
"gists_url": "https://api.github.com/users/Traubert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Traubert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Traubert/subscriptions",
"organizations_url": "https://api.github.com/users/Traubert/orgs",
"repos_url": "https://api.github.com/users/Traubert/repos",
"events_url": "https://api.github.com/users/Traubert/events{/privacy}",
"received_events_url": "https://api.github.com/users/Traubert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @patrickvonplaten who did the conversion",
"Hey @Traubert,\r\n\r\nThanks a lot for adapting the conversion script! Could you maybe post a link to a Marian Tatoeba model so that I can try it out? \r\n",
"I'll fix the issues you mentioned, and I think the output needs to be made a bit neater by omitting some things. Currently a lot of technical details are copied from the model's yaml description. Is there a huggingface guideline for what model cards should look like?\r\n\r\n@patrickvonplaten The converter downloads the models, so you should be able to test like:\r\n\r\n```python\r\nfrom convert_marian_tatoeba_to_pytorch import *\r\nconv = TatoebaConverter()\r\nconv.convert_models(('fin-eng',), dry_run = False)\r\n```\r\n\r\nThis would result in the converter looking in the metadata from the Tatoeba-Challenge repository, which you are supposed to have available locally, and choosing the best model for that pair. It will then download and convert it, I think in that case this file: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus+bt-2021-04-30.zip",
"Hm. There's still a failing CI test from isort, but I ran that, committed the change, and on my machine `isort --check-only src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py` now reports nothing. Any ideas? @sgugger @patrickvonplaten ",
"Make sure you have the good versions installed, they are all pinned so `pip install -e .[quality]` in the repo should do the trick.",
"> Make sure you have the good versions installed, they are all pinned so `pip install -e .[quality]` in the repo should do the trick.\r\n\r\nThanks - also I didn't realise that `make style` was doing something more than plain `isort`, so now I committed another styled-by-make-style version.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for being so unresponsive on this PR @Traubert - do you think it could be possible to open a copy of the PR with a clean git commit history? Think some external git commits got accidentally merged into this PR.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,632 | 1,632 | CONTRIBUTOR | null | # What does this PR do?
The Helsinki-NLP / Tatoeba NMT models have gone through various
architectural changes, and the old conversion code fails on them. This
commit is something of a rewrite to remedy this, in particular parsing
the supplied YAML files rather than README.md files. It needs to be
looked at by someone on the Hugging Face side.
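For reviewers, a usage sketch mirroring the author's example in the discussion above (it assumes the Tatoeba-Challenge metadata is available locally; the converter then downloads and converts the chosen model):
```python
from convert_marian_tatoeba_to_pytorch import TatoebaConverter

# Picks the best model for the fin-eng pair from the local Tatoeba-Challenge
# metadata, then downloads and converts it
conv = TatoebaConverter()
conv.convert_models(("fin-eng",), dry_run=False)
```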
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12192/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12192",
"html_url": "https://github.com/huggingface/transformers/pull/12192",
"diff_url": "https://github.com/huggingface/transformers/pull/12192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12192.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12191/comments | https://api.github.com/repos/huggingface/transformers/issues/12191/events | https://github.com/huggingface/transformers/pull/12191 | 922,280,874 | MDExOlB1bGxSZXF1ZXN0NjcxMTc2NjE1 | 12,191 | updated DLC images and sample notebooks | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
This PR updates the SageMaker documentation. It moves the overview to the bottom of the site, since it will grow. It also adds the Vision Transformer example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12191",
"html_url": "https://github.com/huggingface/transformers/pull/12191",
"diff_url": "https://github.com/huggingface/transformers/pull/12191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12191.patch",
"merged_at": 1623842640000
} |
https://api.github.com/repos/huggingface/transformers/issues/12190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12190/comments | https://api.github.com/repos/huggingface/transformers/issues/12190/events | https://github.com/huggingface/transformers/issues/12190 | 922,174,125 | MDU6SXNzdWU5MjIxNzQxMjU= | 12,190 | How to figure out which pretrained tokenizers support emojis? | {
"login": "Prashant446",
"id": 44218375,
"node_id": "MDQ6VXNlcjQ0MjE4Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/44218375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Prashant446",
"html_url": "https://github.com/Prashant446",
"followers_url": "https://api.github.com/users/Prashant446/followers",
"following_url": "https://api.github.com/users/Prashant446/following{/other_user}",
"gists_url": "https://api.github.com/users/Prashant446/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Prashant446/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Prashant446/subscriptions",
"organizations_url": "https://api.github.com/users/Prashant446/orgs",
"repos_url": "https://api.github.com/users/Prashant446/repos",
"events_url": "https://api.github.com/users/Prashant446/events{/privacy}",
"received_events_url": "https://api.github.com/users/Prashant446/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"you can try this:\r\ntokenizer.decode(tokenizer.encode('I 🙁 hate ☹️ to 😣 see 😖 this 😫 fail 😩 , 🥺 pls 😢 help 😭 me 😤'))\r\n\r\nIf the tokenizer successfully decodes back to origin emojis then yes! Your tokenizer can encode emojis.\r\nIn this case, you use distillroberta tokenizer which use BPE (Radford et al. 2019) method, hence your tokenizer can encode emojis.\r\n\r\n\r\n\r\n",
"I already found this hack somewhere, but thanks anyway!"
] | 1,623 | 1,626 | 1,626 | NONE | null | Hi, I am working on a dataset with emojis. I found that the BERT tokenizer doesn't support emojis, and we have to manually add them and train their embeddings (#7648). But the RoBERTa tokenizer seems to handle emojis during tokenization, as the following code shows:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('distilroberta-base', use_fast=True, normalization=True)
tokenizer.encode('I 🙁 hate ☹️ to 😣 see 😖 this 😫 fail 😩 , 🥺 pls 😢 help 😭 me 😤')
```
outputs:
```
[ 0, 100, 8103, 27, 10172, 4157, 42699, 9253, 12605, 7, 17841, 2469, 192, 17841, 25448, 42, 17841, 4958,
5998, 17841, 15375, 2156, 8103, 8210, 3070, 2968, 29, 17841, 7258, 244, 17841, 12410, 162, 17841, 10470, 2]
```
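A quick hedged way to check any tokenizer for emoji support is a decode/encode round-trip (a minimal sketch, assuming a fast tokenizer whose decode inverts encode):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('distilroberta-base', use_fast=True)
text = 'I 🙁 hate this 😫'
# If the decoded string matches the original, the vocabulary can represent the emojis
roundtrip = tokenizer.decode(tokenizer.encode(text), skip_special_tokens=True)
print(roundtrip.strip() == text)
```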
None of these IDs is the `<unk>` token, so all of them should map to trained embeddings. Is that right? Also, why are there weird characters in most of the words in the RoBERTa vocab, like 'ĸ', 'Ġthis', 'ĠðŁĺ', '«', 'Ġfail', 'ĠðŁĺ', etc.? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12190/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12189/comments | https://api.github.com/repos/huggingface/transformers/issues/12189/events | https://github.com/huggingface/transformers/issues/12189 | 922,103,284 | MDU6SXNzdWU5MjIxMDMyODQ= | 12,189 | T5 Generate from Encoder Output | {
"login": "kevin3567",
"id": 31675719,
"node_id": "MDQ6VXNlcjMxNjc1NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/31675719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin3567",
"html_url": "https://github.com/kevin3567",
"followers_url": "https://api.github.com/users/kevin3567/followers",
"following_url": "https://api.github.com/users/kevin3567/following{/other_user}",
"gists_url": "https://api.github.com/users/kevin3567/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevin3567/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevin3567/subscriptions",
"organizations_url": "https://api.github.com/users/kevin3567/orgs",
"repos_url": "https://api.github.com/users/kevin3567/repos",
"events_url": "https://api.github.com/users/kevin3567/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevin3567/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"from [https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py](url)\r\nrow 369-374:\r\n`\r\ndef prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]: \r\n return {\"input_ids\": input_ids}\r\n`\r\nMaybe haven't a new way to add own args other than \"input_ids\".\r\n",
"This might help: https://github.com/huggingface/transformers/pull/10599",
"It should be possible to directly pass `encoder_outptus`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | Hi,
I am working with `T5ForConditionalGeneration` for sequence generation. For autoregressive decoding with the _generate()_ function, is there a way to decode a sequence by feeding an intermediate input (such as _encoder_outputs_ or _inputs_embeds_) as opposed to the _input_ids_? I have noticed that the _forward()_ function supports this, where we can pass _encoder_outputs_ or _inputs_embeds_ instead of the _input_ids_. However, I have not yet figured out a way to decode through the following:
> \# model is t5 conditional generation
> out_sequence = model.generate(encoder_outputs=encoder_outputs, num_beams=args.num_beams, ...)
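A minimal sketch of what this could look like where supported (it assumes a `transformers` version whose `generate` forwards `encoder_outputs`, per PR #10599 referenced in the comments above):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello", return_tensors="pt")
# Run the encoder once, then decode from its output instead of input_ids
encoder_outputs = model.get_encoder()(**inputs)
out_sequence = model.generate(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
    num_beams=4,
)
print(tokenizer.decode(out_sequence[0], skip_special_tokens=True))
```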
If this feature is not directly available, are there any recommended alternative approaches that would allow sequence decoding directly from an encoder output? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12189/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12188/comments | https://api.github.com/repos/huggingface/transformers/issues/12188/events | https://github.com/huggingface/transformers/issues/12188 | 921,979,144 | MDU6SXNzdWU5MjE5NzkxNDQ= | 12,188 | TextDatasetForNextSentencePrediction does not seem to contain truncate function unlike LineByLineWithSOPTextDataset | {
"login": "retarfi",
"id": 32985371,
"node_id": "MDQ6VXNlcjMyOTg1Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/32985371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/retarfi",
"html_url": "https://github.com/retarfi",
"followers_url": "https://api.github.com/users/retarfi/followers",
"following_url": "https://api.github.com/users/retarfi/following{/other_user}",
"gists_url": "https://api.github.com/users/retarfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/retarfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/retarfi/subscriptions",
"organizations_url": "https://api.github.com/users/retarfi/orgs",
"repos_url": "https://api.github.com/users/retarfi/repos",
"events_url": "https://api.github.com/users/retarfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/retarfi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Ubuntu 20.04.2 LTS
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (Yes)
- Tensorflow version (GPU?): Not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
Model I am using (Bert, XLNet ...): Bert(PreTraining)
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Prepare input.txt like:
```
About this time, when some rain began to fall, Sancho proposed that they should shelter themselves in the fulling-mill, but Don Quixote had conceived such abhorrence for it, on account of what was past, that he would no means set foot within its wall; wherefore, turning to the right-hand, they chanced to fall in with a road different from that in which they had traveled the day before; they had not gone far, when the knight discovered a man riding with something on his head, that glittered like polished gold, and scarce had he descried this phenomenon, when turning to Sancho, “I find,” said he, “that every proverb is strictly true; indeed, all of them are apophthegms dictated by experience herself; more especially, that which says, “shut one door, and another will soon open”: this I mention, because, if last night, fortune shut against us the door we fought to enter, by deceiving us with the fulling-hammers; today another stands wide open, in proffering to use us, another greater and more certain adventure, by which, if I fail to enter, it shall be my own fault, and not imputed to my ignorance of fulling-mills, or the darkness of the night.
About this time, when some rain began to fall, Sancho proposed that they should shelter themselves in the fulling-mill, but Don Quixote had conceived such abhorrence for it, on account of what was past, that he would no means set foot within its wall; wherefore, turning to the right-hand, they chanced to fall in with a road different from that in which they had traveled the day before; they had not gone far, when the knight discovered a man riding with something on his head, that glittered like polished gold, and scarce had he descried this phenomenon, when turning to Sancho, “I find,” said he, “that every proverb is strictly true; indeed, all of them are apophthegms dictated by experience herself; more especially, that which says, “shut one door, and another will soon open”: this I mention, because, if last night, fortune shut against us the door we fought to enter, by deceiving us with the fulling-hammers; today another stands wide open, in proffering to use us, another greater and more certain adventure, by which, if I fail to enter, it shall be my own fault, and not imputed to my ignorance of fulling-mills, or the darkness of the night.
```
(I think any document whose total token count, as processed by TextDatasetForNextSentencePrediction, exceeds 512 will do)
2. Run code below:
```python
import transformers
from transformers.data.datasets import TextDatasetForNextSentencePrediction
from transformers.data.data_collator import DataCollatorForLanguageModeling
from transformers import BertConfig, BertForPreTraining, Trainer, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
training_args = transformers.TrainingArguments(
output_dir = './bert/',
per_device_train_batch_size = 2
)
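# Editor's note: examples built from input.txt can exceed block_size below,
# which is what triggers the size-mismatch error reported later in this issue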
train_dataset = TextDatasetForNextSentencePrediction(
tokenizer = tokenizer,
file_path = 'input.txt',
overwrite_cache= True,
block_size = 512,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer = tokenizer,
mlm = True,
)
bert_config = BertConfig(
vocab_size = tokenizer.vocab_size,
hidden_size = 768,
num_attention_heads = 12
)
model = BertForPreTraining(config=bert_config)
trainer = Trainer(
model = model,
args = training_args,
data_collator = data_collator,
train_dataset = train_dataset,
)
trainer.train()
```
## Expected behavior
No error is expected, but got an error:
```
RuntimeError Traceback (most recent call last)
<ipython-input-2-ddc701df65e7> in <module>
32
33 )
---> 34 trainer.train()
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1270 tr_loss += self.training_step(model, inputs)
1271 else:
-> 1272 tr_loss += self.training_step(model, inputs)
1273 self.current_flos += float(self.floating_point_ops(inputs))
1274
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1732 loss = self.compute_loss(model, inputs)
1733 else:
-> 1734 loss = self.compute_loss(model, inputs)
1735
1736 if self.args.n_gpu > 1:
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1764 else:
1765 labels = None
-> 1766 outputs = model(**inputs)
1767 # Save past state if it exists
1768 # TODO: this needs to be fixed and made cleaner later.
~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, next_sentence_label, output_attentions, output_hidden_states, return_dict)
1067 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1068
-> 1069 outputs = self.bert(
1070 input_ids,
1071 attention_mask=attention_mask,
~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
962 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
963
--> 964 embedding_output = self.embeddings(
965 input_ids=input_ids,
966 position_ids=position_ids,
~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
205 if self.position_embedding_type == "absolute":
206 position_embeddings = self.position_embeddings(position_ids)
--> 207 embeddings += position_embeddings
208 embeddings = self.LayerNorm(embeddings)
209 embeddings = self.dropout(embeddings)
RuntimeError: The size of tensor a (555) must match the size of tensor b (512) at non-singleton dimension 1
```
I think this is because TextDatasetForNextSentencePrediction, unlike LineByLineWithSOPTextDataset, does not truncate its examples; it has no equivalent of the truncate_seq_pair helper used in that class's create_examples_from_document function.
So I added truncate_seq_pair, as in
https://github.com/huggingface/transformers/blob/802ffaff0da0a7d28b0fef85b44de5c66f717a4b/src/transformers/data/datasets/language_modeling.py#L293-L310
to
https://github.com/huggingface/transformers/blob/802ffaff0da0a7d28b0fef85b44de5c66f717a4b/src/transformers/data/datasets/language_modeling.py#L491-L492
Then it worked.
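For reference, a sketch of the truncation helper in question (adapted from the LineByLineWithSOPTextDataset code linked above; the upstream implementation may differ in details):
```python
import random

def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens):
    """Truncates a pair of token sequences to a maximum total length."""
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        # Always shorten the longer of the two sequences
        trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        assert len(trunc_tokens) >= 1
        # Truncate from the front or the back at random to avoid biases
        if random.random() < 0.5:
            del trunc_tokens[0]
        else:
            trunc_tokens.pop()
```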
Should truncate_seq_pair also be added to TextDatasetForNextSentencePrediction?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12187/comments | https://api.github.com/repos/huggingface/transformers/issues/12187/events | https://github.com/huggingface/transformers/pull/12187 | 921,924,048 | MDExOlB1bGxSZXF1ZXN0NjcwODU4OTc0 | 12,187 | Clean push to hub API | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
This PR reworks the push-to-hub API: instead of cloning into a temporary directory, copying the saved files and then pushing, it now creates and clones the repo first, then saves the object inside it, and lastly pushes it to the hub. The general commands do not bring any major breaking change, although the behavior of `object.push_to_hub(repo_name)` and `object.save_pretrained(push_to_hub=True)` changes slightly (see below). That last API is not used much, however, since it's new and completely undocumented, so I think it's okay.
`push_to_hub` takes a `repo_name_or_path` and will create a local clone of the repo if it does not exist. That local clone is synced with the distant repo. This is a bit different from before, where a temp dir was used for the clone and push; that behavior is still accessible by passing along `temp_dir=True`.
## `push_to_hub` API for models, tokenizers and configs
Works like before, with a slight change of behavior:
```
model.push_to_hub(repo_name_or_path="my-awesome-model")
```
will push the model to the hub by creating the repo, cloning it if it exists, saving the model inside and pushing. The change from before is that a local folder named "my-awesome-model" will be created if it does not exist, and if it exists it will either:
- be put in sync with the distant repo if the distant repo exists
- error if it is not a local clone of the distant repo
In the same vein
```
model.save_pretrained(my_folder, push_to_hub=True)
```
will use `my_folder` as a working directory and create it if it does not exist, error if it exists and is not a local clone of the distant repo, and do a `git pull` if it exists and is a local clone of the distant repo.
In both cases, the previous behavior can be activated by passing along `temp_dir=True`.
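For instance, to keep the old temporary-clone behavior (a sketch of the opt-out described above):
```
# Push via a temporary clone instead of a persistent local working directory
model.push_to_hub(repo_name_or_path="my-awesome-model", temp_dir=True)
```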
Side note: this PR adds tests for the `FlaxPretrainedModel.push_to_hub` method.
## `push_to_hub` API for the Trainer
Here there are also slightly breaking changes in the sense that the control over the repo to which we push moves from arguments in the `push_to_hub` method to the fields in `TrainingArguments`. This is because the repo is now initialized at init, so we need to know the repo name, organization and potential token there.
The `Trainer` adds an automatic `.gitignore` to ignore all checkpoints folder, which can be changed by the user (we can add a CLI argument to control that in the future) and the `push_to_hub` method now just triggers a save, writes the model card then push the whole output dir to the distant repo.
Another slightly breaking change is that the default for the `logging_dir` (for TensorBoard) changes, so that the logs are inside the output_dir and also pushed to the hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12187/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12187/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12187",
"html_url": "https://github.com/huggingface/transformers/pull/12187",
"diff_url": "https://github.com/huggingface/transformers/pull/12187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12187.patch",
"merged_at": 1624457479000
} |
https://api.github.com/repos/huggingface/transformers/issues/12186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12186/comments | https://api.github.com/repos/huggingface/transformers/issues/12186/events | https://github.com/huggingface/transformers/pull/12186 | 921,801,996 | MDExOlB1bGxSZXF1ZXN0NjcwNzUwMjIz | 12,186 | [WIP] Flax XLM | {
"login": "asvskartheek",
"id": 25862483,
"node_id": "MDQ6VXNlcjI1ODYyNDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25862483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asvskartheek",
"html_url": "https://github.com/asvskartheek",
"followers_url": "https://api.github.com/users/asvskartheek/followers",
"following_url": "https://api.github.com/users/asvskartheek/following{/other_user}",
"gists_url": "https://api.github.com/users/asvskartheek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asvskartheek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asvskartheek/subscriptions",
"organizations_url": "https://api.github.com/users/asvskartheek/orgs",
"repos_url": "https://api.github.com/users/asvskartheek/repos",
"events_url": "https://api.github.com/users/asvskartheek/events{/privacy}",
"received_events_url": "https://api.github.com/users/asvskartheek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @asvskartheek \r\n\r\nGreat to see that you started working on `FlaxXLM`! Feel free to ping me and @patrickvonplaten if you have any questions! Happy to help :)",
"Hey @patil-suraj , thanks for offering to help. Is there a general guide or a series of standard steps that can follow while porting models to Flax on HuggingFace in HF's own style?",
"There is no guide as such yet. But you could see how other models are implemented in Flax, which should give a good idea about the conversion. Here's [FlaxBert](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_flax_bert.py), [FlaxGPT2](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_flax_bert.py)\r\n\r\nTo start you could just copy the PyTorch model and start replacing each module in Flax.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | # What does this PR do?
This PR will add XLM in Flax/Jax
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12186/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12186/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12186",
"html_url": "https://github.com/huggingface/transformers/pull/12186",
"diff_url": "https://github.com/huggingface/transformers/pull/12186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12186.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12185/comments | https://api.github.com/repos/huggingface/transformers/issues/12185/events | https://github.com/huggingface/transformers/pull/12185 | 921,801,371 | MDExOlB1bGxSZXF1ZXN0NjcwNzQ5NzQx | 12,185 | Use yaml to create metadata | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"note that this can ultimately be in `huggingface_hub` rather than here (but it's great to be able to experiment with this here)",
"Yes we were talking about it with @LysandreJik, probably for after the upcoming release!"
] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR leverages `pyaml` to avoid writing YAML manually, as suggested by @julien-c. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12185/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12185/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12185",
"html_url": "https://github.com/huggingface/transformers/pull/12185",
"diff_url": "https://github.com/huggingface/transformers/pull/12185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12185.patch",
"merged_at": 1623863865000
} |
https://api.github.com/repos/huggingface/transformers/issues/12184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12184/comments | https://api.github.com/repos/huggingface/transformers/issues/12184/events | https://github.com/huggingface/transformers/pull/12184 | 921,755,707 | MDExOlB1bGxSZXF1ZXN0NjcwNzEwNTE0 | 12,184 | Temporarily deactivate torchhub test | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR removes the torch hub test for now, as there currently seems to be a problem with the torch hub. Will investigate more tomorrow if need be. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12184",
"html_url": "https://github.com/huggingface/transformers/pull/12184",
"diff_url": "https://github.com/huggingface/transformers/pull/12184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12184.patch",
"merged_at": 1623788211000
} |
https://api.github.com/repos/huggingface/transformers/issues/12183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12183/comments | https://api.github.com/repos/huggingface/transformers/issues/12183/events | https://github.com/huggingface/transformers/issues/12183 | 921,743,321 | MDU6SXNzdWU5MjE3NDMzMjE= | 12,183 | Inconsistency between GPTNeo and GPT2 config classes | {
"login": "leogao2",
"id": 54557097,
"node_id": "MDQ6VXNlcjU0NTU3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/54557097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leogao2",
"html_url": "https://github.com/leogao2",
"followers_url": "https://api.github.com/users/leogao2/followers",
"following_url": "https://api.github.com/users/leogao2/following{/other_user}",
"gists_url": "https://api.github.com/users/leogao2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leogao2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leogao2/subscriptions",
"organizations_url": "https://api.github.com/users/leogao2/orgs",
"repos_url": "https://api.github.com/users/leogao2/repos",
"events_url": "https://api.github.com/users/leogao2/events{/privacy}",
"received_events_url": "https://api.github.com/users/leogao2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Seconding this.\r\n\r\nLast month I swapped out GPT-2 for GPT-Neo in a [project](https://github.com/nostalgebraist/nostalgebraist-autoresponder/), and these differences made it more difficult to adapt my existing code.",
"Hi @leogao2 and @nostalgebraist, thanks for opening an issue! You're correct that the way this is currently implemented it prevents a few use-cases. Namely this is authorized:\r\n\r\n```py\r\nfrom transformers import GPT2Config\r\n\r\nconfig = GPT2Config()\r\nconfig.hidden_size\r\n```\r\n\r\nBut these are not:\r\n\r\n```py\r\nfrom transformers import GPT2Config\r\n\r\nconfig = GPT2Config()\r\nconfig.hidden_size = 4\r\n# Fails\r\n\r\nconfig = GPT2Config(hidden_size=4)\r\n# Fails\r\n```\r\n\r\nUnfortunately we can't just rename arguments - as this would break both checkpoints on the hub as well as local checkpoints. We're thinking of a way to enable this with a convention set across configurations for the attributes you mention - this convention would allow getting and setting attributes that are defined in this convention, such as the ones you mention.\r\n\r\nLet us explore a bit and we'll come back to you. cc @patil-suraj @patrickvonplaten @sgugger ",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nThis still needs to be addressed.",
"Is there any progress on this?",
"Hey @leogao2! Yes, a proposal is available here: https://github.com/nreimers/transformers/commit/2198ee719c6101ef47de00cd6d53da3a8f938fb4 but there are still a few rough edges to polish. We'll try to have it merged in the next few weeks, will let you know.",
"This was fixed in #13026 which will be in the next release alongside GPT-J. Thank you for opening an issue!"
] | 1,623 | 1,631 | 1,631 | CONTRIBUTOR | null | The config classes for GPTNeo and GPT2 have a bunch of differences that are seemingly unnecessary. This makes it harder for downstream users to write code that depends on accessing these attributes. See below:

It seems that max_position_embeddings, hidden_size, num_layers, num_heads, intermediate_size, resid_dropout, embed_dropout, and attention_dropout should be renamed for sonsistency with the GPT2 config class.
### Who can help
@LysandreJik @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12183/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12183/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12182/comments | https://api.github.com/repos/huggingface/transformers/issues/12182/events | https://github.com/huggingface/transformers/issues/12182 | 921,640,396 | MDU6SXNzdWU5MjE2NDAzOTY= | 12,182 | KeyError: 'labels' during Distilling Zero Shot Classification | {
"login": "controldev",
"id": 79089450,
"node_id": "MDQ6VXNlcjc5MDg5NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/79089450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/controldev",
"html_url": "https://github.com/controldev",
"followers_url": "https://api.github.com/users/controldev/followers",
"following_url": "https://api.github.com/users/controldev/following{/other_user}",
"gists_url": "https://api.github.com/users/controldev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/controldev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/controldev/subscriptions",
"organizations_url": "https://api.github.com/users/controldev/orgs",
"repos_url": "https://api.github.com/users/controldev/repos",
"events_url": "https://api.github.com/users/controldev/events{/privacy}",
"received_events_url": "https://api.github.com/users/controldev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I have the same issue. ",
"This should be re-opened. ",
"Hello, unfortunately, none of the maintainers have the bandwidth to assist in the resolution of this issue. I'm putting a `Good second issue` label and pinging the original author @joeddav. We're happy to review a PR!",
"@MLMarins @erip could you provide more details!",
"@sadakmed create a text file with a few lines and run the zero-shot text classification example with arbitrary labels. There is some breakage in the `datasets` API that causes the labels from the teacher to not be propagated.",
"I was also encountering this error, and noticed that the call to [datasets.Dataset.map() in line 310](https://github.com/huggingface/transformers/blob/857ab55c01cf7213bc1822933cd2ef2b7552bac4/examples/research_projects/zero-shot-distillation/distill_classifier.py#L310) is the culprit. It drops the `labels` column from the dataset. Try replacing it with the following \r\n```\r\nds_tokenized = dataset.map(tokenizer, input_columns=\"text\")\r\ndataset = Dataset.from_dict(\r\n {\r\n \"text\": ds_tokenized[:][\"text\"],\r\n \"labels\": teacher_soft_preds, # output of get_teacher_predictions()\r\n \"input_ids\": ds_tokenized[:][\"input_ids\"],\r\n \"attention_mask\": ds_tokenized[:][\"attention_mask\"],\r\n }\r\n)\r\ndataset.set_format(\"torch\")\r\n```",
"@LysandreJik I've created a PR for this issue, please take a look when you get the chance to."
] | 1,623 | 1,660 | null | NONE | null | EDIT: I confirmed that this happens with the example script as it is, so no other changes are required to reproduce this.
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes (NVIDIA P100)
- Using distributed or parallel set-up in script?: No
### Who can help
Tagging @VictorSanh @sgugger, @patil-suraj (please correct me if I'm wrong)
## Information
Model I am using (Bert, XLNet ...):
Student: `distilbert-base-uncased`
Teacher: `roberta-large-mnli`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [x] other: distillation of zero shot text classification models (research_projects)
I'm simply running the official colab script `Distilling Zero Shot Classification.ipynb`, but I get a `KeyError` during the first epoch of the student training.
## To reproduce
Steps to reproduce the behavior:
1. Open the official script https://t.co/JAJ6Eb78vM?amp=1 (you can find this link here as well https://twitter.com/joeddav/status/1363543296166002688?lang=en)
2. Run all the required cells before training
3. Run the cell that runs `transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py`
4. Witness the `KeyError: 'labels'` on the first epoch of the student model training
Full logs:
`2021-06-16 15:33:19.328924: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
06/16/2021 15:33:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
06/16/2021 15:33:20 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-agnews-student', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs/Jun16_15-33-20_9d2a3f891a99', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-agnews-student', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, mp_parameters='')
06/16/2021 15:33:20 - INFO - __main__ - Generating predictions from zero-shot teacher model
[INFO|configuration_utils.py:517] 2021-06-16 15:33:21,219 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:553] 2021-06-16 15:33:21,220 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|modeling_utils.py:1155] 2021-06-16 15:33:21,507 >> loading weights file https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
[WARNING|modeling_utils.py:1331] 2021-06-16 15:33:44,205 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:1348] 2021-06-16 15:33:44,205 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training.
[INFO|configuration_utils.py:517] 2021-06-16 15:33:47,683 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:553] 2021-06-16 15:33:47,684 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None
100% 15000/15000 [1:15:16<00:00, 3.32it/s]
06/16/2021 16:49:06 - INFO - __main__ - Initializing student model
[INFO|file_utils.py:1532] 2021-06-16 16:49:07,106 >> https://huggingface.co/distilbert-base-uncased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpy7f4tyyh
Downloading: 100% 442/442 [00:00<00:00, 348kB/s]
[INFO|file_utils.py:1536] 2021-06-16 16:49:07,540 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|file_utils.py:1544] 2021-06-16 16:49:07,540 >> creating metadata file for /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:517] 2021-06-16 16:49:07,540 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:553] 2021-06-16 16:49:07,541 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.6.1",
"vocab_size": 30522
}
[INFO|file_utils.py:1532] 2021-06-16 16:49:07,820 >> https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmptuo3f4g2
Downloading: 100% 268M/268M [00:04<00:00, 62.4MB/s]
[INFO|file_utils.py:1536] 2021-06-16 16:49:12,343 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[INFO|file_utils.py:1544] 2021-06-16 16:49:12,343 >> creating metadata file for /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[INFO|modeling_utils.py:1155] 2021-06-16 16:49:12,343 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[WARNING|modeling_utils.py:1331] 2021-06-16 16:49:12,787 >> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_layer_norm.bias', 'vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_projector.bias', 'vocab_projector.weight']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1342] 2021-06-16 16:49:12,787 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.bias', 'classifier.bias', 'pre_classifier.weight', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|configuration_utils.py:517] 2021-06-16 16:49:13,073 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:553] 2021-06-16 16:49:13,074 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.6.1",
"vocab_size": 30522
}
[INFO|file_utils.py:1532] 2021-06-16 16:49:13,357 >> https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmps3o1_gw9
Downloading: 100% 232k/232k [00:00<00:00, 1.83MB/s]
[INFO|file_utils.py:1536] 2021-06-16 16:49:13,766 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|file_utils.py:1544] 2021-06-16 16:49:13,766 >> creating metadata file for /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|file_utils.py:1532] 2021-06-16 16:49:14,049 >> https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp1n0mi2iy
Downloading: 100% 466k/466k [00:00<00:00, 3.48MB/s]
[INFO|file_utils.py:1536] 2021-06-16 16:49:14,616 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|file_utils.py:1544] 2021-06-16 16:49:14,616 >> creating metadata file for /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|file_utils.py:1532] 2021-06-16 16:49:15,461 >> https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmperm21jrj
Downloading: 100% 28.0/28.0 [00:00<00:00, 22.2kB/s]
[INFO|file_utils.py:1536] 2021-06-16 16:49:15,745 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json in cache at /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|file_utils.py:1544] 2021-06-16 16:49:15,745 >> creating metadata file for /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
100% 120000/120000 [00:32<00:00, 3647.18ex/s]
06/16/2021 16:49:49 - INFO - __main__ - Training student model on teacher predictions
[INFO|trainer.py:516] 2021-06-16 16:49:49,272 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text.
[INFO|trainer.py:1156] 2021-06-16 16:49:49,285 >> ***** Running training *****
[INFO|trainer.py:1157] 2021-06-16 16:49:49,285 >> Num examples = 120000
[INFO|trainer.py:1158] 2021-06-16 16:49:49,285 >> Num Epochs = 1
[INFO|trainer.py:1159] 2021-06-16 16:49:49,285 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1160] 2021-06-16 16:49:49,285 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1161] 2021-06-16 16:49:49,285 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1162] 2021-06-16 16:49:49,286 >> Total optimization steps = 3750
0% 0/3750 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module>
main()
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main
trainer.train()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1272, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1734, in training_step
loss = self.compute_loss(model, inputs)
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
KeyError: 'labels'
0% 0/3750 [00:00<?, ?it/s]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Not throw a `KeyError`
<!-- A clear and concise description of what you would expect to happen. -->
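For anyone hitting this, a minimal diagnostic sketch — the `remove_unused_columns` idea at the end is an unverified assumption, not a confirmed fix:

```python
# Hypothetical debugging snippet -- assumes `trainer` from distill_classifier.py
# is in scope. Checks whether a "labels" column actually reaches the batches.
print(trainer.train_dataset.column_names)

batch = next(iter(trainer.get_train_dataloader()))
print(batch.keys())  # if "labels" is absent here, compute_loss will raise the KeyError

# Unverified workaround: keep all dataset columns so the custom compute_loss
# can still read inputs["labels"].
# training_args.remove_unused_columns = False
```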
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12182/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12181/comments | https://api.github.com/repos/huggingface/transformers/issues/12181/events | https://github.com/huggingface/transformers/pull/12181 | 921,611,837 | MDExOlB1bGxSZXF1ZXN0NjcwNTg5NzYx | 12,181 | Temporarily deactivate torch-scatter while we wait for new release | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Requested the binary build:\r\nhttps://github.com/rusty1s/pytorch_scatter/issues/224\r\n",
"well, you can also change to `torch==1.8.1` and keep everything else the same.",
"That's a better alternative, let me update.",
"sorry, I meant `pip install torch==1.8.1` - torch-scatter doesn't have 1.8.1 - it's 1.8.0 but works with either pytorch-1.8.x\r\n\r\nhttps://github.com/rusty1s/pytorch_scatter#pytorch-180\r\n\r\ni.e. I suggested that we don't yet switch to pt-1.9.0 until all the dependants catch up.",
"Ah, thank you for clarifying. We do have quite a bunch of failures on torch 1.9.0 (all related to torch fx it seems):\r\n\r\n```\r\nFAILED tests/test_modeling_albert.py::AlbertModelTest::test_torch_fx - File...\r\nFAILED tests/test_modeling_albert.py::AlbertModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_bert.py::BertModelTest::test_torch_fx - File \"<e...\r\nFAILED tests/test_modeling_bert.py::BertModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_electra.py::ElectraModelTest::test_torch_fx - Fi...\r\nFAILED tests/test_modeling_electra.py::ElectraModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_torch_fx\r\nFAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_gpt_neo.py::GPTNeoModelTest::test_torch_fx - Fil...\r\nFAILED tests/test_modeling_gpt_neo.py::GPTNeoModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_torch_fx - File \"<e...\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_megatron_bert.py::MegatronBertModelTest::test_torch_fx\r\nFAILED tests/test_modeling_megatron_bert.py::MegatronBertModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torch_fx\r\nFAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torch_fx_output_loss\r\nFAILED tests/test_modeling_t5.py::T5ModelTest::test_torch_fx - File \"<eval_...\r\nFAILED tests/test_modeling_t5.py::T5ModelTest::test_torch_fx_output_loss - ...\r\n```\r\n\r\nSo agreed to keep the CI on 1.8.1 until we resolve this and can update to 1.9.0. cc @michaelbenayoun ",
"Yes, the torch fx tests should either be skipped or fixed - @michaelbenayoun already knows about this.\r\n\r\nThe problem was uncovered with 1.9.0-RC.\r\n\r\nHe suggested a fix for pytorch instead, but I don't think it made it into 1.9.0",
"Merging since this seems all good I and would really like a green CI :-)",
"@LysandreJik, https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html is ready.\r\n\r\nOne way to proceed is to wait for @michaelbenayoun - or skip those tests for now and swtich to `torch==1.9.0` while updating `torch-scatter` to the link above.",
"Also trying to ask for pytorch-core support to support \"sum\", \"mean\", \"max\" or \"min\" scatter reduction functions, so that we could drop the need to depend on `torch-scatter` - https://github.com/pytorch/pytorch/issues/22378#issuecomment-862705586\r\nas it is a bit of an ordeal for being used in just a single model and even then it's optional.",
"Oh that would be terrific if we had support directly in PyTorch, thanks for asking!",
"One other approach is to provide a slower python-only implementation of the same and fall back to it if `torch-scatter` is not available, and not install the latter on CI."
] | 1,623 | 1,623 | 1,623 | MEMBER | null | Torch 1.9.0 just landed and is incompatible with the torch-scatter binaries built against 1.8.0. While we wait for torch-scatter binaries compatible with 1.9.0 to be released, this deactivates the torch-scatter-based tests.
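A rough sketch of the Python-only fallback idea floated in the comments above — a slow, dependency-free stand-in that assumes only a sum reduction is needed, not the torch-scatter implementation itself:

```python
import torch

def scatter_sum_fallback(src: torch.Tensor, index: torch.Tensor, dim_size: int) -> torch.Tensor:
    # Pure-PyTorch stand-in for torch_scatter.scatter(..., reduce="sum") along dim 0.
    # Much slower than the dedicated CUDA kernels, but requires no extra package.
    out = torch.zeros(dim_size, *src.shape[1:], dtype=src.dtype, device=src.device)
    return out.index_add_(0, index, src)

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])
print(scatter_sum_fallback(src, index, dim_size=2))  # tensor([3., 7.])
```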
cc @patrickvonplaten @sgugger @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12181/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12181",
"html_url": "https://github.com/huggingface/transformers/pull/12181",
"diff_url": "https://github.com/huggingface/transformers/pull/12181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12181.patch",
"merged_at": 1623787438000
} |
https://api.github.com/repos/huggingface/transformers/issues/12180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12180/comments | https://api.github.com/repos/huggingface/transformers/issues/12180/events | https://github.com/huggingface/transformers/issues/12180 | 921,586,300 | MDU6SXNzdWU5MjE1ODYzMDA= | 12,180 | Can't run 124M using transformers | {
"login": "MK096",
"id": 20142735,
"node_id": "MDQ6VXNlcjIwMTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/20142735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MK096",
"html_url": "https://github.com/MK096",
"followers_url": "https://api.github.com/users/MK096/followers",
"following_url": "https://api.github.com/users/MK096/following{/other_user}",
"gists_url": "https://api.github.com/users/MK096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MK096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MK096/subscriptions",
"organizations_url": "https://api.github.com/users/MK096/orgs",
"repos_url": "https://api.github.com/users/MK096/repos",
"events_url": "https://api.github.com/users/MK096/events{/privacy}",
"received_events_url": "https://api.github.com/users/MK096/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you have a reproducible code example, or a colab notebook? What is your environment as requested in the issue template? Did you try with a forward slash?",
"> Do you have a reproducible code example, or a colab notebook? What is your environment as requested in the issue template? Did you try with a forward slash?\r\n\r\nMy Code:\r\n\r\nimport gpt_2_simple as gpt2\r\nfrom transformers import pipeline, set_seed,GPT2Tokenizer, TFGPT2LMHeadModel\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer,AutoTokenizer, AutoModelWithLMHead\r\nfrom aitextgen import aitextgen\r\nimport os.path\r\n\r\ndata_folder = os.path.join(os.getcwd())\r\nfile_to_open = os.path.join(data_folder, \"124M\")\r\n\r\nprint(file_to_open) \r\ntokenizer = AutoTokenizer.from_pretrained(file_to_open)\r\nmodel = AutoModelWithLMHead.from_pretrained(file_to_open)\r\n\r\nI have attached image of my directory, files inside 124M and error\r\n\r\n\r\n\r\n\r\n\r\n",
"I don't know how you obtained your `124M` folder but it doesn't seem to be using one of our libraries?",
"Our libraries save models with a `config.json`, `pytorch_model.bin` if PyTorch and `tf_model.h5` if TensorFlow.",
"> Our libraries save models with a `config.json`, `pytorch_model.bin` if PyTorch and `tf_model.h5` if TensorFlow.\r\n\r\nI got it from https://github.com/openai/gpt-2\r\ndownload_model.py 124M (in cmd i wrote)\r\n\r\nI was able to run interactive_conditional_samples.py (in src folder)",
"Is that model different from the `gpt2` available on our model hub? https://huggingface.co/gpt2\r\n\r\nYou would load it like so:\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"gpt2\")\r\n```\r\n\r\nIf it is different, then you should use the conversion script to convert it to a HF-style checkpoint: https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | I have downloaded the GPT-2 124M model on my local machine and was able to run the `interactive_conditional_samples.py` script that comes with it.
But when I try to load 124M using transformers, I get the following error:
```
OSError: Can't load config for 'models\124M'. Make sure that:

- 'models\124M' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'models\124M' is the correct path to a directory containing a config.json file
```
**My code:**

```python
tokenizer = AutoTokenizer.from_pretrained("models\\124M")
```
The 124M folder contains the following JSON file: encoder | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12180/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12179/comments | https://api.github.com/repos/huggingface/transformers/issues/12179/events | https://github.com/huggingface/transformers/pull/12179 | 921,493,465 | MDExOlB1bGxSZXF1ZXN0NjcwNDg3OTYx | 12,179 | Tensorflow variant of DataCollatorForLanguageModeling. | {
"login": "aromans",
"id": 14765123,
"node_id": "MDQ6VXNlcjE0NzY1MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14765123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aromans",
"html_url": "https://github.com/aromans",
"followers_url": "https://api.github.com/users/aromans/followers",
"following_url": "https://api.github.com/users/aromans/following{/other_user}",
"gists_url": "https://api.github.com/users/aromans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aromans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aromans/subscriptions",
"organizations_url": "https://api.github.com/users/aromans/orgs",
"repos_url": "https://api.github.com/users/aromans/repos",
"events_url": "https://api.github.com/users/aromans/events{/privacy}",
"received_events_url": "https://api.github.com/users/aromans/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, it seems there are a lot of changes in your PR. If you think this isn't the case, do you mind closing this and opening a new PR so that we may see the correct diff? Also feel free to ping @Rocketknight1 and @sgugger for review",
"Will do! We had some issues with git but we can clean it up and resubmit."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | Co-authored-by: Dalton Walker <[email protected]>
# What does this PR do?
We didn't see any support for TensorFlow within the DataCollatorForLanguageModeling data class. Integrating directly with TensorFlow seems useful for TensorFlow users and avoids the need for tensor conversion.
This PR adds a TFDataCollatorForLanguageModeling data class that integrates directly with TensorFlow tensors and paves the way for further TFDataCollator conversions. A rough sketch of the core masking step is included below.
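For illustration only, a rough sketch of the BERT-style masking step such a collator needs. It is simplified — it ignores special-token and padding masks, assumes int32 token ids, and the function name is hypothetical, not the PR's actual API:

```python
import tensorflow as tf

def tf_mask_tokens(inputs, mask_token_id, vocab_size, mlm_probability=0.15):
    # 15% of positions become prediction targets; everything else gets label -100.
    labels = tf.identity(inputs)
    masked = tf.random.uniform(tf.shape(inputs)) < mlm_probability
    labels = tf.where(masked, labels, tf.fill(tf.shape(labels), -100))

    # 80% of masked positions -> [MASK] token.
    replaced = (tf.random.uniform(tf.shape(inputs)) < 0.8) & masked
    inputs = tf.where(replaced, tf.fill(tf.shape(inputs), mask_token_id), inputs)

    # Half of the remainder -> random token; the rest stays unchanged.
    randomized = (tf.random.uniform(tf.shape(inputs)) < 0.5) & masked & ~replaced
    random_tokens = tf.random.uniform(tf.shape(inputs), 0, vocab_size, dtype=inputs.dtype)
    inputs = tf.where(randomized, random_tokens, inputs)
    return inputs, labels
```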
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12179/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12179",
"html_url": "https://github.com/huggingface/transformers/pull/12179",
"diff_url": "https://github.com/huggingface/transformers/pull/12179.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12179.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12178/comments | https://api.github.com/repos/huggingface/transformers/issues/12178/events | https://github.com/huggingface/transformers/pull/12178 | 921,455,321 | MDExOlB1bGxSZXF1ZXN0NjcwNDU1NTkx | 12,178 | Update AutoModel classes in summarization example | {
"login": "ionicsolutions",
"id": 32523967,
"node_id": "MDQ6VXNlcjMyNTIzOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32523967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ionicsolutions",
"html_url": "https://github.com/ionicsolutions",
"followers_url": "https://api.github.com/users/ionicsolutions/followers",
"following_url": "https://api.github.com/users/ionicsolutions/following{/other_user}",
"gists_url": "https://api.github.com/users/ionicsolutions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ionicsolutions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ionicsolutions/subscriptions",
"organizations_url": "https://api.github.com/users/ionicsolutions/orgs",
"repos_url": "https://api.github.com/users/ionicsolutions/repos",
"events_url": "https://api.github.com/users/ionicsolutions/events{/privacy}",
"received_events_url": "https://api.github.com/users/ionicsolutions/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This updates the example for text summarization on the `Summary of tasks` page so that no deprecation warnings are shown.
In detail:
- Convert use of the deprecated `AutoModelWithLMHead` to `AutoModelForSeq2SeqLM`
- Add the newly required `truncation=True` to `tokenizer.encode` when `max_length` is set (both changes are illustrated in the sketch below)
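For context, a minimal sketch of the updated pattern — the checkpoint and length values here are placeholders, not necessarily what the docs page uses:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # was: AutoModelWithLMHead

inputs = tokenizer.encode(
    "summarize: " + "long article text ...",
    return_tensors="pt",
    max_length=512,
    truncation=True,  # now required alongside max_length to silence the warning
)
summary_ids = model.generate(inputs, max_length=150, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```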
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12178",
"html_url": "https://github.com/huggingface/transformers/pull/12178",
"diff_url": "https://github.com/huggingface/transformers/pull/12178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12178.patch",
"merged_at": 1623767771000
} |
https://api.github.com/repos/huggingface/transformers/issues/12177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12177/comments | https://api.github.com/repos/huggingface/transformers/issues/12177/events | https://github.com/huggingface/transformers/issues/12177 | 921,433,978 | MDU6SXNzdWU5MjE0MzM5Nzg= | 12,177 | Exception during hyperparameter search with Ray and transformers library starting from version 4.5.0 | {
"login": "sven-h",
"id": 8777506,
"node_id": "MDQ6VXNlcjg3Nzc1MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8777506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sven-h",
"html_url": "https://github.com/sven-h",
"followers_url": "https://api.github.com/users/sven-h/followers",
"following_url": "https://api.github.com/users/sven-h/following{/other_user}",
"gists_url": "https://api.github.com/users/sven-h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sven-h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sven-h/subscriptions",
"organizations_url": "https://api.github.com/users/sven-h/orgs",
"repos_url": "https://api.github.com/users/sven-h/repos",
"events_url": "https://api.github.com/users/sven-h/events{/privacy}",
"received_events_url": "https://api.github.com/users/sven-h/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sven-h yes this is a known issue, same as https://github.com/huggingface/transformers/issues/11249.\r\n\r\nFrom this thread:\r\n\r\n> If you disable the memory tracker (pass in skip_memory_metrics=True into your TrainingArguments) then you will no longer get the pickling error. In the next transformers release, the Ray Tune integration will automatically disable memory tracking if it's currently being enabled.\r\n",
"Hi @amogkam\r\nthanks for the fast reply and the answer."
] | 1,623 | 1,623 | 1,623 | NONE | null | I currently face the problem that, with recent versions of the transformers library (starting at version 4.5.0),
the hyperparameter search with Ray Tune runs into a serialization issue, described below.
## Environment info
- `transformers` version: 4.5.0
- Platform: Linux-4.19.0-16-amd64-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
- Ray version: 1.4.0
### Who can help
Maybe it is interesting to @richardliaw and @amogkam because they were mentioned as responsible for ray/raytune.
## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased (the model doesn't matter)
The problem arises when using:
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: GLUE mrpc
## To reproduce
I have created a small working example that reproduces the error (at least on my setup).
The code is mainly based on the [blog entry covering Ray Tune](https://huggingface.co/blog/ray-tune).
```python
import os
os.environ['TOKENIZERS_PARALLELISM'] = 'false'
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from ray import tune
from ray.util import inspect_serializability
model_name = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset('glue', 'mrpc')
def encode(examples):
outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True)
return outputs
encoded_dataset = dataset.map(encode, batched=True)
def model_init():
return AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=True)
def compute_metrics(eval_pred):
metric = load_metric('glue', 'mrpc')
predictions, labels = eval_pred
predictions = predictions.argmax(axis=-1)
return metric.compute(predictions=predictions, references=labels)
training_args = TrainingArguments("test")
trainer = Trainer(
args=training_args,
tokenizer=tokenizer,
train_dataset=encoded_dataset["train"],
eval_dataset=encoded_dataset["validation"],
model_init=model_init,
compute_metrics=compute_metrics,
)
def search_params(trial):
return {
#toy example
"learning_rate": tune.grid_search([0.000001, 0.00001, 0.0001, 0.001]),
}
trainer.hyperparameter_search(
direction="maximize",
backend="ray",
hp_space = search_params,
n_trials=1,
)
```
This code snippet works with transformers version 4.4.2 and earlier, but not with versions 4.5.0 and later.
The error that appears is:
```python
Traceback (most recent call last):
File "working_example.py", line 48, in <module>
trainer.hyperparameter_search(
File "/site-packages/transformers/trainer.py", line 1459, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/site-packages/transformers/integrations.py", line 231, in run_hp_search_ray
analysis = ray.tune.run(
File "/site-packages/ray/tune/tune.py", line 297, in run
_ray_auto_init()
File "/site-packages/ray/tune/tune.py", line 664, in _ray_auto_init
ray.init()
File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/site-packages/ray/worker.py", line 866, in init
hook()
File "/site-packages/ray/tune/registry.py", line 171, in flush
self.references[k] = ray.put(v)
File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/site-packages/ray/worker.py", line 1527, in put
object_ref = worker.put_object(value)
File "/site-packages/ray/worker.py", line 280, in put_object
serialized_value = self.get_serialization_context().serialize(value)
File "/site-packages/ray/serialization.py", line 326, in serialize
return self._serialize_to_msgpack(value)
File "/site-packages/ray/serialization.py", line 306, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/site-packages/ray/serialization.py", line 266, in _serialize_to_pickle5
raise e
File "/site-packages/ray/serialization.py", line 262, in _serialize_to_pickle5
inband = pickle.dumps(
File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle '_thread.RLock' object
```
Based on this error, I searched for a way to check which part is not serializable (because the whole trainer is transferred to each Ray trial). I found the [Ray serialization page](https://docs.ray.io/en/master/serialization.html#troubleshooting) and executed
```python
inspect_serializability(trainer, name="test")
```
The output was:
```
================================================================================
Checking Serializability of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>
================================================================================
!!! FAIL serialization: cannot pickle '_thread.RLock' object
Serializing 'compute_metrics' <function compute_metrics at 0x7fce1cb5b5e0>...
Serializing 'model_init' <function model_init at 0x7fce1cb5b550>...
Serializing '_gather_and_numpify' <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>...
!!! FAIL serialization: cannot pickle '_thread.RLock' object
Serializing '__func__' <function Trainer._gather_and_numpify at 0x7fce1f739940>...
WARNING: Did not find non-serializable object in <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>. This may be an oversight.
================================================================================
Variable:
FailTuple(_gather_and_numpify [obj=<bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>, parent=<transformers.trainer.Trainer object at 0x7fce1cbbeee0>])
was found to be non-serializable. There may be multiple other undetected variables that were non-serializable.
Consider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class.
If you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/
================================================================================
```
I did not find any major changes between versions 4.4.2 and 4.5.0 with regard to `integrations.py` and `trainer.py`.
I think the first step would be for someone else to reproduce the behaviour, if possible (maybe something is wrong on my side/setup).
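For completeness, the workaround suggested in the comments above, applied to my example as a minimal sketch (I have not re-verified it across versions):

```python
# Disable the memory tracker so the Trainer no longer holds the
# unpicklable _thread.RLock (per the discussion above).
training_args = TrainingArguments("test", skip_memory_metrics=True)

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    model_init=model_init,
    compute_metrics=compute_metrics,
)
```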
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12176/comments | https://api.github.com/repos/huggingface/transformers/issues/12176/events | https://github.com/huggingface/transformers/pull/12176 | 921,341,404 | MDExOlB1bGxSZXF1ZXN0NjcwMzU4MTM2 | 12,176 | Update conversion of Tatoeba marian models | {
"login": "Traubert",
"id": 2804367,
"node_id": "MDQ6VXNlcjI4MDQzNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2804367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Traubert",
"html_url": "https://github.com/Traubert",
"followers_url": "https://api.github.com/users/Traubert/followers",
"following_url": "https://api.github.com/users/Traubert/following{/other_user}",
"gists_url": "https://api.github.com/users/Traubert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Traubert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Traubert/subscriptions",
"organizations_url": "https://api.github.com/users/Traubert/orgs",
"repos_url": "https://api.github.com/users/Traubert/repos",
"events_url": "https://api.github.com/users/Traubert/events{/privacy}",
"received_events_url": "https://api.github.com/users/Traubert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oops, this PR depends on merging a pull request to Tatoeba which hasn't happened yet. Closing for now."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
The Helsinki-NLP / Tatoeba NMT models have gone through various
architectural changes, and the old conversion code fails on them. This
commit is something of a rewrite to remedy that; in particular, it parses
the supplied YAML files rather than README.md files. It needs to be looked
at by someone on the Hugging Face side, and a rough illustration follows.
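As a rough illustration of the YAML-driven approach (the file name and keys below are hypothetical — the actual Tatoeba release layout may differ):

```python
import yaml  # PyYAML

# Hypothetical metadata file and keys; the real Tatoeba packages may differ.
with open("released-model.yml") as f:
    metadata = yaml.safe_load(f)

print(metadata.get("source-languages"), metadata.get("target-languages"))
```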
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12176",
"html_url": "https://github.com/huggingface/transformers/pull/12176",
"diff_url": "https://github.com/huggingface/transformers/pull/12176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12176.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12175/comments | https://api.github.com/repos/huggingface/transformers/issues/12175/events | https://github.com/huggingface/transformers/issues/12175 | 921,312,525 | MDU6SXNzdWU5MjEzMTI1MjU= | 12,175 | TPU training is stuck using T5 with PyTorch Lightning | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! We don't have CI running with pytorch lightning so we would recommend opening an issue on their repository.\r\n\r\nDid you try a TPU training with the `Trainer` that comes with `transformers`? It should work fine on 8 cores.",
"Hello @LysandreJik, thanks for replying! \r\n\r\nYes, `🤗/Trainer` works perfectly fine with 8-cores, `t5-small` gets fine-tuned in under 10 minutes on TPU-v3. Nevertheless, it's still unclear whether this issue happens due to some lightning trainer internals or T5 model – `roberta-base` works fine with lightning using 8 TPU cores. \r\n\r\nThe latter makes me think that it might be some T5-specific issue, mightn't be?\r\n\r\nP.S. Lightning Trainer reports, that `lm_head.weight` parameter isn't tied (it seems missing prior to moving the model to the XLA device). Just in case.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I'm fine-tuning the `t5-small` model using PyTorch Lightning on a TPU v3 (Google Colab) with the `imdb` dataset. Training gets stuck with the 8-core setup but works well with a 1-core setup. This seems very odd, since the `roberta-base` model works just fine using all 8 cores.
I've filed a similar issue https://github.com/PyTorchLightning/pytorch-lightning/issues/7984, but it would be great to receive feedback on whether `t5` models are known to work on TPUs with the Lightning trainer.
## To reproduce
Please, use this Google Colab Notebook:
https://colab.research.google.com/drive/1FbWfkho3Otfl19y5ybkrK5Jw_tWNqV-M?usp=sharing
## Expected behavior
`t5` models should work fine with the 8-core TPU training setup.
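(Related to the `lm_head.weight` note in the comments above, a small diagnostic sketch — it assumes the default weight tying of `t5-small` and is not a fix:)

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# With the default tie_word_embeddings=True these should be the same tensor
# *before* the model is moved to the XLA device.
print(model.get_input_embeddings().weight is model.get_output_embeddings().weight)
```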
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12175/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12174/comments | https://api.github.com/repos/huggingface/transformers/issues/12174/events | https://github.com/huggingface/transformers/issues/12174 | 921,311,804 | MDU6SXNzdWU5MjEzMTE4MDQ= | 12,174 | Pretrained XLM model with TLM objective generates nonsensical predictions | {
"login": "cbaziotis",
"id": 5629093,
"node_id": "MDQ6VXNlcjU2MjkwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5629093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbaziotis",
"html_url": "https://github.com/cbaziotis",
"followers_url": "https://api.github.com/users/cbaziotis/followers",
"following_url": "https://api.github.com/users/cbaziotis/following{/other_user}",
"gists_url": "https://api.github.com/users/cbaziotis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbaziotis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbaziotis/subscriptions",
"organizations_url": "https://api.github.com/users/cbaziotis/orgs",
"repos_url": "https://api.github.com/users/cbaziotis/repos",
"events_url": "https://api.github.com/users/cbaziotis/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbaziotis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | Hi, I want to use the [`xlm-mlm-tlm-xnli15-1024`](https://huggingface.co/xlm-mlm-tlm-xnli15-1024) pretrained model, which is the XLM model trained with the auxiliary Translation Language Modeling (TLM) objective.
I want to give a translation pair to the model, mask some words in one of the sentences, and then get the model's predictions for the masked words. See the figure below for reference.

My problem is that the model makes nonsensical predictions, which means that either I am doing something wrong, such as feeding the wrong input, or the model is not loaded properly. Here is a code snippet:
```python
import torch
from transformers import XLMWithLMHeadModel, XLMTokenizer
model_name = "xlm-mlm-tlm-xnli15-1024"
tokenizer = XLMTokenizer.from_pretrained(model_name)
model = XLMWithLMHeadModel.from_pretrained(model_name)
model.eval()
src_lang_id = tokenizer.lang2id["en"] # English
trg_lang_id = tokenizer.lang2id["el"] # Greek
src_text = "I love pasta with tomato sauce!".replace("tomato", tokenizer.mask_token)
trg_text = "Μου αρέσουν τα ζυμαρικά με σάλτσα ντομάτας!"
print(f"{src_text}->{trg_text}")
# get token_ids
src_input_ids = torch.tensor([tokenizer.encode(src_text)])
trg_input_ids = torch.tensor([tokenizer.encode(trg_text)])
src_len = src_input_ids.shape[1]
trg_len = trg_input_ids.shape[1]
# get lang_ids
src_langs = torch.tensor([src_lang_id] * src_len).view(1, -1)
trg_langs = torch.tensor([trg_lang_id] * trg_len).view(1, -1)
# get token_type_ids
src_type = torch.tensor([0] * src_len).view(1, -1)
trg_type = torch.tensor([1] * trg_len).view(1, -1)
input_ids = torch.cat([src_input_ids, trg_input_ids], dim=1)
token_type_ids = torch.cat([src_type, trg_type], dim=1)
lang_ids = torch.cat([src_langs, trg_langs], dim=1)
position_ids = torch.cat([torch.arange(src_len), torch.arange(trg_len)])
# encode and predict
result = model(input_ids,
langs=lang_ids,
position_ids=position_ids.view(1, -1),
token_type_ids=token_type_ids)
# get predictions for masked token
masked_index = torch.where(input_ids == tokenizer.mask_token_id)[1].tolist()[0]
result = result[0][:, masked_index].topk(5).indices
result = result.tolist()[0]
print(f"Predictions:", tokenizer.decode(result))
```
Console output:
```
I love pasta with <special1> sauce!->Μου αρέσουν τα ζυμαρικά με σάλτσα ντομάτας!
Predictions: with the 'i'my
```
I tried omitting some of the arguments to the model, changing the example sentence-pair and the languages, but I always get weird predictions. Am I doing something wrong?
Important: I tried downgrading to `transformers==2.9.0` to make this error message go away:
```
Some weights of XLMWithLMHeadModel were not initialized from the model checkpoint at xlm-mlm-tlm-xnli15-1024 and are newly initialized: ['transformer.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
However, I noticed that even in that version, the predictions are the same, which means that there is something else going on.
I don't want to train the model on another task. I want to use the pretrained model to make predictions in exactly the same way it was pretrained.
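As a sanity check while debugging this, here is a minimal MLM-only variant of the snippet above (single sentence, no TLM-style pair inputs); if even this produces nonsense, the problem is not in the translation-pair formatting:
```python
import torch
from transformers import XLMWithLMHeadModel, XLMTokenizer

model_name = "xlm-mlm-tlm-xnli15-1024"
tokenizer = XLMTokenizer.from_pretrained(model_name)
model = XLMWithLMHeadModel.from_pretrained(model_name)
model.eval()

# single English sentence, no TLM-specific inputs
text = "I love pasta with tomato sauce!".replace("tomato", tokenizer.mask_token)
input_ids = torch.tensor([tokenizer.encode(text)])
langs = torch.full_like(input_ids, tokenizer.lang2id["en"])

with torch.no_grad():
    logits = model(input_ids, langs=langs)[0]

# predictions for the single masked position
masked_index = torch.where(input_ids == tokenizer.mask_token_id)[1].item()
print("Predictions:", tokenizer.decode(logits[0, masked_index].topk(5).indices.tolist()))
```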
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12174/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12173/comments | https://api.github.com/repos/huggingface/transformers/issues/12173/events | https://github.com/huggingface/transformers/pull/12173 | 921,266,049 | MDExOlB1bGxSZXF1ZXN0NjcwMjkzMjY2 | 12,173 | Use a released version of optax rather than installing from Git. | {
"login": "avital",
"id": 37586,
"node_id": "MDQ6VXNlcjM3NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avital",
"html_url": "https://github.com/avital",
"followers_url": "https://api.github.com/users/avital/followers",
"following_url": "https://api.github.com/users/avital/following{/other_user}",
"gists_url": "https://api.github.com/users/avital/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avital/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avital/subscriptions",
"organizations_url": "https://api.github.com/users/avital/orgs",
"repos_url": "https://api.github.com/users/avital/repos",
"events_url": "https://api.github.com/users/avital/events{/privacy}",
"received_events_url": "https://api.github.com/users/avital/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | (We were using a new API that hadn't been released until a few weeks
ago)
# What does this PR do?
Update the version of Optax we depend on in the Flax examples' requirements.txt to the latest released version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? <-- I installed dependencies via requirements.txt and then ran run_flax_glue.py.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12173/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12173",
"html_url": "https://github.com/huggingface/transformers/pull/12173",
"diff_url": "https://github.com/huggingface/transformers/pull/12173.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12173.patch",
"merged_at": 1623755571000
} |
https://api.github.com/repos/huggingface/transformers/issues/12172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12172/comments | https://api.github.com/repos/huggingface/transformers/issues/12172/events | https://github.com/huggingface/transformers/issues/12172 | 921,250,808 | MDU6SXNzdWU5MjEyNTA4MDg= | 12,172 | How can we modify the MM-IMDB model for sequence to sequence generation tasks? | {
"login": "abhaygargab",
"id": 27979479,
"node_id": "MDQ6VXNlcjI3OTc5NDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/27979479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhaygargab",
"html_url": "https://github.com/abhaygargab",
"followers_url": "https://api.github.com/users/abhaygargab/followers",
"following_url": "https://api.github.com/users/abhaygargab/following{/other_user}",
"gists_url": "https://api.github.com/users/abhaygargab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhaygargab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhaygargab/subscriptions",
"organizations_url": "https://api.github.com/users/abhaygargab/orgs",
"repos_url": "https://api.github.com/users/abhaygargab/repos",
"events_url": "https://api.github.com/users/abhaygargab/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhaygargab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\ncould you please ask this question on the [forum](https://discuss.huggingface.co/) rather than here? We like to keep Github issues for bugs/feature requests.\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | Hi all, thank you so much for the wonderful service.
I have some doubts regarding the training details for the MM-IMDb dataset.
Are the image encoder's and tokenizer's embeddings fine-tuned during training on the MM-IMDb dataset? If not, can you suggest a way to do it or point me to any material that would help?
Is there a way to modify the code so that the model's pre-trained weights can be used for sequence-to-sequence generation tasks instead of classification?
Thank You.. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12171/comments | https://api.github.com/repos/huggingface/transformers/issues/12171/events | https://github.com/huggingface/transformers/pull/12171 | 921,221,255 | MDExOlB1bGxSZXF1ZXN0NjcwMjU0OTMz | 12,171 | [Flax generate] Add params to generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
This PR adds an optional `params` input to the `generate()` function.
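For illustration, a minimal usage sketch (the model and prompt are chosen arbitrarily here, and `pad_token_id` is passed explicitly since GPT-2 has none) of threading externally-held parameters through `generate()`:
```python
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")

# e.g. parameters kept in an optimizer/training state rather than on the model
params = model.params

input_ids = tokenizer("Hello, my dog is", return_tensors="np").input_ids
output = model.generate(
    input_ids, max_length=20, params=params, pad_token_id=model.config.eos_token_id
)
print(tokenizer.batch_decode(output.sequences, skip_special_tokens=True))
```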
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
"url": "https://api.github.com/repos/huggingface/transformers/issues/12171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12171",
"html_url": "https://github.com/huggingface/transformers/pull/12171",
"diff_url": "https://github.com/huggingface/transformers/pull/12171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12171.patch",
"merged_at": 1623754212000
} |
https://api.github.com/repos/huggingface/transformers/issues/12170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12170/comments | https://api.github.com/repos/huggingface/transformers/issues/12170/events | https://github.com/huggingface/transformers/issues/12170 | 921,201,286 | MDU6SXNzdWU5MjEyMDEyODY= | 12,170 | Vision Transformer (ViT) feature vector example (not classification) | {
"login": "raulcarlomagno",
"id": 2282315,
"node_id": "MDQ6VXNlcjIyODIzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2282315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raulcarlomagno",
"html_url": "https://github.com/raulcarlomagno",
"followers_url": "https://api.github.com/users/raulcarlomagno/followers",
"following_url": "https://api.github.com/users/raulcarlomagno/following{/other_user}",
"gists_url": "https://api.github.com/users/raulcarlomagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raulcarlomagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raulcarlomagno/subscriptions",
"organizations_url": "https://api.github.com/users/raulcarlomagno/orgs",
"repos_url": "https://api.github.com/users/raulcarlomagno/repos",
"events_url": "https://api.github.com/users/raulcarlomagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/raulcarlomagno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In HuggingFace Transformers, models typically output a dictionary. You can access the feature vector by getting the `pooler_output` key of that dictionary (assuming you're using `ViTModel`). It's a tensor of shape `(batch_size, hidden_size)`, so in case you're only providing a single image, and you're using the base-sized model, this will be a tensor of shape `(1, 768)`. \r\n\r\nHere's an example:\r\n\r\n```\r\nfrom transformers import ViTFeatureExtractor, ViTModel\r\nfrom PIL import Image\r\nimport requests\r\n\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\nfeature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')\r\nmodel = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')\r\n\r\ninputs = feature_extractor(images=image, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nfeature_vector = outputs.pooler_output\r\n```",
"thank you, that is what i was looking for !\r\nthank you",
"@NielsRogge is it always `pooler_output` key that contains the feature vector for all image transformers (such as CLIP, DeiT, VisualBERT, DETR)?"
] | 1,623 | 1,638 | 1,623 | NONE | null | # 🚀 Feature request
## Motivation
I would like to see an example of using the Vision Transformer just for feature extraction, i.e. getting the feature vector before the classification head, as can be done with TensorFlow Hub:
https://www.tensorflow.org/hub/common_signatures/images?hl=en#image_feature_vector
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12169/comments | https://api.github.com/repos/huggingface/transformers/issues/12169/events | https://github.com/huggingface/transformers/issues/12169 | 921,150,416 | MDU6SXNzdWU5MjExNTA0MTY= | 12,169 | Allow setting permissions of downloaded models (via envvar) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Would PR https://github.com/huggingface/transformers/pull/11119 help with your use-case?",
"> Would PR #11119 help with your use-case?\r\n\r\nIndeed, thanks!"
] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | In our research group we all have user accounts on a server where we each run our own experiments (Ubuntu behind the scenes). By default, everyone downloads `transformers` models to their own home directory. With, say, 20 researchers, that might mean 20 duplicates of "bert-base-cased" on the server (and of many other models). This is not efficient at all and takes up more room than we'd like.
We have tried creating a 777 directory as a global TRANSFORMERS_CACHE, but that does not work: if I download a model, some of the downloaded files get read/write access only for me as the creator of the file. This means that others cannot use the model (permission denied).
Our suggestion or request would be to have an option when downloading a model to also set its permissions for all downloaded files. Preferably adjustable via a (system-wide) environment variable. This would probably need to be added in file_utils.py, similar to other options like "local_files_only".
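For illustration only, a rough sketch of what we have in mind; the `TRANSFORMERS_CACHE_FILE_MODE` environment variable below is hypothetical (it does not exist in `transformers` today) and would be applied to each file right after download:
```python
import os

def apply_cache_permissions(path):
    # hypothetical envvar holding an octal mode, e.g. "664" for group read/write
    mode = os.environ.get("TRANSFORMERS_CACHE_FILE_MODE")
    if mode is not None:
        os.chmod(path, int(mode, 8))

# would be called from the download routine once the file is written:
# apply_cache_permissions(cache_path)
```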
I currently do not have time to work on this myself, but I am open to any feedback of course. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12169/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12168/comments | https://api.github.com/repos/huggingface/transformers/issues/12168/events | https://github.com/huggingface/transformers/issues/12168 | 921,095,095 | MDU6SXNzdWU5MjEwOTUwOTU= | 12,168 | Special tokens not tokenized properly | {
"login": "manueltonneau",
"id": 29440170,
"node_id": "MDQ6VXNlcjI5NDQwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueltonneau",
"html_url": "https://github.com/manueltonneau",
"followers_url": "https://api.github.com/users/manueltonneau/followers",
"following_url": "https://api.github.com/users/manueltonneau/following{/other_user}",
"gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions",
"organizations_url": "https://api.github.com/users/manueltonneau/orgs",
"repos_url": "https://api.github.com/users/manueltonneau/repos",
"events_url": "https://api.github.com/users/manueltonneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueltonneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! What is your tokenizer? Is it a WordPiece-based tokenizer, or a Byte-level BPE-based tokenizer like the original one from RoBERTa?",
"Hi @LysandreJik, thanks for your reply and sorry that I'm just seeing this now. My tokenizer is a byte-level BPE-based tokenizer. ",
"Hi @LysandreJik, let me know if you have a solution for this or if you need more info, thanks a lot in advance :) ",
"Hi,\r\n\r\nHow did you add the additional special tokens? So you start from a pre-trained RoBERTa, then added additional special tokens and further pre-trained on a corpus?\r\n\r\nDid you add these additional special tokens using the tokenizers library? Normally, one can add additional tokens as follows (based on https://github.com/huggingface/tokenizers/issues/247#issuecomment-675458087):\r\n\r\n```\r\nspecial_tokens_dict = {'additional_special_tokens': ['[C1]','[C2]','[C3]','[C4]']}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\n```\r\n\r\nHowever, printing the following:\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained('manueltonneau/twibert-lowercase-50272') \r\nprint(tokenizer.additional_special_tokens)\r\n```\r\nReturns `[]`. So you can solve it by doing:\r\n\r\n```\r\nspecial_tokens_dict = {'additional_special_tokens': ['<hashtag>']}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\n```\r\nWhen I then test your example:\r\n```\r\ntokenizer.tokenize('<hashtag>')\r\n```\r\n\r\nI get: `['<hashtag>']`.\r\n\r\nAnd when doing:\r\n\r\n```\r\ntokenizer.convert_tokens_to_ids(tokenizer.tokenize(\"<hashtag>\", add_special_tokens=True))\r\n```\r\n\r\nI get: `[0, 7, 2]`.",
"Awesome @NielsRogge, thanks a lot! Will test this and get back to you/close if solved. ",
">How did you add the additional special tokens? So you start from a pre-trained RoBERTa, then added additional special tokens and further pre-trained on a corpus?\r\n\r\nI created a new vocab with the tokenizers module for which I added new special tokens. Here is the code I use below:\r\n\r\n```\r\n# Initialize a tokenizer\r\n tokenizer = Tokenizer(models.BPE())\r\n\r\n # Customize pre-tokenization and decoding\r\n tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)\r\n tokenizer.decoder = decoders.ByteLevel()\r\n tokenizer.post_processor = processors.ByteLevel(trim_offsets=True)\r\n\r\n # And then train\r\n trainer = trainers.BpeTrainer(vocab_size=args.vocab_size, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n \"@USER\",\r\n \"HTTPURL\",\r\n \"<hashtag>\",\r\n \"</hashtag>\"\r\n ], show_progress=True)\r\n files = [os.path.join(args.corpus_dir, filename) for filename in os.listdir(args.corpus_dir)]\r\n i = 0\r\n start_time = time.time()\r\n for file in files:\r\n print(f'Starting training on {file}')\r\n tokenizer.train([file], trainer=trainer)\r\n i = i + 1\r\n print(f'{i} files done out of {len(files)} files')\r\n print(f'Time elapsed: {time.time() - start_time} seconds')\r\n\r\n # And Save it\r\n output_dir = f'/scratch/mt4493/twitter_labor/twitter-labor-data/data/pretraining/US/vocab_files/{args.vocab_size}/{args.vocab_name}'\r\n if not os.path.exists(output_dir):\r\n os.makedirs(output_dir)\r\n tokenizer.model.save(output_dir)\r\n```",
"Works fine, thanks again!"
] | 1,623 | 1,672 | 1,625 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Python version: 3.8.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Hi,
I have recently further pretrained a RoBERTa model with fairseq, using a custom vocabulary trained with the tokenizers module. After converting the fairseq model to PyTorch, I uploaded all my model-related files [here](https://huggingface.co/manueltonneau/twibert-lowercase-50272/tree/main).
When loading the tokenizer, I noticed that the special tokens are not tokenized properly.
## To reproduce
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('manueltonneau/twibert-lowercase-50272')
tokenizer.tokenize('<mask>')
Out[7]: ['<mask>']
tokenizer.tokenize('<hashtag>')
Out[8]: ['hashtag']
tokenizer.encode('<hashtag>')
Out[3]: [0, 23958, 2]
```
## Expected behavior
Since `<hashtag>` is a special token in the vocabulary with ID 7 (see [here](https://huggingface.co/manueltonneau/twibert-lowercase-50272/blob/main/vocab.json)), the last output should be `[0, 7, 2]`. `<hashtag>`, including the '<>', should also be recognized as a single token.
## Potential explanation
When looking at the files from [a similar model](https://huggingface.co/vinai/bertweet-base), it seems that the vocab is in txt format and they also have the `bpe.codes` file, which I don't have. Could that be the issue? And if so, how do I convert my files to this format?
For vocab.txt, I have already found your lengthy explanation [here](https://github.com/huggingface/transformers/issues/1083), thanks for this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12167/comments | https://api.github.com/repos/huggingface/transformers/issues/12167/events | https://github.com/huggingface/transformers/issues/12167 | 921,081,237 | MDU6SXNzdWU5MjEwODEyMzc= | 12,167 | ViT for resolution beyond 224x224 support | {
"login": "empty-id",
"id": 56990007,
"node_id": "MDQ6VXNlcjU2OTkwMDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/56990007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/empty-id",
"html_url": "https://github.com/empty-id",
"followers_url": "https://api.github.com/users/empty-id/followers",
"following_url": "https://api.github.com/users/empty-id/following{/other_user}",
"gists_url": "https://api.github.com/users/empty-id/gists{/gist_id}",
"starred_url": "https://api.github.com/users/empty-id/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/empty-id/subscriptions",
"organizations_url": "https://api.github.com/users/empty-id/orgs",
"repos_url": "https://api.github.com/users/empty-id/repos",
"events_url": "https://api.github.com/users/empty-id/events{/privacy}",
"received_events_url": "https://api.github.com/users/empty-id/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One would need to interpolate the pre-trained position embeddings. You can see how this is done in the original implementation [here](https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224).\r\n\r\nYou can find a PyTorch implementation of that [here](https://github.com/yitu-opensource/T2T-ViT/blob/964796c75445aa5163766d1caa20755f67b0da6f/utils.py#L27) (taken from the T2T-ViT implementation), where they show how you can go from 224 to 384. The pre-trained position embeddings are of shape (1, 197, 768) - there are 196 \"positions\" in an image of 224x224 with a patch size of 16x16 as (224/16)^2 = 196 and we add 1 for the [CLS] token - and suppose you want to fine-tune at resolution of 64x64 with a patch size of 8, then the number of position embeddings is (64/8)^2 + 1 = 65. In that case, the position embeddings during fine-tuning are of shape (1, 65, 768), and you can use that function to map the pre-trained position embeddings from shape (1, 197, 768) to (1, 65, 768).",
"Thank you for your reply! Actually, I know how to interpolate the pos embedding. But I don't know how to do it seamlessly with huggingface ViTModel. Is it necessary to modify the internal code?",
"When I change the image size of ViTModel, I cannot even load it from a pretrained checkpoint.\r\n\r\n```python\r\nfrom transformers import ViTModel\r\nmodel = ViTModel.from_pretrained('vit-base-patch16-224', image_size=64)\r\n```\r\n\r\nThis raises an error due to the mismatch of position embedding size.",
"I think you first need to load the `state_dict` of the original model, like so:\r\n\r\n```\r\nfrom transformers import ViTModel\r\n\r\nmodel = ViTModel.from_pretrained('google/vit-base-patch16-224') # load pretrained model\r\nstate_dict = model.state_dict()\r\n```\r\n\r\nThen, initialize a new `ViTModel` with custom `image_size`, update the position embeddings of the `state_dict` and load the new model with that `state_dict`:\r\n\r\n```\r\nfrom transformers import ViTConfig\r\n\r\nconfig = ViTConfig.from_pretrained('google/vit-base-patch16-224', image_size=64)\r\n# new model with custom image_size\r\nmodel = ViTModel(config=config)\r\n\r\n# update state_dict\r\nnew_state_dict = state_dict.copy()\r\nold_posemb = new_state_dict['embeddings.position_embeddings']\r\nif model.embeddings.position_embeddings.shape != old_posemb.shape: # need to resize the position embedding by interpolate\r\n new_posemb = resize_pos_embed(old_posemb, model.embeddings.position_embeddings) # use PyTorch function linked above\r\n new_state_dict['embeddings.position_embeddings'] = new_posemb\r\n\r\n# equip new model with state_dict\r\nmodel.load_state_dict(new_state_dict)\r\n```",
"Wow, you are so smart! That's awesome!",
"Thanks NielsRogge for pointing me here, very helpful resource. Just another quick question, where can we specify the patch size we would like ViT to extract from images? For instance, on CIFAR10 32x32 I wouldn't like to use 16x16 patch size, but maybe something like 8x8 or 4x4 would be more appropriate."
] | 1,623 | 1,631 | 1,623 | NONE | null | When the resolution changes, the size of the position embeddings of ViTModel also changes, which makes the `from_pretrained` method fail.
So, how can I use ViT with a different resolution like 64x64? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12167/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12166/comments | https://api.github.com/repos/huggingface/transformers/issues/12166/events | https://github.com/huggingface/transformers/pull/12166 | 920,970,190 | MDExOlB1bGxSZXF1ZXN0NjcwMDQwMjA2 | 12,166 | [testing] ensure concurrent pytest workers use a unique port for torch.dist | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | As discussed at https://github.com/huggingface/transformers/issues/12164, concurrent tests may currently try to use the same port when running `-m torch.distributed.launch` and thus fail with a `RuntimeError: Address already in use`. This PR solves the problem by assigning a unique port to each worker when run under `pytest-xdist` with `-n 2` or higher.
It also adds 2 helper `testing_utils.py` functions:
- `pytest_xdist_worker_id`
- `get_torch_dist_unique_port`
to accomplish that.
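A rough sketch of the idea (not necessarily the exact implementation): derive an integer worker id from the `PYTEST_XDIST_WORKER` environment variable (`gw0`, `gw1`, ... under `pytest-xdist`, unset otherwise) and offset the default `torch.distributed.launch` port by it:
```python
import os

def pytest_xdist_worker_id():
    # "gw0" -> 0, "gw1" -> 1, ...; 0 when not running under pytest-xdist
    worker = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
    return int(worker.replace("gw", ""))

def get_torch_dist_unique_port(base_port=29500):
    # give each concurrent pytest worker its own --master_port
    return base_port + pytest_xdist_worker_id()

# usage: f"-m torch.distributed.launch --nproc_per_node=2 --master_port={get_torch_dist_unique_port()}"
```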
Actually, I'm not 100% sure that the original failure was caused by this problem, as it could also have been caused by some runaway test that still holds the port. If that is the case I will work further on this helper function to actually test that the port it returns is free, and will have to think of some extra solutions, because checking that a port is free and then binding it is not atomic, so there could be a race condition leading to the same problem.
But this is an important fix on its own as long as we plan to continue using pytest-xdist.
Fixes: https://github.com/huggingface/transformers/issues/12164
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12166/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12166",
"html_url": "https://github.com/huggingface/transformers/pull/12166",
"diff_url": "https://github.com/huggingface/transformers/pull/12166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12166.patch",
"merged_at": 1623780780000
} |
https://api.github.com/repos/huggingface/transformers/issues/12165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12165/comments | https://api.github.com/repos/huggingface/transformers/issues/12165/events | https://github.com/huggingface/transformers/issues/12165 | 920,906,056 | MDU6SXNzdWU5MjA5MDYwNTY= | 12,165 | Documentation for tiny-gpt2 in transformers/examples/pytorch | {
"login": "tginart",
"id": 11379648,
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tginart",
"html_url": "https://github.com/tginart",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"repos_url": "https://api.github.com/users/tginart/repos",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is randomly initialized and trained for 2 steps. So basically it can only be used for prototyping."
] | 1,623 | 1,623 | 1,623 | NONE | null | # 🚀 Documentation request
The **tiny-gpt2** transformer model is great for fast prototyping, but it seems sparsely documented on the Hugging Face Hub: https://huggingface.co/sshleifer/tiny-gpt2
## Motivation
It would be helpful if users knew basic info about how tiny-gpt2 was trained. Is it the same corpus as the standard gpt2? Was it distilled from a larger model or trained from scratch? Etc.
## Your contribution
As I did not train tiny-gpt2, I don't know any info about it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12165/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12164/comments | https://api.github.com/repos/huggingface/transformers/issues/12164/events | https://github.com/huggingface/transformers/issues/12164 | 920,895,018 | MDU6SXNzdWU5MjA4OTUwMTg= | 12,164 | [testing] concurrent dist tests fail when using the same master_port | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Would it be helpful to extract the tests that use the same port with a custom decorator, and run them in a separate `run` directive with `-n 1`?",
"That is a possibility too, see the simple proposed solution https://github.com/huggingface/transformers/pull/12166 - perhaps to try first."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | The failing `tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_ddp` on the multi-gpu slow runner happens because of the `-n 2` concurrency on push (it's `-n 1` on scheduled): we end up with 2 distributed tests running at the same time, both using the default port of `torch.distributed.launch`, and the latter test fails with "Address already in use".
I started a discussion at https://github.com/pytorch/pytorch/issues/59978 hoping to expose the `init_method=file://` through the CLI, but alas it won't help since the test suite needs to support the older pytorch even if it's exposed.
Furthermore, pytorch-1.9.1 or perhaps higher will have a different way using FileStore as a "rendezvous endpoint" in `torch.distributed.run` - a replacement for `torch.distributed.launch`.
Meanwhile it was proposed to use TorchElastic, which requires launching a server https://pytorch.org/elastic/0.2.2/quickstart.html before the test suite starts and somehow ensuring it gets killed at the end. This looks very error-prone to me, especially when the test suite fails.
But I'm not yet sure how to come up with an algorithm that gives each test client a unique unused port, other than writing yet another server that does the port management on demand.
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12163/comments | https://api.github.com/repos/huggingface/transformers/issues/12163/events | https://github.com/huggingface/transformers/issues/12163 | 920,819,951 | MDU6SXNzdWU5MjA4MTk5NTE= | 12,163 | Missing code for predicting custom labels in Bert | {
"login": "gwc4github",
"id": 3164663,
"node_id": "MDQ6VXNlcjMxNjQ2NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gwc4github",
"html_url": "https://github.com/gwc4github",
"followers_url": "https://api.github.com/users/gwc4github/followers",
"following_url": "https://api.github.com/users/gwc4github/following{/other_user}",
"gists_url": "https://api.github.com/users/gwc4github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gwc4github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gwc4github/subscriptions",
"organizations_url": "https://api.github.com/users/gwc4github/orgs",
"repos_url": "https://api.github.com/users/gwc4github/repos",
"events_url": "https://api.github.com/users/gwc4github/events{/privacy}",
"received_events_url": "https://api.github.com/users/gwc4github/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nTokenizers in HuggingFace Transformers don't take care of padding labels (this should be done by the user). You can only provide text to a tokenizer, and it will turn them into `input_ids`, `attention_mask` and `token_type_ids`. The `tokenize_and_align_labels` function will take care of labeling each token. ",
"Thanks for this note @NielsRogge and sorry for the delay getting back to you. Lots to do here.\r\nWe will make the change in our code but it seems like this would be a good feature for the framework and the code is done.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge can we at least get a better error message for this?",
"hello how can i find acceptable labels for train_data to fine tuning a pretrained transformer sentiment model>?"
] | 1,623 | 1,686 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: MacOS 10.15.7
- Python version: 3.8
- PyTorch version (GPU?): 1.8.1 No GPU
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
NER custom fine-tuned.
## To reproduce
Steps to reproduce the behavior:
1. Create a dataset and load it.
2. Set your features with new labels
3. Load the bert-base-cased config, tokenizer, and model
4. Tokenize the data
5. Create a trainer and start it
```python
from datasets import load_dataset
from transformers import AutoConfig, AutoTokenizer, AutoModelForTokenClassification, Trainer

dataset = load_dataset('json', data_files=datasetPath + pathDel + datasetName, split='train')
# Dataset column that serves as model's input
text_column_name = "tokens"
# Dataset column that serves as fine-tuning labels (ner_tags, pos_tags, or chunk_tags in our case)
label_column_name = "ner_tags"
# Define variables used by tokenize_and_align_labels fn
column_names = dataset.column_names # NOT USED (GWC)
features = dataset.features  # feature definitions, including the label names
label_list = features[label_column_name].feature.names
label_to_id = {label_list[i]: i for i in range(len(label_list))}
# Need to tell the model how many labels it's supposed to predict
num_labels = len(label_list)
model_name = 'bert-base-cased'
config = AutoConfig.from_pretrained(model_name, num_labels=num_labels)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, padding=True, truncation=True) # GWC CHANGED added padding=True and truncation=True
model = AutoModelForTokenClassification.from_pretrained(model_name, config=config)
padding = True
def tokenize_and_align_labels(examples):
tokenized_inputs = tokenizer(
examples[text_column_name],
padding=padding,
truncation=True,
# We use this argument because the texts in our dataset are lists of words (with a label for each word).
is_split_into_words=True,
)
labels = []
for i, label in enumerate(examples[label_column_name]):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label_to_id[label[word_idx]])
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label_to_id[label[word_idx]])
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
train_dataset = dataset.map(
tokenize_and_align_labels,
batched=True,
)
trainer = Trainer(
model=model,
train_dataset=train_dataset,
tokenizer=tokenizer
)
print('Training dataset')
trainer.train()
```
## Expected behavior
I am expecting it to train the model on our custom data. It was failing during training and I found the bug and fixed it. So mostly I am just trying to report the bug.
The bug is in `transformers/tokenization_utils_base.py` at line 2990. In the `_pad()` method, the `if self.padding_side == "right":` and `if self.padding_side == "left":` branches are both missing a nested `if` for labels. (They have one for `token_type_ids` and `special_tokens_mask`.)
You should add the section for both the left and right sides, but here is the change I made for the "right" side:
```python
if needs_to_be_padded:
    difference = max_length - len(required_input)

    if self.padding_side == "right":
        if return_attention_mask:
            encoded_inputs["attention_mask"] = [1] * len(required_input) + [0] * difference
        if "token_type_ids" in encoded_inputs:
            encoded_inputs["token_type_ids"] = (
                encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference
            )
        if "labels" in encoded_inputs:
            encoded_inputs["labels"] = (
                encoded_inputs["labels"] + [-100] * difference
            )
        if "special_tokens_mask" in encoded_inputs:
            encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference
        encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference
.....
```
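For anyone who would rather not patch the library: an alternative that achieves the same label padding is to tokenize without padding and let a data collator pad dynamically; `DataCollatorForTokenClassification` pads `labels` with `-100` by default (a sketch, reusing the objects from the script above):
```python
from transformers import DataCollatorForTokenClassification, Trainer

# pads input_ids/attention_mask via the tokenizer and pads labels with -100
data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```
(For this to work, drop `padding=True` from `tokenize_and_align_labels` so that the padding happens in the collator.)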
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12162/comments | https://api.github.com/repos/huggingface/transformers/issues/12162/events | https://github.com/huggingface/transformers/pull/12162 | 920,762,663 | MDExOlB1bGxSZXF1ZXN0NjY5ODYzNjI0 | 12,162 | Add video links to the documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR leverages some videos of the course and adds them to our documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12162",
"html_url": "https://github.com/huggingface/transformers/pull/12162",
"diff_url": "https://github.com/huggingface/transformers/pull/12162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12162.patch",
"merged_at": 1623753457000
} |
https://api.github.com/repos/huggingface/transformers/issues/12161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12161/comments | https://api.github.com/repos/huggingface/transformers/issues/12161/events | https://github.com/huggingface/transformers/pull/12161 | 920,709,515 | MDExOlB1bGxSZXF1ZXN0NjY5ODE4MjY2 | 12,161 | consistent nn. and nn.functional: part 5 docs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | Continuing https://github.com/huggingface/transformers/pull/12124 this PR takes care of `docs` - had to do a bit of extra filtering to not break the images:
```
# deal with torch.nn
perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn docs`
find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
make fixup
```
and one manual tweak. Docs are hard to rewrite automatically and there is no validation, so I had to carefully check each diff.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12161/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12161",
"html_url": "https://github.com/huggingface/transformers/pull/12161",
"diff_url": "https://github.com/huggingface/transformers/pull/12161.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12161.patch",
"merged_at": 1623702873000
} |
https://api.github.com/repos/huggingface/transformers/issues/12160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12160/comments | https://api.github.com/repos/huggingface/transformers/issues/12160/events | https://github.com/huggingface/transformers/pull/12160 | 920,705,079 | MDExOlB1bGxSZXF1ZXN0NjY5ODE0NDk0 | 12,160 | [Jax Slow Circle CI] Don't close PR | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the tip @stas00 ! \r\n\r\nI'll push to the PR every 24h to have a background circle ci test - eventually we should think about a better solution here",
"You can crontab an empty git push to trigger CI, e.g.:\r\n```\r\ncd transformers-flax-cron\r\ngit commit --allow-empty -m \"Trigger CI\"\r\ngit push\r\n```",
"I also need to pull from master regularly - otherwise the tests are always run on the same code no? ",
"Heh, yes of course! That was a blunder suggestion on my part since after rebasing you will always have something to push and if there is nothing to push then there is nothing to test in any case. "
] | 1,623 | 1,626 | 1,626 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12160/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12160",
"html_url": "https://github.com/huggingface/transformers/pull/12160",
"diff_url": "https://github.com/huggingface/transformers/pull/12160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12160.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12159/comments | https://api.github.com/repos/huggingface/transformers/issues/12159/events | https://github.com/huggingface/transformers/issues/12159 | 920,701,728 | MDU6SXNzdWU5MjA3MDE3Mjg= | 12,159 | Can't run QA fine-tune for bert/albert in distributed way | {
"login": "yl-to",
"id": 23205976,
"node_id": "MDQ6VXNlcjIzMjA1OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23205976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yl-to",
"html_url": "https://github.com/yl-to",
"followers_url": "https://api.github.com/users/yl-to/followers",
"following_url": "https://api.github.com/users/yl-to/following{/other_user}",
"gists_url": "https://api.github.com/users/yl-to/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yl-to/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yl-to/subscriptions",
"organizations_url": "https://api.github.com/users/yl-to/orgs",
"repos_url": "https://api.github.com/users/yl-to/repos",
"events_url": "https://api.github.com/users/yl-to/events{/privacy}",
"received_events_url": "https://api.github.com/users/yl-to/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger @philschmid ",
"Could you confirm #11872 fixes it?",
"> Could you confirm #11872 fixes it?\r\n\r\nyeah, confirmed, closing issue."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: HEAD detached at v4.6.1
- Platform: Docker, AWS
- Python version: Python 3.8.5
- PyTorch version (GPU?): 1.8.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Just run:
```
python -m torch.distributed.launch --nproc_per_node=8 run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 1 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./new_out \
--max_steps 100 \
--per_device_eval_batch_size=3 \
--per_device_train_batch_size=3 \
--cache_dir .
```
Got error as below:
```
[INFO|trainer.py:2115] 2021-06-14 19:01:08,718 >> ***** Running Evaluation *****
[INFO|trainer.py:2117] 2021-06-14 19:01:08,718 >> Num examples = 10784
[INFO|trainer.py:2120] 2021-06-14 19:01:08,718 >> Batch size = 3
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
File "run_qa.py", line 622, in <module>
File "run_qa.py", line 622, in <module>
File "run_qa.py", line 622, in <module>
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
main()
File "run_qa.py", line 581, in main
main()main()main()
File "run_qa.py", line 581, in main
File "run_qa.py", line 581, in main
main() File "run_qa.py", line 581, in main
File "run_qa.py", line 581, in main
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
metrics = trainer.evaluate()Traceback (most recent call last):
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
File "run_qa.py", line 622, in <module>
metrics = trainer.evaluate()metrics = trainer.evaluate() output = eval_loop(
main()
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
File "run_qa.py", line 581, in main
metrics = trainer.evaluate()
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
metrics = trainer.evaluate()
output = eval_loop( File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
output = eval_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
main()
output = eval_loop( File "run_qa.py", line 581, in main
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
output = eval_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
metrics = trainer.evaluate()
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
output = eval_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
main()
File "run_qa.py", line 581, in main
metrics = trainer.evaluate()
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
output = eval_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
logits = self._nested_gather(logits)
metrics = trainer.evaluate() File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate
logits = self._nested_gather(logits)output = eval_loop(
logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
logits = self._nested_gather(logits)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
logits = self._nested_gather(logits)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
logits = self._nested_gather(logits)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
logits = self._nested_gather(logits)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
tensors = distributed_concat(tensors)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) tensors = distributed_concat(tensors)
tensors = distributed_concat(tensors)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
logits = self._nested_gather(logits)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
tensors = distributed_concat(tensors)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
dist.all_gather(output_tensors, tensor)tensors = distributed_concat(tensors)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)tensors = distributed_concat(tensors)
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
tensors = distributed_concat(tensors)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
dist.all_gather(output_tensors, tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
dist.all_gather(output_tensors, tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
dist.all_gather(output_tensors, tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
dist.all_gather(output_tensors, tensor)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
work = default_pg.allgather([tensor_list], [tensor])return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
RuntimeError File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
: Tensors must be non-overlapping and dense
dist.all_gather(output_tensors, tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
dist.all_gather(output_tensors, tensor)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
tensors = distributed_concat(tensors)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
work = default_pg.allgather([tensor_list], [tensor])
work = default_pg.allgather([tensor_list], [tensor])RuntimeError
: work = default_pg.allgather([tensor_list], [tensor])Tensors must be non-overlapping and dense
RuntimeError
: Tensors must be non-overlapping and dense
RuntimeError: Tensors must be non-overlapping and densereturn type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
work = default_pg.allgather([tensor_list], [tensor])
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
RuntimeError File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
: Tensors must be non-overlapping and dense
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be non-overlapping and dense
dist.all_gather(output_tensors, tensor)
work = default_pg.allgather([tensor_list], [tensor]) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather
RuntimeError: Tensors must be non-overlapping and dense
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be non-overlapping and dense
Killing subprocess 22340
Killing subprocess 22341
Killing subprocess 22342
Killing subprocess 22343
Killing subprocess 22344
Killing subprocess 22345
Killing subprocess 22346
Killing subprocess 22347
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_qa.py', '--local_rank=7', '--model_name_or_path', 'bert-large-uncased-whole-word-masking', '--dataset_name', 'squad', '--do_train', '--do_eval', '--learning_rate', '3e-5', '--num_train_epochs', '1', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', './new_out', '--max_steps', '100', '--per_device_eval_batch_size=3', '--per_device_train_batch_size=3', '--cache_dir', '.']' returned non-zero exit status 1.
```
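For anyone hitting the same `Tensors must be non-overlapping and dense` failure: the usual workaround is to call `.contiguous()` before `all_gather`, since sliced or strided views are not dense. Below is a minimal sketch of that idea — illustrative only, the helper name is made up, and the actual fix landed in the PR referenced in the comments:
```python
import torch
import torch.distributed as dist

def gather_contiguous(tensor: torch.Tensor) -> torch.Tensor:
    # slices such as start_logits[:, 0] can be non-contiguous views;
    # all_gather requires dense, non-overlapping storage
    tensor = tensor.contiguous()
    output = [torch.empty_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(output, tensor)
    return torch.cat(output, dim=0)
```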
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12158/comments | https://api.github.com/repos/huggingface/transformers/issues/12158/events | https://github.com/huggingface/transformers/issues/12158 | 920,698,473 | MDU6SXNzdWU5MjA2OTg0NzM= | 12,158 | Pretraining for TFWav2Vec2 | {
"login": "will-rice",
"id": 25072137,
"node_id": "MDQ6VXNlcjI1MDcyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-rice",
"html_url": "https://github.com/will-rice",
"followers_url": "https://api.github.com/users/will-rice/followers",
"following_url": "https://api.github.com/users/will-rice/following{/other_user}",
"gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-rice/subscriptions",
"organizations_url": "https://api.github.com/users/will-rice/orgs",
"repos_url": "https://api.github.com/users/will-rice/repos",
"events_url": "https://api.github.com/users/will-rice/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-rice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # 🚀 Feature request
TFWav2Vec2 needs a pretraining implementation like the PyTorch [version](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2forpretraining)
## Motivation
Users of the TensorFlow model will most likely want to be able to do pretraining just like with PyTorch.
## Your contribution
I recently added the [tensorflow model](https://github.com/huggingface/transformers/pull/11617). So I would like to do this one as well.
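For context, a rough sketch of the contrastive objective such a TF port would need to replicate — conceptual only, with simplified shapes and made-up names; the real implementation also adds a diversity loss and Gumbel-softmax quantization:
```python
import tensorflow as tf

def contrastive_loss(context, quantized, negatives, temperature=0.1):
    # context:   (T, D) context-network outputs at masked timesteps
    # quantized: (T, D) true quantized latents
    # negatives: (K, T, D) distractors sampled from other timesteps
    targets = tf.concat([quantized[None], negatives], axis=0)  # (1+K, T, D)
    # tf.keras.losses.cosine_similarity returns *negative* cosine similarity
    sims = -tf.keras.losses.cosine_similarity(context[None], targets, axis=-1)  # (1+K, T)
    logits = tf.transpose(sims) / temperature  # (T, 1+K), true target at column 0
    labels = tf.zeros(tf.shape(logits)[0], dtype=tf.int64)
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    )
```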
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12158/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12157/comments | https://api.github.com/repos/huggingface/transformers/issues/12157/events | https://github.com/huggingface/transformers/pull/12157 | 920,677,887 | MDExOlB1bGxSZXF1ZXN0NjY5NzkxNDI3 | 12,157 | Add course banner | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging so the image is online and I can then adjust the width if necessary."
] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR adds a course banner in the main README, looking like this:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12157",
"html_url": "https://github.com/huggingface/transformers/pull/12157",
"diff_url": "https://github.com/huggingface/transformers/pull/12157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12157.patch",
"merged_at": 1623763550000
} |
https://api.github.com/repos/huggingface/transformers/issues/12156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12156/comments | https://api.github.com/repos/huggingface/transformers/issues/12156/events | https://github.com/huggingface/transformers/pull/12156 | 920,662,162 | MDExOlB1bGxSZXF1ZXN0NjY5Nzc4MDc3 | 12,156 | [style] consistent nn. and nn.functional: part 4 `examples` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> You have changes in two png files here too, which is weird. \r\n\r\nWhoah! Found some secret stenography embeddings! :)\r\n\r\nProbably got triggered by `s|(?<!\\w)F\\.|nn.functional.|g`\r\n\r\nThank you for noticing, @sgugger - will fix it up!\r\n\r\n> Not sure if we really need to apply this to the research projects which are not actively maintained.\r\n\r\nat least - `wav2vec2` is. Do you want me to reset all but `wav2vec2` under research?\r\n",
"No, it's easier to go for all of them in that case."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | This concludes the work on https://github.com/huggingface/transformers/issues/11600 by normalizing `examples` with a fully automated rewrite:
```
# deal with torch.nn
perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn examples`
find examples -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find examples -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find examples -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find examples -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find examples -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
perl -pi -e 's|import torch||' examples/research_projects/pplm/pplm_classification_head.py
# leave legacy unmodified as we can't test it easily
git checkout examples/legacy
make fixup
```
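As a rough illustration of what the `(?<!\w)F\.` rule matches, here is a Python `re` sketch approximating the perl semantics (not the exact command above). The lookbehind only guards against a preceding word character, which is also why binary files such as PNGs had to be excluded — stray byte sequences can match too:
```python
import re

# mirrors s|(?<!\w)F\.|nn.functional.|g: rewrite a bare "F."
# but leave identifiers like "DF." untouched
pattern = re.compile(r"(?<!\w)F\.")

line = "probs = F.softmax(x, dim=-1)  # DF.transform() stays as-is"
print(pattern.sub("nn.functional.", line))
# -> probs = nn.functional.softmax(x, dim=-1)  # DF.transform() stays as-is
```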
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12156/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12156",
"html_url": "https://github.com/huggingface/transformers/pull/12156",
"diff_url": "https://github.com/huggingface/transformers/pull/12156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12156.patch",
"merged_at": 1623698905000
} |
https://api.github.com/repos/huggingface/transformers/issues/12155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12155/comments | https://api.github.com/repos/huggingface/transformers/issues/12155/events | https://github.com/huggingface/transformers/pull/12155 | 920,655,813 | MDExOlB1bGxSZXF1ZXN0NjY5NzcyNDgw | 12,155 | [style] consistent nn. and nn.functional: part 3 `tests` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> It looks like you have changes in two test fixtures, is that intended?\r\n\r\nOh, I was studying `git diff` and missed the binary change - thank you for noticing it, @sgugger - fixed. "
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | Continuing https://github.com/huggingface/transformers/pull/12124, this PR takes care of `tests` - a slight variation of the automated code:
```
# deal with torch.nn
perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn tests`
find tests -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find tests -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find tests -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find tests -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find tests -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
make fixup
```
One concern is for slow tests that would be missed by CI, so let's be on the lookout for the nightly slow run after this PR is merged.
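Since the rewrite is purely cosmetic, one quick sanity check (a sketch, not part of the PR) is that both spellings resolve to the very same objects, so test behavior cannot change:
```python
import torch
from torch import nn

# `from torch import nn` re-exports the same module object,
# so renamed call sites are behavior-identical
assert nn is torch.nn
assert nn.functional is torch.nn.functional
assert nn.Linear is torch.nn.Linear
```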
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12155/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12155",
"html_url": "https://github.com/huggingface/transformers/pull/12155",
"diff_url": "https://github.com/huggingface/transformers/pull/12155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12155.patch",
"merged_at": 1623698303000
} |
https://api.github.com/repos/huggingface/transformers/issues/12154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12154/comments | https://api.github.com/repos/huggingface/transformers/issues/12154/events | https://github.com/huggingface/transformers/pull/12154 | 920,655,089 | MDExOlB1bGxSZXF1ZXN0NjY5NzcxODMz | 12,154 | [Flax] Fix flax pt equivalence tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Corrects bug introduced in https://github.com/huggingface/transformers/pull/11537/files?file-filters%5B%5D=.py#r651157917
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12154",
"html_url": "https://github.com/huggingface/transformers/pull/12154",
"diff_url": "https://github.com/huggingface/transformers/pull/12154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12154.patch",
"merged_at": 1623694750000
} |
https://api.github.com/repos/huggingface/transformers/issues/12153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12153/comments | https://api.github.com/repos/huggingface/transformers/issues/12153/events | https://github.com/huggingface/transformers/pull/12153 | 920,634,106 | MDExOlB1bGxSZXF1ZXN0NjY5NzUzOTc2 | 12,153 | [style] consistent nn. and nn.functional: part2: templates | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | Continuing https://github.com/huggingface/transformers/pull/12124, this PR takes care of `templates` - had to do some manual tweaking on top of the automated rewrite here since `make fixup` can't process templates.
```
# deal with torch.nn
perl -pi -e 's|^import torch\n|from torch import nn\nimport torch\n|' `grep -Ilr torch.nn templates`
find templates -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find templates -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find templates -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find templates -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find templates -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
make fixup
```
and some manual corrections to remove duplicated imports.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12153",
"html_url": "https://github.com/huggingface/transformers/pull/12153",
"diff_url": "https://github.com/huggingface/transformers/pull/12153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12153.patch",
"merged_at": 1623696084000
} |
https://api.github.com/repos/huggingface/transformers/issues/12152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12152/comments | https://api.github.com/repos/huggingface/transformers/issues/12152/events | https://github.com/huggingface/transformers/issues/12152 | 920,580,616 | MDU6SXNzdWU5MjA1ODA2MTY= | 12,152 | 🤗 The Hugging Face Course is out! | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | MEMBER | null | The first part of the Hugging Face Course is finally out!
Come learn how the :hugs: Ecosystem works :partying_face: : Transformers, Tokenizers, Datasets, Accelerate, the Model Hub!
Share with your friends who want to learn NLP, it's free!
Come join us at https://hf.co/course
Students following this course will understand how to approach (almost) any NLP problem and benefit from all the past experiences of the community.
Come register for the live sessions, ask any questions, and organize study groups on the Hugging Face forums:
https://discuss.huggingface.co/c/course/20

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12152/reactions",
"total_count": 31,
"+1": 11,
"-1": 0,
"laugh": 1,
"hooray": 4,
"confused": 0,
"heart": 8,
"rocket": 6,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12152/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12151/comments | https://api.github.com/repos/huggingface/transformers/issues/12151/events | https://github.com/huggingface/transformers/issues/12151 | 920,472,474 | MDU6SXNzdWU5MjA0NzI0NzQ= | 12,151 | do_normalize set to True by default for WAV2VEC tokenizer | {
"login": "Lhemamou",
"id": 17457873,
"node_id": "MDQ6VXNlcjE3NDU3ODcz",
"avatar_url": "https://avatars.githubusercontent.com/u/17457873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lhemamou",
"html_url": "https://github.com/Lhemamou",
"followers_url": "https://api.github.com/users/Lhemamou/followers",
"following_url": "https://api.github.com/users/Lhemamou/following{/other_user}",
"gists_url": "https://api.github.com/users/Lhemamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lhemamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lhemamou/subscriptions",
"organizations_url": "https://api.github.com/users/Lhemamou/orgs",
"repos_url": "https://api.github.com/users/Lhemamou/repos",
"events_url": "https://api.github.com/users/Lhemamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lhemamou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patrickvonplaten ",
"Hey @Lhemamou, the parameter `do_normalize` is overwritten by the model's config: https://huggingface.co/facebook/wav2vec2-base-960h/blob/main/feature_extractor_config.json",
"Thanks @patrickvonplaten, it solved the issue ! :) . Nonetheless, in the code from [documentation](https://huggingface.co/transformers/_modules/transformers/models/wav2vec2/feature_extraction_wav2vec2.html#Wav2Vec2FeatureExtractor.__call__), the initialization part of the class Wav2Vec2FeatureExtractor seems to initialize do_normalize to True by default, contrary to what is written in the documentation for the same class function : \r\n\r\n> def __init__(\r\n> self,\r\n> feature_size=1,\r\n> sampling_rate=16000,\r\n> padding_value=0.0,\r\n> return_attention_mask=False,\r\n> do_normalize=True,\r\n> **kwargs\r\n> )\r\n> \r\n\r\nand\r\n\r\n> \r\n> do_normalize (:obj:`bool`, `optional`, defaults to :obj:`False`):\r\n> Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly\r\n> improve the performance for some models, *e.g.*, `wav2vec2-lv60",
"Oh yeah you're right @Lhemamou ! Would you maybe like to open a PR to fix the documentation ? It should state that it defaults to `True` in this case",
"sure I will do it when I have free time :) "
] | 1,623 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: macOS-11.2.3-x86_64-i386-64bit
- Python version: 3.8.2
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): Wav2Vec
The problem arises when using:
* [*] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import soundfile as sf
from transformers import Wav2Vec2Tokenizer

# AUDIOFILE: path to a 16 kHz wav file (placeholder from the report)
wav_input_16khz, samplerate = sf.read(AUDIOFILE)
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer_2 = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h", do_normalize=False)
features = tokenizer(wav_input_16khz, return_tensors="pt").input_values
features_2 = tokenizer_2(wav_input_16khz, return_tensors="pt").input_values
features == features_2
# Out[1]: tensor([[False, False, False, ..., False, False, False]])
```
## Expected behavior
As written in the [documentation](https://huggingface.co/transformers/_modules/transformers/models/wav2vec2/feature_extraction_wav2vec2.html#Wav2Vec2FeatureExtractor.__call__), `do_normalize` should default to `False`: _"do_normalize (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, *e.g.*, `wav2vec2-lv60 <https://huggingface.co/models?search=lv60>`__."_
However, the option appears to be set to `True` by default during initialization.
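For context, a sketch of the zero-mean unit-variance normalization that `do_normalize` toggles — reconstructed from the docstring wording, not copied from the library source, and the epsilon is an assumption:
```python
import numpy as np

def zero_mean_unit_var(x: np.ndarray) -> np.ndarray:
    # assumed form of the normalization applied when do_normalize=True
    return (x - x.mean()) / np.sqrt(x.var() + 1e-7)
```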
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12151/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12150/comments | https://api.github.com/repos/huggingface/transformers/issues/12150/events | https://github.com/huggingface/transformers/pull/12150 | 920,420,900 | MDExOlB1bGxSZXF1ZXN0NjY5NTczMTM0 | 12,150 | Flax T5 | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Exceptionally merging already given the time constraint of the Flax/JAX community week announcement. \r\n@patil-suraj @sgugger, I would be very happy if you could nevertheless take a look after merge so that I can correct suggestions in a follow-up PR before the next transformers release.",
"Is a jax/flax byT5 planned?\r\n\r\nInterested in both byT5 and jax... torn.",
"You can already use ByT5 in jax/flax! Check the models [here](https://huggingface.co/models?filter=jax&search=byt5)",
"oh, my 🤗",
"I'm having trouble finding the Jax model training and architecture definition.\r\n\r\nIs this just loading a byT5 model into a regular T5 inference scaffolding?\r\n\r\nMy aim is to experiment with the training / masking code.",
"Here is a test for FlaxByT5 that could help a bit: https://github.com/huggingface/transformers/blob/332a2458611751e7d9c4d7a21bc454299d50e160/tests/test_modeling_flax_t5.py#L432\r\n\r\nAlso we have `run_summarization` script in Flax that can be easily tweaked for any seq2seq task. And soon (Monday) we'll have FlaxT5 pretraining as well :-) "
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR will add T5 in Flax/Jax.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12150/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12150/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12150",
"html_url": "https://github.com/huggingface/transformers/pull/12150",
"diff_url": "https://github.com/huggingface/transformers/pull/12150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12150.patch",
"merged_at": 1624450412000
} |
https://api.github.com/repos/huggingface/transformers/issues/12149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12149/comments | https://api.github.com/repos/huggingface/transformers/issues/12149/events | https://github.com/huggingface/transformers/issues/12149 | 920,325,933 | MDU6SXNzdWU5MjAzMjU5MzM= | 12,149 | Feature request for encoding more than one pair of texts | {
"login": "hadaev8",
"id": 20247085,
"node_id": "MDQ6VXNlcjIwMjQ3MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20247085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadaev8",
"html_url": "https://github.com/hadaev8",
"followers_url": "https://api.github.com/users/hadaev8/followers",
"following_url": "https://api.github.com/users/hadaev8/following{/other_user}",
"gists_url": "https://api.github.com/users/hadaev8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadaev8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadaev8/subscriptions",
"organizations_url": "https://api.github.com/users/hadaev8/orgs",
"repos_url": "https://api.github.com/users/hadaev8/repos",
"events_url": "https://api.github.com/users/hadaev8/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadaev8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,623 | 1,623 | null | CONTRIBUTOR | null | # 🚀 Feature request
Currently, the tokenizer accepts only inputs like `[['text_0', 'text_1']]`; it would be beneficial to expand this to
`[['text_0', 'text_1', ..., 'text_n']]`.
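A hypothetical call shape, purely for illustration — the multi-segment form does not exist today and the commented call is made up:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# What works today, for comparison: one (text, text_pair) tuple per example
batch = tokenizer([("text_0", "text_1")], padding=True, return_tensors="pt")

# Hypothetical extension — n segments per example, each with its own
# token_type_id (0, 1, 2, ...):
# batch = tokenizer([["text_0", "text_1", "text_2"]], padding=True, return_tensors="pt")
```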
## Motivation
This would open a convenient way to deal with a new set of processing tasks.
## Your contribution
Don't have any. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12149/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12148/comments | https://api.github.com/repos/huggingface/transformers/issues/12148/events | https://github.com/huggingface/transformers/pull/12148 | 920,274,189 | MDExOlB1bGxSZXF1ZXN0NjY5NDQ3ODI0 | 12,148 | [Flax] fix error message | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
Fix error message. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12148",
"html_url": "https://github.com/huggingface/transformers/pull/12148",
"diff_url": "https://github.com/huggingface/transformers/pull/12148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12148.patch",
"merged_at": 1623676338000
} |
https://api.github.com/repos/huggingface/transformers/issues/12147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12147/comments | https://api.github.com/repos/huggingface/transformers/issues/12147/events | https://github.com/huggingface/transformers/pull/12147 | 920,252,609 | MDExOlB1bGxSZXF1ZXN0NjY5NDI5NzYz | 12,147 | Improve detr | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12105, improves some more of the docs, and removes some unused variables in `modeling_detr.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12147/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12147",
"html_url": "https://github.com/huggingface/transformers/pull/12147",
"diff_url": "https://github.com/huggingface/transformers/pull/12147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12147.patch",
"merged_at": 1623940674000
} |
https://api.github.com/repos/huggingface/transformers/issues/12146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12146/comments | https://api.github.com/repos/huggingface/transformers/issues/12146/events | https://github.com/huggingface/transformers/pull/12146 | 920,247,425 | MDExOlB1bGxSZXF1ZXN0NjY5NDI1NTgy | 12,146 | [Flax] Add links to google colabs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds links to Flax colabs
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12146",
"html_url": "https://github.com/huggingface/transformers/pull/12146",
"diff_url": "https://github.com/huggingface/transformers/pull/12146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12146.patch",
"merged_at": 1623664829000
} |
https://api.github.com/repos/huggingface/transformers/issues/12145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12145/comments | https://api.github.com/repos/huggingface/transformers/issues/12145/events | https://github.com/huggingface/transformers/pull/12145 | 920,207,903 | MDExOlB1bGxSZXF1ZXN0NjY5MzkyMDk1 | 12,145 | Have dummy processors have a `from_pretrained` method | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | Fix https://github.com/huggingface/transformers/issues/12100
Before:
```py
>>> from transformers import Speech2TextProcessor
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
```
```out
Traceback (most recent call last):
File "<input>", line 1, in <module>
AttributeError: type object 'Speech2TextProcessor' has no attribute 'from_pretrained'
```
After:
```py
>>> from transformers import Speech2TextProcessor
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
```
```out
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/utils/dummy_sentencepiece_and_speech_objects.py", line 11, in from_pretrained
requires_backends(cls, ["sentencepiece", "speech"])
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/file_utils.py", line 606, in requires_backends
raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends]))
ImportError:
Speech2TextProcessor requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.
Speech2TextProcessor requires the torchaudio library but it was not found in your environment. You can install it with pip:
`pip install torchaudio`
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12145",
"html_url": "https://github.com/huggingface/transformers/pull/12145",
"diff_url": "https://github.com/huggingface/transformers/pull/12145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12145.patch",
"merged_at": 1623760746000
} |
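As a closing note on the last row (PR #12145): a minimal, simplified sketch of the dummy-object pattern it applies. The class body is illustrative; `requires_backends` and the module path come from the traceback quoted in the PR description:
```py
# Simplified sketch of a dummy placeholder class; the real one lives in
# src/transformers/utils/dummy_sentencepiece_and_speech_objects.py.
from transformers.file_utils import requires_backends


class Speech2TextProcessor:
    def __init__(self, *args, **kwargs):
        requires_backends(self, ["sentencepiece", "speech"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        # With the PR: raises an informative ImportError listing the missing
        # backends, instead of an AttributeError on the class.
        requires_backends(cls, ["sentencepiece", "speech"])
```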