url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/12144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12144/comments | https://api.github.com/repos/huggingface/transformers/issues/12144/events | https://github.com/huggingface/transformers/issues/12144 | 919,996,034 | MDU6SXNzdWU5MTk5OTYwMzQ= | 12,144 | How to train the new wav2vec unsupervised model using hugging face ? | {
"login": "ImtiazKhanDS",
"id": 23047384,
"node_id": "MDQ6VXNlcjIzMDQ3Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/23047384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ImtiazKhanDS",
"html_url": "https://github.com/ImtiazKhanDS",
"followers_url": "https://api.github.com/users/ImtiazKhanDS/followers",
"following_url": "https://api.github.com/users/ImtiazKhanDS/following{/other_user}",
"gists_url": "https://api.github.com/users/ImtiazKhanDS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ImtiazKhanDS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ImtiazKhanDS/subscriptions",
"organizations_url": "https://api.github.com/users/ImtiazKhanDS/orgs",
"repos_url": "https://api.github.com/users/ImtiazKhanDS/repos",
"events_url": "https://api.github.com/users/ImtiazKhanDS/events{/privacy}",
"received_events_url": "https://api.github.com/users/ImtiazKhanDS/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"The pretraining of wav2vec2-u is a pretty complex training pipeline. It'll probably still take a bit until we have this merged ",
"@patrickvonplaten @patil-suraj any updates on this yet ?",
"I won't have time in the near future to work on this - feel free to give it a try though. It's a very cool paper :-)",
"Hey HF team,\r\nI see you have [an example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) up of how to perform this pre-training. First of all - thank you very much for this work!\r\n\r\nI'm trying to use this code to train a wav2vec-style model for music. As indicated was likely in the above link, I was running into some training stability issues.\r\n\r\nOne thing that particularly helped me with this was reducing the codebook size. The wav2vec paper does an ablation study in the number of groups and vectors (`G` and `V`) and found that small codebooks work very well. I have been experimenting with G=8 and V=8 and it seems more likely to produce a stable training run for my dataset. Might be worth looking into for librispeech if you find the time (or if someone else sees this and is struggling).\r\n\r\nI also had one other question:\r\nWhat was the reasoning behind this initialization choice? https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1055\r\n\r\nThe mean and variance of the initialized Linear weights after this initialization is very close to the same statistics for the default pytorch initialization (which uses kaiming_uniform init). The difference with your initialization is that it doesn't automatically scale with fan_in and it draws from a normal distribution. I didn't see anything in the paper about either of these details and was just wondering why this was done.\r\n\r\nThanks again for this! It's great work!",
"Hey @neonbjb,\r\n\r\nI think the init here was just a copy-paste from what we had for other models. I think fairseq is actually using the default init values for the attention layers: https://github.com/facebookresearch/fairseq/blob/b5a039c292facba9c73f59ff34621ec131d82341/fairseq/modules/multihead_attention.py#L64 . So maybe we should use this as well here. Does `kaiming_uniform_init` work better for you? \r\nDefinitely open for a PR here to change it",
"I don't think the choice between uniform or normal distributions in the init made an appreciable difference, I was just trying to understand the choice. Reducing the size of V (and increasing G) made the biggest difference in stability.",
"BTW, if I understood correctly, the Data2Vec guys stated that Data2Vec performs better than Wav2Vec2 mainly because it makes no assumption about the number of sound units a spoken language has (= the number of codebook vectors). This codebook vector is a somewhat arbitrary choice and can vary strongly depending on the language. A big gain from Data2Vec is that there is no such hyper-parameter as a codebook which makes the model generalize better. \r\n\r\n@alexeib please correct me if I'm wrong here :sweat_smile: "
] | 1,623 | 1,653 | null | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
How to train the new wav2vec unsupervised model using hugging face ? , The paper link is : https://ai.facebook.com/research/publications/unsupervised-speech-recognition
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12144/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12143/comments | https://api.github.com/repos/huggingface/transformers/issues/12143/events | https://github.com/huggingface/transformers/issues/12143 | 919,938,452 | MDU6SXNzdWU5MTk5Mzg0NTI= | 12,143 | Ouput Includes Input | {
"login": "kurbster",
"id": 15655728,
"node_id": "MDQ6VXNlcjE1NjU1NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15655728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kurbster",
"html_url": "https://github.com/kurbster",
"followers_url": "https://api.github.com/users/kurbster/followers",
"following_url": "https://api.github.com/users/kurbster/following{/other_user}",
"gists_url": "https://api.github.com/users/kurbster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kurbster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kurbster/subscriptions",
"organizations_url": "https://api.github.com/users/kurbster/orgs",
"repos_url": "https://api.github.com/users/kurbster/repos",
"events_url": "https://api.github.com/users/kurbster/events{/privacy}",
"received_events_url": "https://api.github.com/users/kurbster/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"Ok, I posted my question [here](https://discuss.huggingface.co/t/output-includes-input/6831).\r\n\r\nThank you!"
] | 1,623 | 1,623 | 1,623 | NONE | null | Whenever I am generating text the input is included in the output. When the input is close to the maximum length the model barely produces any useful output.
# Information
When using transformers.pipeline or transformers.from_pretrianed, the model is only generating the input, when the input is long. For example,
`generator = transformers.pipeline('text-generation', model='gpt2')`
`prompt = "really long text that is 1023 tokens ..."`
`output = generator(prompt, mex_length=1024, do_sample=True, temperature=0.9)`
output in this case would be equal to the input prompt.
## To reproduce
[Here is a Collab notebook](https://colab.research.google.com/drive/1JzwSmFGrWY1bU6f-t-mgMug88NsRVllp#scrollTo=OgOhZxQJNseL) with simple examples of the problem. I am looking to generate output from input ~1300 tokens and running into this issue consistently. Is there a way around this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12142/comments | https://api.github.com/repos/huggingface/transformers/issues/12142/events | https://github.com/huggingface/transformers/issues/12142 | 919,857,820 | MDU6SXNzdWU5MTk4NTc4MjA= | 12,142 | CLIP tokenizer inconsistent with OpenAI release | {
"login": "normster",
"id": 6687910,
"node_id": "MDQ6VXNlcjY2ODc5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/normster",
"html_url": "https://github.com/normster",
"followers_url": "https://api.github.com/users/normster/followers",
"following_url": "https://api.github.com/users/normster/following{/other_user}",
"gists_url": "https://api.github.com/users/normster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/normster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/normster/subscriptions",
"organizations_url": "https://api.github.com/users/normster/orgs",
"repos_url": "https://api.github.com/users/normster/repos",
"events_url": "https://api.github.com/users/normster/events{/privacy}",
"received_events_url": "https://api.github.com/users/normster/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"The non-fast tokenizer seems to be fine:\r\n\r\n```\r\n>>> tokenizer = transformers.CLIPTokenizer.from_pretrained('openai/clip-vit-base-patch32')\r\n>>> tokenizer('hello world')\r\n{'input_ids': [49406, 3306, 1002, 49407], 'attention_mask': [1, 1, 1, 1]}\r\n```",
"To add more into this, HF's fast tokenizer seems to add an extra token for every white space between words:\r\n```\r\n>>> tokenizer(\"a photo of a cat\")['input_ids']\r\n[320, 220, 1125, 220, 539, 220, 320, 220, 2368]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi, is there any update/eta on this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,629 | 1,629 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] my own modified scripts: (give details below)
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
>>> import clip
>>> import transformers
>>> clip.tokenize('hello world')
tensor([[49406, 3306, 1002, 49407, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0]])
>>> tokenizer = transformers.CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
>>> tokenizer('hello world')
{'input_ids': [3306, 220, 1002], 'attention_mask': [1, 1, 1]}
```
The HF CLIPTokenizer seems to add an extra token while dropping the <bos> and <eos> tokens. Am I missing something here?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12141/comments | https://api.github.com/repos/huggingface/transformers/issues/12141/events | https://github.com/huggingface/transformers/issues/12141 | 919,815,259 | MDU6SXNzdWU5MTk4MTUyNTk= | 12,141 | RuntimeError: Could not infer dtype of numpy.int64 on Squad T5 | {
"login": "helloworld123-lab",
"id": 75953751,
"node_id": "MDQ6VXNlcjc1OTUzNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloworld123-lab",
"html_url": "https://github.com/helloworld123-lab",
"followers_url": "https://api.github.com/users/helloworld123-lab/followers",
"following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}",
"gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions",
"organizations_url": "https://api.github.com/users/helloworld123-lab/orgs",
"repos_url": "https://api.github.com/users/helloworld123-lab/repos",
"events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}",
"received_events_url": "https://api.github.com/users/helloworld123-lab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I think this issue is still occuring.",
"Hi @wanglec , that's an old notebook and has not been updated since, so I don't recommend it anymore.\r\nThere's a new example in transformers for fine-tuning T5 for qa, [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering#fine-tuning-t5-on-squad20). It also uses `Trainer`, so supports training on TPUs. [Here's](https://github.com/huggingface/transformers/tree/master/examples/pytorch#running-on-tpus) a short guide about how to run these scripts on tpu"
] | 1,623 | 1,636 | 1,626 | NONE | null | Hello,
I try to run the code for T5 on Squad dataset in [https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb](url)
I install the required libraries as:
```
!pip install transformers==2.9.1
!pip install -U nlp
!pip install sentencepiece
```
I fixed the xla error as in [https://stackoverflow.com/questions/67257008/oserror-libmkl-intel-lp64-so-1-cannot-open-shared-object-file-no-such-file-or](url)
However, when the training starts, it gives the following error:
Exception in thread Thread-17:
```
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/parallel_loader.py", line 139, in _loader_worker
_, data = next(data_iter)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/nlp/arrow_dataset.py", line 719, in __getitem__
format_kwargs=self._format_kwargs,
File "/usr/local/lib/python3.7/dist-packages/nlp/arrow_dataset.py", line 707, in _getitem
format_kwargs=format_kwargs,
File "/usr/local/lib/python3.7/dist-packages/nlp/arrow_dataset.py", line 619, in _convert_outputs
v = map_nested(command, v, **map_nested_kwargs)
File "/usr/local/lib/python3.7/dist-packages/nlp/utils/py_utils.py", line 191, in map_nested
return function(data_struct)
RuntimeError: Could not infer dtype of numpy.int64
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12141/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12140/comments | https://api.github.com/repos/huggingface/transformers/issues/12140/events | https://github.com/huggingface/transformers/issues/12140 | 919,806,712 | MDU6SXNzdWU5MTk4MDY3MTI= | 12,140 | [FLAX] port GPTNeo to Flax | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"GPTNeo is available in Flax."
] | 1,623 | 1,647 | 1,647 | CONTRIBUTOR | null | Port the existing GPTNeo Model to FLAX | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12139/comments | https://api.github.com/repos/huggingface/transformers/issues/12139/events | https://github.com/huggingface/transformers/pull/12139 | 919,779,936 | MDExOlB1bGxSZXF1ZXN0NjY5MDM5MDUx | 12,139 | Add output in a dictionary for TF `generate` method | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great job @stancld !"
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR adds two components into the TF `generate` method:
1. It enables the model outputs `attentions`, `hidden_states` and `scores`
2. It enables `return_dict_in_generate`
This PR thus narrows the gap between PyTorch and TF `generate` method implementations.
This PR also adds two tests for the dictionary output.
Besides, this PR fixes handling of 2-tuples of attentions for the XLNet model when `target_mapping is not None`.
**Reviewers:** @Rocketknight1 @patrickvonplaten @sgugger (anyone else in the community)
<hr>
Edit: The above-mentioned features are not implemented for the `generate` method of `TFRagSequenceForGeneration` model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12139",
"html_url": "https://github.com/huggingface/transformers/pull/12139",
"diff_url": "https://github.com/huggingface/transformers/pull/12139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12139.patch",
"merged_at": 1624441931000
} |
https://api.github.com/repos/huggingface/transformers/issues/12138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12138/comments | https://api.github.com/repos/huggingface/transformers/issues/12138/events | https://github.com/huggingface/transformers/issues/12138 | 919,766,263 | MDU6SXNzdWU5MTk3NjYyNjM= | 12,138 | Using checkpoints in gpt neo xl | {
"login": "MK096",
"id": 20142735,
"node_id": "MDQ6VXNlcjIwMTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/20142735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MK096",
"html_url": "https://github.com/MK096",
"followers_url": "https://api.github.com/users/MK096/followers",
"following_url": "https://api.github.com/users/MK096/following{/other_user}",
"gists_url": "https://api.github.com/users/MK096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MK096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MK096/subscriptions",
"organizations_url": "https://api.github.com/users/MK096/orgs",
"repos_url": "https://api.github.com/users/MK096/repos",
"events_url": "https://api.github.com/users/MK096/events{/privacy}",
"received_events_url": "https://api.github.com/users/MK096/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | Hi,
I downloaded gpt neo xl pretrained model from theeye.eye on my pc.
It downloaded various checkpoints.
How do i use them? ... Because in order to load and use model I'd need encoder. Json, pytorch. Bin, etc..
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12137/comments | https://api.github.com/repos/huggingface/transformers/issues/12137/events | https://github.com/huggingface/transformers/issues/12137 | 919,765,526 | MDU6SXNzdWU5MTk3NjU1MjY= | 12,137 | wav2vec2 not converging when finetuning | {
"login": "cheongalc",
"id": 22002891,
"node_id": "MDQ6VXNlcjIyMDAyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/22002891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cheongalc",
"html_url": "https://github.com/cheongalc",
"followers_url": "https://api.github.com/users/cheongalc/followers",
"following_url": "https://api.github.com/users/cheongalc/following{/other_user}",
"gists_url": "https://api.github.com/users/cheongalc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cheongalc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cheongalc/subscriptions",
"organizations_url": "https://api.github.com/users/cheongalc/orgs",
"repos_url": "https://api.github.com/users/cheongalc/repos",
"events_url": "https://api.github.com/users/cheongalc/events{/privacy}",
"received_events_url": "https://api.github.com/users/cheongalc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @meeps123,\r\n\r\nWe try to keep the github issues for code related bugs. For such questions, could you please the [forum](https://discuss.huggingface.co/) instead? :-) Feel free to tag me there! \r\n\r\nAlso could you attach a google colab so that I can take a look at your training script? It is very difficult to draw any conclusions just from reading the text. \r\n\r\nCheers,\r\nPatrick",
"Hi @patrickvonplaten,\r\n\r\nSure thing! I have opened a topic [here](https://discuss.huggingface.co/t/wav2vec2-not-converging-when-finetuning/6773). The Colab notebook is linked there. Thank you for the assistance! "
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.4.0
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using: wav2vec2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I have a dataset of single English word, 1 second long audio files sampled at 16kHz. I wanted to use wav2vec2 for speech recognition instead of just doing audio classification because I wanted the model to be able to generalize to longer audio samples with more words. I followed [the official wav2vec2 guide](https://huggingface.co/blog/fine-tune-wav2vec2-english) almost exactly (the only difference was the dataset used, but I made sure the dataset format and vocab list format was identical as well) but the model does not seem to be converging. The loss would decrease to approx. 3 and stay around there. Checking the predictions made during evaluation, I realized that the model just kept outputting the padding token regardless of the input.
Other issues with similar behaviour are #10884 and #10983. I have tried suggestions there such as increasing the learning rate with no success.
## Expected behavior
The model should show signs of convergence, such as slowly starting to output sensible prediction strings.
Any help is greatly appreciated!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12136/comments | https://api.github.com/repos/huggingface/transformers/issues/12136/events | https://github.com/huggingface/transformers/pull/12136 | 919,752,258 | MDExOlB1bGxSZXF1ZXN0NjY5MDE4MTYx | 12,136 | Fix t5 error message | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Change `inputs` to `input_ids` in error message.
```diff
- f"You cannot specify both {err_msg_prefix}inputs and {err_msg_prefix}inputs_embeds at the same time"
+ f"You cannot specify both {err_msg_prefix}inputs_ids and {err_msg_prefix}inputs_embeds at the same time"
```
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12136",
"html_url": "https://github.com/huggingface/transformers/pull/12136",
"diff_url": "https://github.com/huggingface/transformers/pull/12136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12136.patch",
"merged_at": 1623582177000
} |
https://api.github.com/repos/huggingface/transformers/issues/12135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12135/comments | https://api.github.com/repos/huggingface/transformers/issues/12135/events | https://github.com/huggingface/transformers/pull/12135 | 919,714,469 | MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTE5 | 12,135 | [lm examples] Replicate --config_overrides addition to other LM examples | {
"login": "kumar-abhishek",
"id": 859465,
"node_id": "MDQ6VXNlcjg1OTQ2NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/859465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumar-abhishek",
"html_url": "https://github.com/kumar-abhishek",
"followers_url": "https://api.github.com/users/kumar-abhishek/followers",
"following_url": "https://api.github.com/users/kumar-abhishek/following{/other_user}",
"gists_url": "https://api.github.com/users/kumar-abhishek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumar-abhishek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumar-abhishek/subscriptions",
"organizations_url": "https://api.github.com/users/kumar-abhishek/orgs",
"repos_url": "https://api.github.com/users/kumar-abhishek/repos",
"events_url": "https://api.github.com/users/kumar-abhishek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumar-abhishek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think this change applies to `run_clm_no_trainer.py` and `run_mlm_no_trainer.py` since the argument `model_name_or_path` is a required argument and we can't have both arguments `model_name_or_path` and `config_overrides` at the same time. "
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR replays the new feature `--config_overrides` for other scripts under `examples/pytorch/language-modeling/` which was added by https://github.com/huggingface/transformers/pull/11798/
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes: https://github.com/huggingface/transformers/issues/11875
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12135/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12135",
"html_url": "https://github.com/huggingface/transformers/pull/12135",
"diff_url": "https://github.com/huggingface/transformers/pull/12135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12135.patch",
"merged_at": 1623672743000
} |
https://api.github.com/repos/huggingface/transformers/issues/12134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12134/comments | https://api.github.com/repos/huggingface/transformers/issues/12134/events | https://github.com/huggingface/transformers/pull/12134 | 919,705,458 | MDExOlB1bGxSZXF1ZXN0NjY4OTgyODU2 | 12,134 | Ray Tune Integration Updates | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"thanks @sgugger for the fast review! anything blocking to get this merged :) ?",
"This is good for me. You had standing questions for Lysandre so not sure it was ready to be merged, but I will do so if you tell me everything is okay on your side :-)",
"@sgugger yep this is ready to merge!",
"Thanks again!"
] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
- Automatically disables memory tracker if enabled since the memory tracker is not serializable
- Fixes the Ray Tune integration test
- Adds a new test for Ray Client API
- Adds integration tests back to the scheduled Github Actions pipeline
Closes #11249, https://github.com/huggingface/transformers/issues/12177
@LysandreJik @richardliaw
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12134",
"html_url": "https://github.com/huggingface/transformers/pull/12134",
"diff_url": "https://github.com/huggingface/transformers/pull/12134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12134.patch",
"merged_at": 1623780689000
} |
https://api.github.com/repos/huggingface/transformers/issues/12133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12133/comments | https://api.github.com/repos/huggingface/transformers/issues/12133/events | https://github.com/huggingface/transformers/issues/12133 | 919,659,706 | MDU6SXNzdWU5MTk2NTk3MDY= | 12,133 | Adding fastseq support to more recent version of HF transformers | {
"login": "tingofurro",
"id": 2609265,
"node_id": "MDQ6VXNlcjI2MDkyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tingofurro",
"html_url": "https://github.com/tingofurro",
"followers_url": "https://api.github.com/users/tingofurro/followers",
"following_url": "https://api.github.com/users/tingofurro/following{/other_user}",
"gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions",
"organizations_url": "https://api.github.com/users/tingofurro/orgs",
"repos_url": "https://api.github.com/users/tingofurro/repos",
"events_url": "https://api.github.com/users/tingofurro/events{/privacy}",
"received_events_url": "https://api.github.com/users/tingofurro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | # π Feature request
Would it be possible to integrate [fastseq](https://github.com/microsoft/fastseq) with the later version of HuggingFace transformer models.
## Motivation
fastseq is a library that gives speedup on Transformers for text generation. They claim to have pretty large speedups (3-8x) for various Transformer architectures (GPT2, Bart, etc.).
The only caveat is they only support an older version of HF transformers (3.0.2).
Has anyone already looked into making it compatible with the latest API of HuggingFace models?
## Your contribution
I am willing to discuss and can contribute if no one has planned to do so already.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12133/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12132/comments | https://api.github.com/repos/huggingface/transformers/issues/12132/events | https://github.com/huggingface/transformers/pull/12132 | 919,618,233 | MDExOlB1bGxSZXF1ZXN0NjY4OTE2ODAz | 12,132 | Use text_column_name variable instead of "text" | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,637 | 1,623 | CONTRIBUTOR | null | `text_column_name` was already defined above where I made the changes and it was also used below where I made changes.
This is a very minor change. If a dataset does not use "text" as the column name, then the `tokenize_function` will now use whatever column is assigned to `text_column_name`. `text_column_name` is just the first column name if "text" is not a column name. It makes the function a little more robust, though I would assume that 90% + of datasets use "text" anyway.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12132",
"html_url": "https://github.com/huggingface/transformers/pull/12132",
"diff_url": "https://github.com/huggingface/transformers/pull/12132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12132.patch",
"merged_at": 1623672673000
} |
https://api.github.com/repos/huggingface/transformers/issues/12131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12131/comments | https://api.github.com/repos/huggingface/transformers/issues/12131/events | https://github.com/huggingface/transformers/pull/12131 | 919,576,606 | MDExOlB1bGxSZXF1ZXN0NjY4ODgyNjI0 | 12,131 | [Flax] Add Beam Search | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Circle CI error seem unrelated: ```OSError: /home/circleci/.local/lib/python3.7/site-packages/torch_scatter/_scatter_cpu.so: undefined symbol: _ZNK2at6Tensor6deviceEv```",
"Rebase from https://github.com/huggingface/transformers/pull/12181"
] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This adds beam search for Flax. Aggressive integration tests for Bart-large-cnn are added.
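A minimal usage sketch (the Flax `generate()` call is assumed to mirror the PyTorch API; the exact output fields may differ):
```python
# Sketch only -- assumes FlaxBartForConditionalGeneration.generate() supports num_beams
# as described in this PR; the `.sequences` field on the output is an assumption.
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "The Eiffel Tower is 324 metres tall, about the same height as an 81-storey building."
inputs = tokenizer(article, return_tensors="np", truncation=True)

# beam search decoding
outputs = model.generate(inputs["input_ids"], num_beams=4, max_length=40)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
```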
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12131",
"html_url": "https://github.com/huggingface/transformers/pull/12131",
"diff_url": "https://github.com/huggingface/transformers/pull/12131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12131.patch",
"merged_at": 1623833034000
} |
https://api.github.com/repos/huggingface/transformers/issues/12130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12130/comments | https://api.github.com/repos/huggingface/transformers/issues/12130/events | https://github.com/huggingface/transformers/pull/12130 | 919,495,702 | MDExOlB1bGxSZXF1ZXN0NjY4ODE2NDM1 | 12,130 | Fix for making student ProphetNet for Seq2Seq Distillation | {
"login": "vishal-burman",
"id": 19861874,
"node_id": "MDQ6VXNlcjE5ODYxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishal-burman",
"html_url": "https://github.com/vishal-burman",
"followers_url": "https://api.github.com/users/vishal-burman/followers",
"following_url": "https://api.github.com/users/vishal-burman/following{/other_user}",
"gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions",
"organizations_url": "https://api.github.com/users/vishal-burman/orgs",
"repos_url": "https://api.github.com/users/vishal-burman/repos",
"events_url": "https://api.github.com/users/vishal-burman/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishal-burman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We're not actively maintaining those examples, so would need an approval from the original author (@sshleifer ) before merging",
"LGTM!",
"Thank you both!"
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Enables making a student model of ProphetNet.
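For context, a rough sketch of what this enables, run from inside `examples/research_projects/seq2seq-distillation/` (the function name and arguments below are recalled from `make_student.py` and may differ -- treat them as assumptions):
```python
# Hypothetical sketch: build a smaller ProphetNet student by copying alternating
# teacher layers; `e`/`d` = number of encoder/decoder layers to keep (assumed args).
from make_student import create_student_by_copying_alternating_layers

student, e_layers, d_layers = create_student_by_copying_alternating_layers(
    "microsoft/prophetnet-large-uncased",
    save_path="student-prophetnet-6-6",
    e=6,
    d=6,
)
```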
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12130",
"html_url": "https://github.com/huggingface/transformers/pull/12130",
"diff_url": "https://github.com/huggingface/transformers/pull/12130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12130.patch",
"merged_at": 1624282605000
} |
https://api.github.com/repos/huggingface/transformers/issues/12129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12129/comments | https://api.github.com/repos/huggingface/transformers/issues/12129/events | https://github.com/huggingface/transformers/issues/12129 | 919,488,014 | MDU6SXNzdWU5MTk0ODgwMTQ= | 12,129 | TypeError when trying to load pretrained ALBERT model in BertTokenizer | {
"login": "Puranjay-del-Mishra",
"id": 56340378,
"node_id": "MDQ6VXNlcjU2MzQwMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/56340378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Puranjay-del-Mishra",
"html_url": "https://github.com/Puranjay-del-Mishra",
"followers_url": "https://api.github.com/users/Puranjay-del-Mishra/followers",
"following_url": "https://api.github.com/users/Puranjay-del-Mishra/following{/other_user}",
"gists_url": "https://api.github.com/users/Puranjay-del-Mishra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Puranjay-del-Mishra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Puranjay-del-Mishra/subscriptions",
"organizations_url": "https://api.github.com/users/Puranjay-del-Mishra/orgs",
"repos_url": "https://api.github.com/users/Puranjay-del-Mishra/repos",
"events_url": "https://api.github.com/users/Puranjay-del-Mishra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Puranjay-del-Mishra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Why don't you try to load the ALBERT tokenizer in an ALBERT tokenizer?\r\n```py\r\nfrom transformers import AlbertTokenizer\r\n\r\ntokenizer = AlbertTokenizer.from_pretrained(\"albert-base-v2\")\r\n```",
"Hey!\r\nI was not aware of its existence to be honest, but shouldnt loading it in the BertTokenizer work?",
"ALBERT and BERT are different models, and the ALBERT tokenizer isn't related to BERT's tokenizer at all. They're not based on the same algorithms: BERT's tokenizer is using WordPiece, ALBERT's using Unigram.\r\n\r\nIf you're looking for a tokenizer to encompass all other tokenizers, take a look at the [`Auto*` classes](https://huggingface.co/transformers/model_doc/auto.html):\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"albert-base-v2\")\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): BertTokenizer
The problem arises when using:
* [ *] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [* ] my own task or dataset: (give details below)
Trying to tokenize a dataset of tweets
## To reproduce
Steps to reproduce the behavior:
from transformers import BertTokenizer
tokenizerr = BertTokenizer.from_pretrained("albert-base-v2")
The error message is:
TypeError Traceback (most recent call last)
<ipython-input-11-f632f8d4de7e> in <module>()
1 from transformers import BertTokenizer
----> 2 tokenizerr = BertTokenizer.from_pretrained("albert-base-v2")
3 frames
/usr/lib/python3.7/genericpath.py in isfile(path)
28 """Test whether a path is a regular file"""
29 try:
---> 30 st = os.stat(path)
31 except OSError:
32 return False
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The code should load the pre-trained BertTokenizer model for the Albert-base-v2 model. The same thing happened with Albert-base-v1
<!-- A clear and concise description of what you would expect to happen. -->
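As the resolution comments above point out, BERT's and ALBERT's tokenizers use different algorithms, so the checkpoint-agnostic route is `AutoTokenizer`; a short sketch based on those comments:
```python
from transformers import AutoTokenizer

# AutoTokenizer resolves the correct tokenizer class (AlbertTokenizer here) from the checkpoint name
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
print(tokenizer("Trying to tokenize a dataset of tweets")["input_ids"])
```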
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12129/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12128/comments | https://api.github.com/repos/huggingface/transformers/issues/12128/events | https://github.com/huggingface/transformers/issues/12128 | 919,453,220 | MDU6SXNzdWU5MTk0NTMyMjA= | 12,128 | got multiple values for argument 'input_shape' | {
"login": "garner1",
"id": 8114746,
"node_id": "MDQ6VXNlcjgxMTQ3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8114746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garner1",
"html_url": "https://github.com/garner1",
"followers_url": "https://api.github.com/users/garner1/followers",
"following_url": "https://api.github.com/users/garner1/following{/other_user}",
"gists_url": "https://api.github.com/users/garner1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garner1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garner1/subscriptions",
"organizations_url": "https://api.github.com/users/garner1/orgs",
"repos_url": "https://api.github.com/users/garner1/repos",
"events_url": "https://api.github.com/users/garner1/events{/privacy}",
"received_events_url": "https://api.github.com/users/garner1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, \r\nIt seems to me that `input_shape` is defined twice inside super().init(...), probably both in 'config' and 'input_shape'.\r\nThanks for helping!",
"Hey @garner1, \r\n\r\nWould you like to open a PR to fix it in `research_projects/performer/`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.14.232-176.381.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: No
### Who can help
@TevenLeScao, @Patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. bash transformers/examples/research_projects/performer/sanity_script.sh
[05:45:53] - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
[05:45:53] - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=experiments, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0005, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=100, logging_dir=runs/Jun12_05-45-53_ip-10-228-58-93.int.klarna.net, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=experiments, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, _n_gpu=1, mp_parameters=)
[05:45:53] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443
[05:45:53] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:53] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0
[05:45:54] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1)
[05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443
[05:45:54] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0
[05:45:55] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1)
[05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443
[05:45:55] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0
[05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443
[05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0
[05:45:55] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1)
[05:45:55] - INFO - absl - Starting the local TPU driver.
[05:45:55] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
[05:45:55] - INFO - absl - Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
[05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
[05:45:56] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/config.json HTTP/1.1" 200 0
[05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
[05:45:56] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/config.json HTTP/1.1" 200 0
[05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
[05:45:57] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/flax_model.msgpack HTTP/1.1" 302 0
Traceback (most recent call last):
File "run_mlm_performer.py", line 543, in <module>
dropout_rate=0.1,
File "/home/silvano.garnerone/.local/lib/python3.7/site-packages/transformers/modeling_flax_utils.py", line 326, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/silvano.garnerone/performer/modeling_flax_performer.py", line 482, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) #input_shape is already present in config
TypeError: __init__() got multiple values for argument 'input_shape'
## Expected behavior
The script to run without error
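For reference, this class of `TypeError` can be reproduced without transformers at all -- a positional argument and an explicit keyword both landing on the base class's `input_shape` parameter (purely illustrative, not the actual performer code):
```python
class Base:
    def __init__(self, config, input_shape=(1, 1), seed=0, dtype="float32"):
        self.config = config
        self.input_shape = input_shape


class Child(Base):
    def __init__(self, config, module, input_shape=(1, 1), seed=0, dtype="float32"):
        # `module` fills the base class's `input_shape` slot positionally,
        # then the keyword supplies it a second time -> the TypeError above
        super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)


Child(config={}, module=object())
# TypeError: __init__() got multiple values for argument 'input_shape'
```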
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12128/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12127/comments | https://api.github.com/repos/huggingface/transformers/issues/12127/events | https://github.com/huggingface/transformers/issues/12127 | 919,435,964 | MDU6SXNzdWU5MTk0MzU5NjQ= | 12,127 | Multi-GPU training has literally no GPU-Utilization (0%) | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved by #11045 \r\n\r\nI had to use distributed training to this end. Closing..."
] | 1,623 | 1,623 | 1,623 | NONE | null | I know that multi-GPU training is handled by the trainer class automatically through the `CUDA_VISIBLE_DEVICES=...` flag in transformers, but I'm having a weird problem: after setting `CUDA_VISIBLE_DEVICES=0,1,2`, 3 GPUs are being used and `nvidia-smi` outputs the following:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:17.0 Off | 0 |
| N/A 76C P0 293W / 300W | 13758MiB / 16160MiB | 94% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:18.0 Off | 0 |
| N/A 43C P0 72W / 300W | 4770MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:19.0 Off | 0 |
| N/A 43C P0 73W / 300W | 4770MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
I'm running inference with the Pegasus model:
```
CUDA_VISIBLE_DEVICES=0,1,2 python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path /home/code-base/user_space/saved_models/pytorch/reddit_tifu/ \
--do_predict \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/validation.json \
--test_file $DS_BASE_DIR/test.json \
--output_dir /home/code-base/user_space/saved_models/pegasus/ \
--per_device_train_batch_size=3 \
--per_device_eval_batch_size=3 \
--overwrite_output_dir \
--predict_with_generate \
--text_column text \
--summary_column summary \
--num_beams 5
```
The strange thing to me is that the GPU utilization of GPU-1 and GPU-2 is 0%, even though part of their memory is allocated; this is not the case for GPU-0. I'm now unsure whether I'm using the correct way of doing multi-GPU training. Any advice or hint would be appreciated!
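For reference, the resolution noted in the comment above was to launch the same script with PyTorch's distributed launcher (one process per GPU) rather than the single-process `DataParallel` path; a sketch reusing the arguments above:
```bash
# one process per GPU; the remaining script arguments stay exactly as in the command above
python -m torch.distributed.launch --nproc_per_node=3 \
    examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path /home/code-base/user_space/saved_models/pytorch/reddit_tifu/ \
    --do_predict --predict_with_generate \
    --test_file $DS_BASE_DIR/test.json \
    --output_dir /home/code-base/user_space/saved_models/pegasus/ \
    --per_device_eval_batch_size=3 \
    --text_column text --summary_column summary --num_beams 5
```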
## Environment info
- `transformers` version: 4.7.0 dev
- Platform: Ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12127/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12126/comments | https://api.github.com/repos/huggingface/transformers/issues/12126/events | https://github.com/huggingface/transformers/issues/12126 | 919,408,065 | MDU6SXNzdWU5MTk0MDgwNjU= | 12,126 | [Performance] Tracking open Issues and PRs (pytorch transformers) | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2690307185,
"node_id": "MDU6TGFiZWwyNjkwMzA3MTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Performance",
"name": "Performance",
"color": "207F32",
"default": false,
"description": ""
},
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00 If I want to work on this issue, should I pick one of those issues to keep track of its performance?\r\nCan you also tell me how I can keep track of the performances? Can you give me some guidance?\r\n",
"Hi @JuheonChu, this is not an Issue to work on. As the title says this is a collection of pointers to track other Issues. It's dated but many issues that it links to are still valid. So you can click on the issue that resonates with you and discuss the details there - not here. \r\n\r\nI hope this addresses your question."
] | 1,623 | 1,675 | null | CONTRIBUTOR | null | Let's use this Issue to track performance issues and enhancement requests, so it's easier to prioritize the work.
**This is for pytorch `transformers`**
Also, I will label it as a `Good Difficult Issue` in case someone is ready for a challenging but rewarding experience of figuring things out. If you do want to take the challenge, comment in the corresponding Issue/PR that resonates with you so others know you're working on it.
If I missed any other relevant open performance-related Issues/PRs that need attention, please comment below.
## Regression:
- [ ] https://github.com/huggingface/transformers/pull/11218 Regression after Bart-like refactoring - need to compare the original Bart refactoring PR since most likely the regression happened there.
- [ ]
## Odd slowness:
- [ ] https://github.com/huggingface/transformers/issues/10816 figuring out why eval with --fp16_full_eval is 25% slower
- [ ]
## Fused kernels possibilities:
- [ ] https://github.com/huggingface/transformers/issues/11368 Megatron fused CUDA kernels to improve Hugging Face model classes' scalability
- [ ] research pytorch kernels?
- [ ] I know Deepspeed has various kernels that we might be able to use
## Faster / leaner startup / module loading
- [ ] https://github.com/huggingface/transformers/issues/12274 - skip storage allocation which gets dropped for pretrained weights
## Faster optimizers
- [ ] https://github.com/huggingface/transformers/issues/12084 - a proposal to port `MemoryEfficientFP16Optimizer` from fairseq
- [ ] https://github.com/huggingface/transformers/issues/9965 - `torch.optim._multi_tensor` faster optimizers - having some bottleneck in the test script - need to profile
## Scalability
- [ ] https://github.com/huggingface/transformers/issues/10321 Tensor Parallelism
## Deepspeed-specific features
- [ ] https://github.com/huggingface/transformers/issues/9606 a list of features that can be integrated
- [ ] https://github.com/huggingface/transformers/issues/12273 - make `from_pretrained` loading faster
## Tests
- [ ] No issue yet, but we really need to add performance regression tests
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12126/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12126/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12125/comments | https://api.github.com/repos/huggingface/transformers/issues/12125/events | https://github.com/huggingface/transformers/pull/12125 | 919,332,478 | MDExOlB1bGxSZXF1ZXN0NjY4NjY5MjA2 | 12,125 | Correct typo in summary of tasks doc | {
"login": "dataista0",
"id": 4383443,
"node_id": "MDQ6VXNlcjQzODM0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4383443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dataista0",
"html_url": "https://github.com/dataista0",
"followers_url": "https://api.github.com/users/dataista0/followers",
"following_url": "https://api.github.com/users/dataista0/following{/other_user}",
"gists_url": "https://api.github.com/users/dataista0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dataista0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dataista0/subscriptions",
"organizations_url": "https://api.github.com/users/dataista0/orgs",
"repos_url": "https://api.github.com/users/dataista0/repos",
"events_url": "https://api.github.com/users/dataista0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dataista0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12125/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12125",
"html_url": "https://github.com/huggingface/transformers/pull/12125",
"diff_url": "https://github.com/huggingface/transformers/pull/12125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12125.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12124/comments | https://api.github.com/repos/huggingface/transformers/issues/12124/events | https://github.com/huggingface/transformers/pull/12124 | 919,310,472 | MDExOlB1bGxSZXF1ZXN0NjY4NjQ5MDU0 | 12,124 | [style] consistent nn. and nn.functional | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/issues/11600 this PR normalizes to `nn.functional.foo()` replacing `F.` and `torch.nn.` with `nn.`.
This is all automated by:
```
# deal with torch.nn
perl -pi -e 's|^import torch\n|from torch import nn\nimport torch\n|' `grep -Ilr torch.nn src`
find src -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find src -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find src -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find src -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find src -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
git checkout src/transformers/data/data_collator.py
perl -pi -e 's|import torch||' src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py
make fixup
```
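The net effect on call sites looks roughly like this (illustrative snippet, not taken from the actual diff):
```python
import torch
from torch import nn

scores = torch.randn(2, 4)

# before: import torch.nn.functional as F; probs = F.softmax(scores, dim=-1)
probs = nn.functional.softmax(scores, dim=-1)

# before: proj = torch.nn.Linear(4, 4)
proj = nn.Linear(4, 4)
hidden = proj(probs)
```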
This is just `src/transformers` for now. If you're happy with it, I can do the same for `templates`, `tests` and `examples` next.
To kind reviewers: this is a massive auto-rewrite. If you notice any missed patterns, please point one instance out to me so I can adjust the regex to catch them all (since we still need to do at least `tests`/`examples` too).
@sgugger, @LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12124",
"html_url": "https://github.com/huggingface/transformers/pull/12124",
"diff_url": "https://github.com/huggingface/transformers/pull/12124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12124.patch",
"merged_at": 1623689069000
} |
https://api.github.com/repos/huggingface/transformers/issues/12123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12123/comments | https://api.github.com/repos/huggingface/transformers/issues/12123/events | https://github.com/huggingface/transformers/pull/12123 | 919,234,975 | MDExOlB1bGxSZXF1ZXN0NjY4NTgwNTI4 | 12,123 | [optim] implement AdafactorSchedule | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Correct me if I am wrong, but I see two flaws with the current solution:\r\n1) Only the first learning rate is returned as float, the following ones are tensors of size 1 - gives error while trying to pickle the logging history,\r\n2) Adafactor has separate learning rates for each of the network components (Linear layers, normalizations...). The current solution gives only the LR of the first component, usually the embedding matrix.\r\n",
"In other words you're saying this was a half-baked solution. It is very much so. The original workaround idea was just to return a dumb number, to make things work with HF Trainer, as Adafactor wasn't designed to share its LRs with other components.\r\n\r\n@LukasStankevicius, would you like to enhance my initial hack to fully support the features you mentioned lacking/incomplete? It surely could use some TLC.",
"For my own use, I modified Adafactor scheduler as follows:\r\n```python\r\nfrom transformers.optimization import AdafactorSchedule\r\n\r\nclass MyAdafactorSchedule(AdafactorSchedule):\r\n def get_lr(self):\r\n opt = self.optimizer\r\n if \"step\" in opt.state[opt.param_groups[0][\"params\"][0]]:\r\n lrs = [opt._get_lr(group, opt.state[p]).item() for group in opt.param_groups for p in group[\"params\"]]\r\n else:\r\n lrs = []\r\n return [lrs]\r\n```\r\n\r\nNow it does not give errors while pickling logging history and reports learning rates for all components. However, it pollutes the logs (a single logged step may contain a list of over 100 learning rates).\r\n\r\nYou may average, but then, that is a point of logging lr at all?\r\nSo, I do not know the optimal solution here. Maybe just warning in documentation about Adafactor learning rates."
] | 1,623 | 1,633 | 1,623 | CONTRIBUTOR | null | Currently Adafactor doesn't use an external scheduler and doesn't expose its lr values, and especially as reported in https://github.com/huggingface/transformers/issues/11612 the Trainer can't work without a scheduler, so this PR:
- implements `AdafactorSchedule` which is a proxy to `Adafactor` and can pull the lr values out of it
- adds a basic test
- updates docs
The implementation is somewhat hackish, but it's good enough for now.
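A minimal usage sketch (assumes the recommended Adafactor settings from the docs; the `Trainer` wiring is only shown in a comment):
```python
from torch import nn
from transformers.optimization import Adafactor, AdafactorSchedule

model = nn.Linear(4, 2)  # stand-in for a real transformers model
optimizer = Adafactor(
    model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxies the lr values Adafactor computes internally

# then e.g.: Trainer(model=model, args=args, train_dataset=ds, optimizers=(optimizer, lr_scheduler))
```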
Fixes: https://github.com/huggingface/transformers/issues/11612
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12123",
"html_url": "https://github.com/huggingface/transformers/pull/12123",
"diff_url": "https://github.com/huggingface/transformers/pull/12123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12123.patch",
"merged_at": 1623689029000
} |
https://api.github.com/repos/huggingface/transformers/issues/12122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12122/comments | https://api.github.com/repos/huggingface/transformers/issues/12122/events | https://github.com/huggingface/transformers/pull/12122 | 919,227,193 | MDExOlB1bGxSZXF1ZXN0NjY4NTczNjgz | 12,122 | Model card defaults | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @sgugger , really cool feature :+1: \r\n\r\nIt would also be a good feature to distinguish between dev and test score :)",
"@LysandreJik yeah no let's not add it as it's not required\r\n\r\nWill be easier to maintain a mapping on the hub's side if it's not (needlessly) overridden. cc @osanseviero cf. https://github.com/huggingface/huggingface_hub/pull/109 in \"How is a model's type of inference API and widget determined?\""
] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
This PR adds some better defaults to the auto-generated model cards for:
- the dataset names and tags
- the checkpoint it's fine-tuned from
- the type of task
As an example, on the classic fine-tuning of bert using the Trainer on GLUE, this is what we get for the metadata without telling the Trainer anything:
```
---
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8915254237288135
---
```
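Nothing extra is needed on the user side to get this; the card comes out of the usual Trainer flow (sketch -- `trainer` is assumed to be set up as in the standard GLUE fine-tuning example):
```python
# `trainer` is assumed to be a Trainer configured as in the standard GLUE fine-tuning example
trainer.train()
trainer.save_model()
trainer.create_model_card()   # writes <output_dir>/README.md with the metadata shown above
# trainer.push_to_hub() would also upload the generated card
```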
As a side note, the implementation completes the work begun on a file with the mapping of the auto model names (to avoid importing all models) in order to properly guess the class name. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12122/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12122",
"html_url": "https://github.com/huggingface/transformers/pull/12122",
"diff_url": "https://github.com/huggingface/transformers/pull/12122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12122.patch",
"merged_at": 1623787297000
} |
https://api.github.com/repos/huggingface/transformers/issues/12121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12121/comments | https://api.github.com/repos/huggingface/transformers/issues/12121/events | https://github.com/huggingface/transformers/pull/12121 | 919,202,625 | MDExOlB1bGxSZXF1ZXN0NjY4NTUxNTY3 | 12,121 | Don't log anything before logging is setup in examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
As flagged in #12090, the examples emit some log messages before logging is properly set up. This PR fixes that.
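For context, a simplified sketch of the ordering the example scripts aim for (format string and message are illustrative, not the exact code in the examples):
```python
import logging
import sys

logger = logging.getLogger(__name__)


def main():
    # Configure logging first...
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    logger.setLevel(logging.INFO)

    # ...and only start emitting messages afterwards, so nothing is logged
    # with the default (unconfigured) settings.
    logger.info("Starting run with the fully configured logger")


if __name__ == "__main__":
    main()
```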
Fixes #12090 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12121/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12121",
"html_url": "https://github.com/huggingface/transformers/pull/12121",
"diff_url": "https://github.com/huggingface/transformers/pull/12121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12121.patch",
"merged_at": 1623672213000
} |
https://api.github.com/repos/huggingface/transformers/issues/12120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12120/comments | https://api.github.com/repos/huggingface/transformers/issues/12120/events | https://github.com/huggingface/transformers/issues/12120 | 919,169,842 | MDU6SXNzdWU5MTkxNjk4NDI= | 12,120 | ValueError in predict function for ClassificationModel | {
"login": "GoyalMansi",
"id": 47935154,
"node_id": "MDQ6VXNlcjQ3OTM1MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/47935154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GoyalMansi",
"html_url": "https://github.com/GoyalMansi",
"followers_url": "https://api.github.com/users/GoyalMansi/followers",
"following_url": "https://api.github.com/users/GoyalMansi/following{/other_user}",
"gists_url": "https://api.github.com/users/GoyalMansi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GoyalMansi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GoyalMansi/subscriptions",
"organizations_url": "https://api.github.com/users/GoyalMansi/orgs",
"repos_url": "https://api.github.com/users/GoyalMansi/repos",
"events_url": "https://api.github.com/users/GoyalMansi/events{/privacy}",
"received_events_url": "https://api.github.com/users/GoyalMansi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. -->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): bert
The problem arises when using:
* [ ] my own modified scripts:
I noticed that when I decrease my train batch size from 32 to 16, I get the following bug:

(Please note that training for 10 epochs happens successfully)
The tasks I am working on is:
* [ ] my own task or dataset:
My own dataset for binary classification of text documents.
## Expected behavior
Evaluation should happen as expected. I am not sure what to fix or how to investigate this; I could not find much about it online.
ValueError: could not broadcast input array from shape (16,2) into shape (4,2) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12120/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12119/comments | https://api.github.com/repos/huggingface/transformers/issues/12119/events | https://github.com/huggingface/transformers/pull/12119 | 919,150,805 | MDExOlB1bGxSZXF1ZXN0NjY4NTA0ODYx | 12,119 | Adding ZeroShotImageClassificationPipeline | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Pinging @patil-suraj @LysandreJik ",
"Friendly ping @LysandreJik @patil-suraj",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"unstale?",
"> This is looking good, thanks a lot for adding this! Left a few comments.\r\n\r\nThanks for this, many places where doc/code was out of sync/bad copy paste here.\r\n\r\n> Could you explain a bit how batching is handled ?\r\n\r\nhttps://github.com/huggingface/transformers/pull/14225\r\n\r\nThis should contain more information on how it's handled internally. The pseudo code and images try to convey how it's done.\r\nTell me how this could be improved, it should belong in the doc actually.",
"@LysandreJik Can you do a second quick review please ? I think adding new pipeline merits a bit more eyes than 4.",
"@patil-suraj Do you think we can add a sigmoid to get `multi_label` or are the outputs of the model not compatible with this ?\r\n@FrancescoSaverioZuppichini ",
"> @patil-suraj Do you think we can add a sigmoid to get `multi_label` or are the outputs of the model not compatible with this ? @FrancescoSaverioZuppichini\r\n\r\nTechnically, yes. But I don't know how well that will work.",
"Ok let's drop it then. I actually thought about it with maybe multiple prompts too (This photo is about ..., This photo is not about ...) to recover somehow the entailment thing, but CLIP was not trained with this in mind so let's just skip it.)\r\n",
"@LysandreJik friendly ping to get a third opinion before merging.",
"After thinking about it, sigmoid won't probably work well since it wasn't trained directly with it. We could (in theory) normalize the `image_logits` and return the ones that are more close to biggest other (meaning they all \"fit\" the image in the same way). Following @patil-suraj comment, I'm not sure how well this works either.",
"time to continue the widget PR? https://github.com/huggingface/huggingface_hub/pull/118 π"
] | 1,623 | 1,645 | 1,645 | CONTRIBUTOR | null | # What does this PR do?
- Based on CLIP
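A rough usage sketch of what the pipeline could look like once merged (the task name, checkpoint and `candidate_labels` argument are assumptions modelled on the existing zero-shot text pipeline, not the final API):
```python
from transformers import pipeline

# Hypothetical usage; the CLIP checkpoint and argument names are assumptions.
classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")

preds = classifier(
    "path/to/some_image.jpg",
    candidate_labels=["a photo of a cat", "a photo of a dog", "a photo of a car"],
)
print(preds)  # e.g. a list of {"label": ..., "score": ...} dicts sorted by score
```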
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@suraj-patil @LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12119",
"html_url": "https://github.com/huggingface/transformers/pull/12119",
"diff_url": "https://github.com/huggingface/transformers/pull/12119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12119.patch",
"merged_at": 1645605702000
} |
https://api.github.com/repos/huggingface/transformers/issues/12118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12118/comments | https://api.github.com/repos/huggingface/transformers/issues/12118/events | https://github.com/huggingface/transformers/issues/12118 | 919,130,832 | MDU6SXNzdWU5MTkxMzA4MzI= | 12,118 | Passing a custom stopping_criteria list to model.generate() yields a multiple value error for that keyword arg | {
"login": "bitbanger",
"id": 120894,
"node_id": "MDQ6VXNlcjEyMDg5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/120894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bitbanger",
"html_url": "https://github.com/bitbanger",
"followers_url": "https://api.github.com/users/bitbanger/followers",
"following_url": "https://api.github.com/users/bitbanger/following{/other_user}",
"gists_url": "https://api.github.com/users/bitbanger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bitbanger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bitbanger/subscriptions",
"organizations_url": "https://api.github.com/users/bitbanger/orgs",
"repos_url": "https://api.github.com/users/bitbanger/repos",
"events_url": "https://api.github.com/users/bitbanger/events{/privacy}",
"received_events_url": "https://api.github.com/users/bitbanger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @bitbanger,\r\n\r\nCould you provide a reproducible code snippet that we could just copy paste into a python shell to reproduce the error? :-) Thanks!",
"Hi there! Thanks for your response! Sure, here you go. I've confirmed that this code yields the error when run in the environment described in my report:\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2DoubleHeadsModel \r\nfrom transformers.generation_stopping_criteria import StoppingCriteria, StoppingCriteriaList \r\n\r\nclass DummyStopCriterion(StoppingCriteria): \r\n def __call__(self, input_ids: torch.LongTensor, score: torch.FloatTensor, **kwargs): \r\n return len(input_ids.squeeze()) > 10\r\n\r\ntok = GPT2Tokenizer.from_pretrained('distilgpt2') \r\nmodel = GPT2DoubleHeadsModel.from_pretrained('distilgpt2') \r\n\r\ninput_ids = tok.encode('This should reproduce the bug', return_tensors='pt') \r\nmodel.generate(input_ids, stopping_criteria=StoppingCriteriaList([DummyStopCriterion()]))\r\n```",
"Adding a bit more context,\r\n\r\nthe error is \r\n```\r\ntransformers.generation_utils.GenerationMixin.greedy_search() got multiple values for keyword argument 'stopping_criteria'\r\n```\r\n\r\nThe reason is, stopping_criteria is **not** a valid argument to `generate` so it get passed as `model_kwargs` which in turn are passed to `greedy` which already receives `stopping_criteria` because it gets created within `generate`.\r\n\r\nThe proposed solution is simply to enable it (with `logits_processor`) as a real argument of `generate` (doc should specify it's intended for users with know-how, most users should use simple arguments)\r\n\r\n\r\nwdyt ? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,628 | 1,628 | NONE | null | ---
name: "\U0001F41B Bug Report"
about: Submit a bug report to help us improve transformers
title: ''
labels: ''
assignees: ''
---
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: macOS-10.15.5-x86_64-i386-64bit
- Python version: 3.8.8
- PyTorch version (GPU?): 1.18.1 (no)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
- set model_kwargs programmatically: @patrickvonplaten
- set stopping_criteria programmatically: @Narsil
## Information
Model I am using (Bert, XLNet ...): GPT2DoubleHeadsModel (pretrained model: distilgpt2)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below): Any script I write that passes a custom StoppingCriteriaList via the stopping_criteria keyword arg of generation_utils.GenerationMixin.generate() seems to reproduce this issue.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below): a simple personal chatbot harness with a custom newline stopping criterion
## To reproduce
Steps to reproduce the behavior:
1. Load a trained model using transformers.generation_utils.GenerationMixin
2. Define a custom StoppingCriteria and StoppingCriteriaList
3. Pass the custom StoppingCriteriaList as a keyword arg to model.generate(), e.g. model.generate(...stopping_criteria=my_custom_list...)
The above steps will yield a "got multiple values for keyword argument 'stopping_criteria'" error message.
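A minimal sketch of the failing call (model, criterion and prompt are illustrative):
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
from transformers.generation_stopping_criteria import StoppingCriteria, StoppingCriteriaList


class DummyStopCriterion(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
        # Stop once more than 10 tokens have been produced.
        return input_ids.shape[-1] > 10


tok = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2DoubleHeadsModel.from_pretrained("distilgpt2")

input_ids = tok.encode("This should reproduce the bug", return_tensors="pt")
# Raises: got multiple values for keyword argument 'stopping_criteria'
model.generate(input_ids, stopping_criteria=StoppingCriteriaList([DummyStopCriterion()]))
```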
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Ideally, there would be no error message, and the stopping_criteria kwarg would be passed through normally. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12117/comments | https://api.github.com/repos/huggingface/transformers/issues/12117/events | https://github.com/huggingface/transformers/issues/12117 | 919,067,063 | MDU6SXNzdWU5MTkwNjcwNjM= | 12,117 | GPT Neo Tokenizers can't change BOS or EOS token | {
"login": "mallorbc",
"id": 39721523,
"node_id": "MDQ6VXNlcjM5NzIxNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/39721523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mallorbc",
"html_url": "https://github.com/mallorbc",
"followers_url": "https://api.github.com/users/mallorbc/followers",
"following_url": "https://api.github.com/users/mallorbc/following{/other_user}",
"gists_url": "https://api.github.com/users/mallorbc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mallorbc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mallorbc/subscriptions",
"organizations_url": "https://api.github.com/users/mallorbc/orgs",
"repos_url": "https://api.github.com/users/mallorbc/repos",
"events_url": "https://api.github.com/users/mallorbc/events{/privacy}",
"received_events_url": "https://api.github.com/users/mallorbc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi there, I just tried this but couldn't reproduce it. Here's the colab if you want to check https://colab.research.google.com/drive/1gGWMOdjF6wIVfUlo0XE1LfupY5T7ioVS?usp=sharing",
"I was using 4.6.1. Perhaps its been fixed. I ran your code as well and didn't see the issue."
] | 1,623 | 1,633 | 1,629 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- Platform: Linux-5.8.0-55-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: RTX 3090
- Using distributed or parallel set-up in script?: Using DeepSpeed
Conda env:
channels:
- pytorch
- nvidia
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- blas=1.0=mkl
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2021.5.30=ha878542_0
- certifi=2021.5.30=py37h89c1867_0
- cudatoolkit=11.1.74=h6bb024c_0
- ffmpeg=4.3=hf484d3e_0
- freetype=2.10.4=h5ab3b9f_0
- gmp=6.2.1=h2531618_2
- gnutls=3.6.15=he1e5248_0
- intel-openmp=2021.2.0=h06a4308_610
- joblib=1.0.1=pyhd8ed1ab_0
- jpeg=9b=h024ee3a_2
- lame=3.100=h7b6447c_0
- lcms2=2.12=h3be6417_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libblas=3.9.0=9_mkl
- libcblas=3.9.0=9_mkl
- libffi=3.3=he6710b0_2
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.5.0=h14aa051_19
- libgfortran4=7.5.0=h14aa051_19
- libiconv=1.15=h63c8f33_5
- libidn2=2.3.1=h27cfd23_0
- liblapack=3.9.0=9_mkl
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtasn1=4.16.0=h27cfd23_0
- libtiff=4.2.0=h85742a9_0
- libunistring=0.9.10=h27cfd23_0
- libuv=1.40.0=h7b6447c_0
- libwebp-base=1.2.0=h27cfd23_0
- lz4-c=1.9.3=h2531618_0
- mkl=2021.2.0=h06a4308_296
- mkl-service=2.3.0=py37h27cfd23_1
- mkl_fft=1.3.0=py37h42c9631_2
- mkl_random=1.2.1=py37ha9443f7_2
- ncurses=6.2=he6710b0_1
- nettle=3.7.2=hbbd107a_1
- numpy=1.20.2=py37h2d18471_0
- numpy-base=1.20.2=py37hfae3a4d_0
- olefile=0.46=py37_0
- openh264=2.1.0=hd408876_0
- openssl=1.1.1k=h27cfd23_0
- pillow=8.2.0=py37he98fc37_0
- pip=21.1.1=py37h06a4308_0
- python=3.7.10=hdb3f193_0
- python_abi=3.7=1_cp37m
- pytorch=1.8.1=py3.7_cuda11.1_cudnn8.0.5_0
- readline=8.1=h27cfd23_0
- scikit-learn=0.23.2=py37hddcf8d6_3
- scipy=1.5.3=py37h8911b10_0
- setuptools=52.0.0=py37h06a4308_0
- six=1.15.0=py37h06a4308_0
- sqlite=3.35.4=hdfb4753_0
- threadpoolctl=2.1.0=pyh5ca1d4c_0
- tk=8.6.10=hbc83047_0
- torchaudio=0.8.1=py37
- torchvision=0.9.1=py37_cu111
- typing_extensions=3.7.4.3=pyha847dfd_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
- pip:
- chardet==4.0.0
- click==8.0.1
- datasets==1.7.0
- deepspeed==0.4.0+8def3cb
- dill==0.3.3
- filelock==3.0.12
- fsspec==2021.6.0
- huggingface-hub==0.0.8
- idna==2.10
- importlib-metadata==4.5.0
- multiprocess==0.70.11.1
- ninja==1.10.0.post2
- packaging==20.9
- pandas==1.2.4
- protobuf==3.17.3
- psutil==5.8.0
- pyarrow==3.0.0
- pyparsing==2.4.7
- python-dateutil==2.8.1
- pytz==2021.1
- regex==2021.4.4
- requests==2.25.1
- sacremoses==0.0.45
- tensorboardx==1.8
- tokenizers==0.10.3
- tqdm==4.49.0
- transformers==4.6.1
- triton==0.4.2
- urllib3==1.26.5
- xxhash==2.0.2
- zipp==3.4.1
### Who can help
@LysandreJik seems to be the one to tag as this is an issue with the tokenizer
## Information
When loading the GPT Neo tokenizer with either GPT2Tokenizer or AutoTokenizer, you are unable to change the EOS or BOS tokens by passing arguments.
Model I am using: GPT Neo 2.7B and 1.3B and its Tokenizer
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
I am trying to finetune the model using DeepSpeed and a custom dataset
## To reproduce
Steps to reproduce the behavior:
1. Load GPT Neo Tokenizer from pretrained using either AutoTokenizer or GPT2Tokenizer
2. Pass arguments to change EOS and BOS tokens
3. Print out the tokens using tokenizer.bos_token and tokenizer.eos_token
4. Notice that it has not changed
5. Do steps 1-3 for another model, say gpt2 and notice that it does change
```python
tokenizer = AutoTokenizer.from_pretrained(
"gpt2-xl",bos_token='<|beginoftext|>',
eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
print()
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/gpt-neo-2.7B",bos_token='<|beginoftext|>',
eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
print()
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/gpt-neo-1.3B",bos_token='<|beginoftext|>',
eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
quit()
```
That gives this:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|beginoftext|>
<|endoftext|>
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|endoftext|>
<|endoftext|>
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|endoftext|>
<|endoftext|>
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The expected behavior is that the values of the bos and eos tokens change. It does not change though.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12116/comments | https://api.github.com/repos/huggingface/transformers/issues/12116/events | https://github.com/huggingface/transformers/pull/12116 | 918,965,396 | MDExOlB1bGxSZXF1ZXN0NjY4MzM2Njk4 | 12,116 | Enable add_prefix_space on run_ner if necessary | {
"login": "kumapo",
"id": 70637,
"node_id": "MDQ6VXNlcjcwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumapo",
"html_url": "https://github.com/kumapo",
"followers_url": "https://api.github.com/users/kumapo/followers",
"following_url": "https://api.github.com/users/kumapo/following{/other_user}",
"gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumapo/subscriptions",
"organizations_url": "https://api.github.com/users/kumapo/orgs",
"repos_url": "https://api.github.com/users/kumapo/repos",
"events_url": "https://api.github.com/users/kumapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumapo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for your PR, but this is too complicated. The examples are just example, and should be modified by users directly for specific use-cases, we can't support everything out of the box.",
"@sgugger I agree with you it's somewhat complicated.\r\n\r\nSo, I've pushed codes that simplified as possible and also support for training roberta.\r\n\r\nActually I've experienced #9607 on training roberta for ner task.\r\nRefering to #9607, only roberta-base and roberta-large have the issue. \r\nSo, it's enough that run_ner supports roberta for now.\r\n\r\nIf you think it's still too complicated, I will close this pr and just use it for me.",
"Thank you for your suggestion!\r\nLet me update the pr.",
"I've updated the pr.\r\n@sgugger, please take a look through it."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Enable `add_prefix_space` for the tokenizer in run_ner and run_ner_no_trainer when it needs to be instantiated with that option.
Fixes #9607
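For illustration, a minimal sketch of the instantiation this enables (assuming a RoBERTa checkpoint, one of the tokenizers that requires the flag for pre-tokenized input):
```python
from transformers import AutoTokenizer

# Byte-level BPE tokenizers such as roberta-base need add_prefix_space=True
# before they can encode pre-split words (is_split_into_words=True).
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)

words = ["EU", "rejects", "German", "call"]
encoding = tokenizer(words, is_split_into_words=True)
print(encoding.tokens())
```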
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I've tested it:
```
% python -m pytest -n auto --dist=loadfile -s -v ./examples/
...
Results (256.02s):
24 passed
21 skipped
```
additionally checked style and quality then fixed it up:
```
% make style && make quality && make fixup
...
All done! β¨ π° β¨
```
## Who can review?
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12116",
"html_url": "https://github.com/huggingface/transformers/pull/12116",
"diff_url": "https://github.com/huggingface/transformers/pull/12116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12116.patch",
"merged_at": 1623764001000
} |
https://api.github.com/repos/huggingface/transformers/issues/12115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12115/comments | https://api.github.com/repos/huggingface/transformers/issues/12115/events | https://github.com/huggingface/transformers/issues/12115 | 918,788,539 | MDU6SXNzdWU5MTg3ODg1Mzk= | 12,115 | Hosted inference api keeps returning 400 error | {
"login": "kevhahn97",
"id": 5675167,
"node_id": "MDQ6VXNlcjU2NzUxNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5675167?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevhahn97",
"html_url": "https://github.com/kevhahn97",
"followers_url": "https://api.github.com/users/kevhahn97/followers",
"following_url": "https://api.github.com/users/kevhahn97/following{/other_user}",
"gists_url": "https://api.github.com/users/kevhahn97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevhahn97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevhahn97/subscriptions",
"organizations_url": "https://api.github.com/users/kevhahn97/orgs",
"repos_url": "https://api.github.com/users/kevhahn97/repos",
"events_url": "https://api.github.com/users/kevhahn97/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevhahn97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found it resolved by fixing model config. Now it's working"
] | 1,623 | 1,624 | 1,624 | NONE | null | I'm not sure if it's okay to open an issue for this topic, but I couldn't find a better place to share my problem, so I'm making an issue.
### Problem description
When I try to run inference on a public model (facebook/blenderbot-1B-distill), it keeps returning a 400 error with the message below, whether I try it on the model hub or through an HTTP request.
`'We could not properly load your model with any of the classes {model_classes}, are you sure this model can be loaded with the specified task ?'`
I used this model without problems a few days ago, but now it's not working. May I ask for help? Any advice would be appreciated.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12115/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12114/comments | https://api.github.com/repos/huggingface/transformers/issues/12114/events | https://github.com/huggingface/transformers/issues/12114 | 918,753,672 | MDU6SXNzdWU5MTg3NTM2NzI= | 12,114 | Get the loss in LongformerForQuestionAnswering for fine-tuning | {
"login": "hanane-djeddal",
"id": 48019914,
"node_id": "MDQ6VXNlcjQ4MDE5OTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/48019914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanane-djeddal",
"html_url": "https://github.com/hanane-djeddal",
"followers_url": "https://api.github.com/users/hanane-djeddal/followers",
"following_url": "https://api.github.com/users/hanane-djeddal/following{/other_user}",
"gists_url": "https://api.github.com/users/hanane-djeddal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanane-djeddal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanane-djeddal/subscriptions",
"organizations_url": "https://api.github.com/users/hanane-djeddal/orgs",
"repos_url": "https://api.github.com/users/hanane-djeddal/repos",
"events_url": "https://api.github.com/users/hanane-djeddal/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanane-djeddal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In HuggingFace Transformers, `xxxForQuestionAnswering` models don't take a `labels` argument as input. Rather, one should provide `start_positions` and `end_positions`. These indicate which token are the start of the answer, and which token are the end of the answer.\r\n\r\nCheck out this notebook which showcases how to fine-tune a model for question-answering: https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | Hello,
I'm trying to fine-tune **LongformerForQuestionAnswering** on a custom dataset. I've written my own training script (without the Hugging Face _Trainer_), and I need the model's loss for that. On the Longformer docs page, it's written that:
> **loss** (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
Meaning that the model is supposed to return **loss** when the input _label_ is provided; however, the model takes no such input (it's not mentioned in the doc, and the model raises an error when passing the input (label=...)).
I've noticed that for **LongformerForMaskedLM** this is not an issue since the model does take _label_ as an input.
I am wondering if there is a way to get the **loss** from LongformerForQuestionAnswering and perhaps to correct this on the docs page.
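For concreteness, a minimal sketch of how the span-extraction loss can be obtained by passing start_positions and end_positions directly (the checkpoint and span indices below are placeholders; in practice the indices come from the tokenizer's offset mapping):
```python
import torch
from transformers import LongformerTokenizerFast, LongformerForQuestionAnswering

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-base-4096")

question = "Who wrote the report?"
context = "The report was written by Jane Doe in 2019."
inputs = tokenizer(question, context, return_tensors="pt")

# Placeholder span indices for the answer tokens.
outputs = model(**inputs, start_positions=torch.tensor([9]), end_positions=torch.tensor([10]))
print(outputs.loss)  # span-extraction loss, ready for outputs.loss.backward()
```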
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12113/comments | https://api.github.com/repos/huggingface/transformers/issues/12113/events | https://github.com/huggingface/transformers/pull/12113 | 918,665,066 | MDExOlB1bGxSZXF1ZXN0NjY4MDcwOTA3 | 12,113 | Optimizing away the `fill-mask` pipeline. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ping @LysandreJik Do you mind doing a quick review ?\r\n\r\n(Tests do not have to be modified, for this to work, but it will output a lot of warning and be slower than necessary)",
"Thanks for the ping, reviewing now!",
"That's a very neat idea !\r\n\r\nIt must be quite slow though, right ?",
"Yes, definitely too slow to actually put in tests and generally a bad idea to rely on model hub checkpoints for this I think, it was just the quickest way to ensure that all tokenizer/model pairs really do continue working",
"Yes, maybe have a script or something for larger refactors for sure."
] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
- Don't send anything to the tokenizer unless needed. The vocab check is much faster.
- Keep BC by sending data to the tokenizer when needed. Users who handle the warning messages will see the performance benefits again.
- Make `targets` and `top_k` work together better: `top_k` cannot be higher than `len(targets)`, but it can still be smaller (see the usage sketch below).
- Actually simplify the `target_ids` in case of duplicates (they can happen because we're parsing raw strings).
- Removed useless code that failed on empty strings. It worked only if the empty string was in first position; moved to ignoring them instead.
- Changed the related tests, as only the tests would fail correctly (having the incorrect value in first position).
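For illustration, a usage sketch of the `targets`/`top_k` interplay (the checkpoint and targets are arbitrary examples):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="distilroberta-base")

# Only the candidate tokens in `targets` are scored; `top_k` is capped at len(targets).
unmasker(
    "The capital of France is <mask>.",
    targets=[" Paris", " Lyon", " Berlin"],
    top_k=2,
)
```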
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12099
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @EtaoinWu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12113",
"html_url": "https://github.com/huggingface/transformers/pull/12113",
"diff_url": "https://github.com/huggingface/transformers/pull/12113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12113.patch",
"merged_at": 1624437485000
} |
https://api.github.com/repos/huggingface/transformers/issues/12112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12112/comments | https://api.github.com/repos/huggingface/transformers/issues/12112/events | https://github.com/huggingface/transformers/issues/12112 | 918,584,304 | MDU6SXNzdWU5MTg1ODQzMDQ= | 12,112 | How to pass `past_key_values` to GPTNeo model? | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"never mind. I solved it.",
"How? I find the shape of `past_key_values` is very strange."
] | 1,623 | 1,627 | 1,623 | NONE | null | How to pass `past_key_values` to the GPTNeo model?
I want to pass `past_key_values` to the GPTNeo model. I set `past_key_values` to a Tuple[Tuple[torch.Tensor]] of shape `(num_layers, 2, batch_size, seq_length, num_heads, d_head)`, but I got the error message below.
```
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/models/prefix_tuning_gpt_neo.py", line 61, in forward
use_cache=True
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/models/gpt_neo_for_causal_lm_wrapper.py", line 87, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 866, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 563, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 505, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 412, in forward
key_value_hidden_states = torch.cat([past, hidden_states], dim=1)
RuntimeError: Tensors must have same number of dimensions: got 3 and 4
```
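For reference, a sketch of the pattern that avoids building the cache by hand: run a first forward pass with `use_cache=True` and feed back the `past_key_values` that the model itself returns (judging from the traceback, GPT Neo's local-attention layers appear to store their past in a different shape than the global ones, so a hand-built uniform tuple does not match). The checkpoint and texts are just examples:
```python
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prefix = tokenizer("My prefix text", return_tensors="pt")
with torch.no_grad():
    prefix_out = model(**prefix, use_cache=True)

# Reuse the cache produced by the model itself for the continuation tokens,
# with an attention mask covering past + new tokens.
next_ids = tokenizer(" and the continuation", return_tensors="pt").input_ids
full_mask = torch.ones(1, prefix.input_ids.shape[-1] + next_ids.shape[-1], dtype=torch.long)
out = model(
    input_ids=next_ids,
    attention_mask=full_mask,
    past_key_values=prefix_out.past_key_values,
    use_cache=True,
)
```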
@patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12111/comments | https://api.github.com/repos/huggingface/transformers/issues/12111/events | https://github.com/huggingface/transformers/pull/12111 | 918,549,495 | MDExOlB1bGxSZXF1ZXN0NjY3OTY3MDg5 | 12,111 | add readme for flax clm | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
Update the language modeling readme for CLM.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12111",
"html_url": "https://github.com/huggingface/transformers/pull/12111",
"diff_url": "https://github.com/huggingface/transformers/pull/12111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12111.patch",
"merged_at": 1623663235000
} |
https://api.github.com/repos/huggingface/transformers/issues/12110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12110/comments | https://api.github.com/repos/huggingface/transformers/issues/12110/events | https://github.com/huggingface/transformers/pull/12110 | 918,382,719 | MDExOlB1bGxSZXF1ZXN0NjY3ODE3MDcy | 12,110 | Fix head masking generate tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes:
FAILED tests/test_modeling_bart.py::BartModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_bigbird_pegasus.py::BigBirdPegasusModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_blenderbot.py::BlenderbotModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_blenderbot_small.py::BlenderbotSmallModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_fsmt.py::FSMTModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_led.py::LEDModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_m2m_100.py::M2M100ModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_marian.py::MarianModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_mbart.py::MBartModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_pegasus.py::PegasusModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_speech_to_text.py::Speech2TextModelTest::test_generate_with_head_masking
on GPU
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12110",
"html_url": "https://github.com/huggingface/transformers/pull/12110",
"diff_url": "https://github.com/huggingface/transformers/pull/12110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12110.patch",
"merged_at": 1623398647000
} |
https://api.github.com/repos/huggingface/transformers/issues/12109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12109/comments | https://api.github.com/repos/huggingface/transformers/issues/12109/events | https://github.com/huggingface/transformers/issues/12109 | 918,302,233 | MDU6SXNzdWU5MTgzMDIyMzM= | 12,109 | Why attention mask is -10000 but not * 0? | {
"login": "chenxran",
"id": 45041313,
"node_id": "MDQ6VXNlcjQ1MDQxMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/45041313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenxran",
"html_url": "https://github.com/chenxran",
"followers_url": "https://api.github.com/users/chenxran/followers",
"following_url": "https://api.github.com/users/chenxran/following{/other_user}",
"gists_url": "https://api.github.com/users/chenxran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenxran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenxran/subscriptions",
"organizations_url": "https://api.github.com/users/chenxran/orgs",
"repos_url": "https://api.github.com/users/chenxran/repos",
"events_url": "https://api.github.com/users/chenxran/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenxran/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nsearching previous Github issues, I found this one, which might help you: #542",
"Thanks for the reference! While I consider that maybe implementing a softmax function that allows masking after e^x may also be an approach to implement `attention_mask`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | I am reading the RoBERTa code and found that the way padding tokens are kept out of self-attention is by subtracting 10000 from their attention scores, in the function `get_extended_attention_mask`.
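For concreteness, here is a small sketch of what that additive mask does (my own toy numbers, not code from the library):
```python
import torch

scores = torch.tensor([[2.0, 1.0, 0.5, 0.3]])          # raw attention scores for one query
attention_mask = torch.tensor([[1.0, 1.0, 1.0, 0.0]])   # last position is padding

# what get_extended_attention_mask effectively builds: 0 for real tokens, -10000 for padding
extended_mask = (1.0 - attention_mask) * -10000.0

# the mask is added to the scores before softmax, so exp(score - 10000) is numerically zero
probs = torch.softmax(scores + extended_mask, dim=-1)
print(probs)  # the padding position ends up with ~0 probability
```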
I am wondering: why not implement the mask by directly multiplying the values of the padding tokens by zero? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12108/comments | https://api.github.com/repos/huggingface/transformers/issues/12108/events | https://github.com/huggingface/transformers/issues/12108 | 918,208,661 | MDU6SXNzdWU5MTgyMDg2NjE= | 12,108 | How to access training loss in TrainerCallback? | {
"login": "saikatG",
"id": 5086807,
"node_id": "MDQ6VXNlcjUwODY4MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5086807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saikatG",
"html_url": "https://github.com/saikatG",
"followers_url": "https://api.github.com/users/saikatG/followers",
"following_url": "https://api.github.com/users/saikatG/following{/other_user}",
"gists_url": "https://api.github.com/users/saikatG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saikatG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saikatG/subscriptions",
"organizations_url": "https://api.github.com/users/saikatG/orgs",
"repos_url": "https://api.github.com/users/saikatG/repos",
"events_url": "https://api.github.com/users/saikatG/events{/privacy}",
"received_events_url": "https://api.github.com/users/saikatG/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | Hi,
How can I access the current loss in the `on_step` function of `TrainerCallback`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12108/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12107/comments | https://api.github.com/repos/huggingface/transformers/issues/12107/events | https://github.com/huggingface/transformers/issues/12107 | 918,117,006 | MDU6SXNzdWU5MTgxMTcwMDY= | 12,107 | How can I add a CNN layer on top of bert model? | {
"login": "zekaouinoureddine",
"id": 61702091,
"node_id": "MDQ6VXNlcjYxNzAyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/61702091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zekaouinoureddine",
"html_url": "https://github.com/zekaouinoureddine",
"followers_url": "https://api.github.com/users/zekaouinoureddine/followers",
"following_url": "https://api.github.com/users/zekaouinoureddine/following{/other_user}",
"gists_url": "https://api.github.com/users/zekaouinoureddine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zekaouinoureddine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zekaouinoureddine/subscriptions",
"organizations_url": "https://api.github.com/users/zekaouinoureddine/orgs",
"repos_url": "https://api.github.com/users/zekaouinoureddine/repos",
"events_url": "https://api.github.com/users/zekaouinoureddine/events{/privacy}",
"received_events_url": "https://api.github.com/users/zekaouinoureddine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nplease ask this question on the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.\r\n\r\nThanks. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | CONTRIBUTOR | null | ### Information
I'm working on a **binary classification task** and used the **BERT** model from the transformers library, with the custom model below:
```python
import torch.nn as nn
from transformers import BertModel

class BERT(nn.Module):
def __init__(self):
super(BERT, self).__init__()
self.bert = BertModel.from_pretrained(BERT_PATH, return_dict=False)
self.dropout = nn.Dropout(0.2)
self.out = nn.Linear(768, 1)
def forward(self, ids, mask, token_type_ids):
outputs = self.bert(ids, attention_mask=mask,token_type_ids=token_type_ids)
# Use the pooled output
output = self.dropout(outputs[1])
return self.out(output)
```
### What I'm looking for?
Now I'm looking to use a `CNN` layer on top of `BERT` with the following configurations to see how my model will perform:
```
self.cnn = nn.Sequential(
nn.Conv2d(? ? ?),
nn.ReLU(),
nn.MaxPool2d(? ? ?)
)
```
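As a point of reference, here is a minimal sketch of one dimension-consistent way to run a convolution over BERT's token embeddings. It is only an illustration of how the shapes can line up, not necessarily the final answer: it uses `Conv1d`/`AdaptiveMaxPool1d` over the sequence dimension rather than the `Conv2d`/`MaxPool2d` above, and the 256 output channels and kernel size 3 are arbitrary assumptions (`BERT_PATH` is the same placeholder as in the code above):
```python
import torch.nn as nn
from transformers import BertModel

class BERTCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained(BERT_PATH, return_dict=False)
        # Conv1d expects (batch, channels, length): treat the 768 hidden size as channels
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels=768, out_channels=256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # -> (batch, 256, 1), independent of sequence length
        )
        self.dropout = nn.Dropout(0.2)
        self.out = nn.Linear(256, 1)

    def forward(self, ids, mask, token_type_ids):
        sequence_output, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        x = sequence_output.permute(0, 2, 1)  # (batch, seq_len, 768) -> (batch, 768, seq_len)
        x = self.cnn(x).squeeze(-1)           # (batch, 256)
        return self.out(self.dropout(x))
```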
### The problem encountered.
I have already tried, but encountered errors related to the tensor dimensions. In your opinion, what configuration should I put in the sequential model to avoid the dimension-mismatch problem? If you can **copy-paste** my code and offer the final custom model with the right **Sequential model included**, I would be thankful. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12107/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12106/comments | https://api.github.com/repos/huggingface/transformers/issues/12106/events | https://github.com/huggingface/transformers/pull/12106 | 918,089,505 | MDExOlB1bGxSZXF1ZXN0NjY3NTU4NTY4 | 12,106 | Add GPT-J 6B support to the gpt-neo implementation | {
"login": "finetunej",
"id": 82650881,
"node_id": "MDQ6VXNlcjgyNjUwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/82650881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finetunej",
"html_url": "https://github.com/finetunej",
"followers_url": "https://api.github.com/users/finetunej/followers",
"following_url": "https://api.github.com/users/finetunej/following{/other_user}",
"gists_url": "https://api.github.com/users/finetunej/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finetunej/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finetunej/subscriptions",
"organizations_url": "https://api.github.com/users/finetunej/orgs",
"repos_url": "https://api.github.com/users/finetunej/repos",
"events_url": "https://api.github.com/users/finetunej/events{/privacy}",
"received_events_url": "https://api.github.com/users/finetunej/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Just as a note, we have a PyTorch checkpoint for GPT-J that we will be ready to upload once this PR goes through.",
"The conversion script in the top post generates a single file checkpoint, but for models of this size, I've found split up checkpoints usually more efficient to load and handle. Such split up checkpoints can be generated using [this conversion script](https://gist.github.com/finetuneanon/7dd417a31338a63f219a49702e0550db) and loaded as follows:\r\n\r\n```python\r\ntry:\r\n from collections.abc import MutableMapping\r\nexcept ImportError:\r\n from collections import MutableMapping\r\nfrom pathlib import Path\r\n\r\nclass Checkpoint(MutableMapping):\r\n def __init__(self, chkpt_dir, device=\"cpu\"):\r\n self.device = device\r\n self.chkpt_dir = Path(chkpt_dir)\r\n self.checkpoint = torch.load(str(chkpt_dir / Path(\"m.pt\")))\r\n def __len__(self):\r\n return len(self.checkpoint)\r\n def __getitem__(self, key):\r\n path = self.chkpt_dir / Path(self.checkpoint[key]).name\r\n return torch.load(str(path), map_location=self.device)\r\n def __setitem__(self, key, value):\r\n return\r\n def __delitem__(self, key, value):\r\n return\r\n def keys(self):\r\n return self.checkpoint.keys()\r\n def __iter__(self):\r\n for key in self.checkpoint:\r\n yield (key, self.__getitem__(key))\r\n def __copy__(self):\r\n return Checkpoint(self.chkpt_dir, device=self.device)\r\n def copy(self):\r\n return Checkpoint(self.chkpt_dir, device=self.device)\r\n\r\nfrom transformers import GPTNeoForCausalLM, AutoConfig\r\nconfig = AutoConfig.from_pretrained(model_name)\r\nmodel = GPTNeoForCausalLM.from_pretrained(pretrained_model_name_or_path=None, config=config, state_dict=Checkpoint(\"checkpoint\"))\r\n```\r\n\r\nHaving a more integrated or better specifid way of loading them would be helpful, but I'm not sure what the best place for that would be.\r\n\r\n**Edit: Updated to handle renamed checkpoint folders.**",
"I noticed that there was a typo in the config file linked from the PR text, which caused it to be invalid JSON. It's fixed now.",
"Also ran some evaluations using the [eval harness](https://github.com/EleutherAI/lm-evaluation-harness) on the ported model now:\r\n\r\n| Task | Metric |Value |\r\n|----------|---------------|-----:|\r\n|lambada |ppl |4.1060|\r\n| |ppl_stderr |0.0886|\r\n| |acc |0.6833|\r\n| |acc_stderr |0.0065|\r\n|winogrande|acc |0.6480|\r\n| |acc_stderr |0.0134|\r\n|piqa |acc |0.7541|\r\n| |acc_stderr |0.0100|\r\n| |acc_norm |0.7612|\r\n| |acc_norm_stderr|0.0099|\r\n|hellaswag |acc |0.4895|\r\n| |acc_stderr |0.0050|\r\n| |acc_norm |0.6614|\r\n| |acc_norm_stderr|0.0047|",
"The eval numbers are a little shy of what we have for the Jax model, but close enough that FP rounding could plausibly explain the difference:\r\n\r\nLambada: 3.99 ppl, 0.697 acc\r\nWinogrande: 0.653\r\nPiQA: 0.765\r\nHellaSwag: 0.661",
"It should also be noted that my results were with fp16. It should be easy enough to modify the conversion script to cast to fp32 (just replace `half()` with `float()`), which might give results closer to the original evaluation, but I don't currently have access to hardware which could run the model in fp32 at a reasonable speed to do an evaluation on it.",
"> It should also be noted that my results were with fp16. It should be easy enough to modify the conversion script to cast to fp32 (just replace `half()` with `float()`), which might give results closer to the original evaluation, but I don't currently have access to hardwhere which could run the model in fp32 at a reasonable speed to do an evaluation on it.\r\n\r\nOh yes, I concur. That wasnβt meant as a detraction at all. Iβm not sure if EAI had enough free GPUs, but I can look at getting the evals run at full precision later this week.",
"Just curious - how long before new models are merged to the repo, generally speaking? And how long until it's available in the hosted inference API?",
"Hi @finetuneanon \r\n\r\nAmazing, thanks a lot for porting `GPT-J` so quickly!\r\n\r\nThe changes to local attention look good to me. But would be nice to split the PR into two \r\n1. simplify local attention\r\n2. and add GPT-J in a new model file.\r\n\r\nWhile supporting `GPT-J` in the `GPTNeo` model is totally doable, we would like to avoid that. The overall philosophy is to combine more than one model only if the forward pass is exactly similar or requires some really minor changes. If I understand it correctly, here are the differences between `GPT-J` and `GPTNeo` :\r\n- `GPT-J` uses rotary embeddings\r\n- It scales attention weights\r\n- no bias in the attention projection layer (the `out_proj` layer in attention class)\r\n- does not use layer_norm before the feed forward layer (`mlp`)\r\n- no residual connection between `hidden_states` and `attention_output` , just one residual connection which is added to `attention + mlp(hiddn)`\r\n- uses bias in the output projection layer\r\n- does not ties word embeddings with output layer\r\n\r\nThe current PR handles this using the `config.jax` argument, but itβs a bit confusing, and generally, one config param should not control these many changes. So if we really decide to support this in `GPTNeo` we would probably end-up with different config params like `attention_projection_bias`, `output_bias`, `attention_residual`, `scale_attention`. So itβs cleaner IMO to add a new model for this.\r\n\r\nAlso, `Transformers` isnβt really a modular toolkit for building different models, The goal is to keep every model file responsible for one model so it becomes easier for everyone to understand it and modify it according to their needs. And also makes it easy for us to maintain these different models.\r\ncc @LysandreJik , @sgugger , @patrickvonplaten\r\n\r\nHappy to help in any way to add the model :) ",
"To be quite honest, I think reading a couple of if branches makes the differences between models much clearer than having to compare two completely different model classes with small differences. You mention that transformers is not intended to be a modular framework, so there should be no issue with controlling these changes through a single configuration variable, although perhaps the naming of `jax` is not optimal. I would be open to changing this to e.g. `gptj`. Splitting it up into multiple options would only make sense to actually turn it into a modular framework.\r\n\r\nI would also prefer not splitting the pull request.",
"Let's agree to disagree: this is one of the core principle of the Transformers library, explicitly stated in our [philosophy](https://huggingface.co/transformers/philosophy.html). We've been proceeding like this since the beginning, and while we definitely understand where you're coming from, this is a defining principle of our library which we are not eager to change as it has been validated both by [surveys](https://discuss.huggingface.co/t/transformers-huge-community-feedback/120) and by community feedback.\r\n\r\nUnfortunately, we will have to insist on GPT-J following the same approach as the rest of the models - for philosophy, maintenance and coherence's sake. Let us know if you would like us to take over, we are happy to! Thank you for your understanding.",
"Yes, please take over in that case.",
"> Let's agree to disagree: this is one of the core principle of the Transformers library, explicitly stated in our [philosophy](https://huggingface.co/transformers/philosophy.html). We've been proceeding like this since the beginning, and while we definitely understand where you're coming from, this is a defining principle of our library which we are not eager to change as it has been validated both by [surveys](https://discuss.huggingface.co/t/transformers-huge-community-feedback/120) and by community feedback.\r\n> \r\n> Unfortunately, we will have to insist on GPT-J following the same approach as the rest of the models - for philosophy, maintenance and coherence's sake. Let us know if you would like us to take over, we are happy to! Thank you for your understanding.\r\n\r\nIf the current changes were to be submitted as a new model, instead of a modification to GPT-Neo, would there be any significant further changes to be made?",
"> If the current changes were to be submitted as a new model, instead of a modification to GPT-Neo, would there be any significant further changes to be made?\r\n\r\nThe new modeling file is the main thing, we have a [template to add new models](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) that should take care of everything else.",
"I have opened a PR that attempts to refit this into HF's paradigm. I recommend closing this PR and directing discussion to #12243 12243",
"As discussed above, this PR will be split into two\r\n- GPT-J (which Stella has already started)\r\n- simplifying GPTNeo local attention \r\n\r\nClosing this PR now."
] | 1,623 | 1,623 | 1,623 | NONE | null | # What does this PR do?
This PR mainly adds support for the GPT-J 6B model. A [conversion script](https://gist.github.com/finetuneanon/ee196c6cd16af1de4ca444862414683a) and [config.json](https://gist.github.com/finetuneanon/a55bdb3f5881e361faef0e96e1d41f09) for the slim checkpoint are also available.
It also addresses the local attention issue from #11320 in the same way as PR #11630, and works around an issue where torch.multinomial can select zero-probability tokens when sampling from an fp16 model.
Special thanks to the great folks of the EleutherAI discord, who helped me debug the RoPE implementation and to @kurumuz (NovelAI) who worked on this as well.
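For completeness, a sketch of how the converted single-file checkpoint could then be loaded on this branch (the file names are placeholders, and the `state_dict=` pattern mirrors the one used for split checkpoints in the comments):
```python
import torch
from transformers import GPTNeoForCausalLM, AutoConfig

# "config.json" is the gist linked above; "gpt-j-6b.pt" stands in for the conversion script's output
config = AutoConfig.from_pretrained("config.json")
state_dict = torch.load("gpt-j-6b.pt", map_location="cpu")

model = GPTNeoForCausalLM.from_pretrained(
    pretrained_model_name_or_path=None, config=config, state_dict=state_dict
)
```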
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11320
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12106/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12106",
"html_url": "https://github.com/huggingface/transformers/pull/12106",
"diff_url": "https://github.com/huggingface/transformers/pull/12106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12106.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12105/comments | https://api.github.com/repos/huggingface/transformers/issues/12105/events | https://github.com/huggingface/transformers/issues/12105 | 917,991,927 | MDU6SXNzdWU5MTc5OTE5Mjc= | 12,105 | What is the correct way to pass labels to DetrForSegmentation? | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for noticing, that's a mistake. The masks needs to be torch.FloatTensor of shape (number of bounding boxes in the image, height, width) - with height and width equal to those of the `pixel_values`. \r\n\r\nNote that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes.\r\n\r\nHowever, I've got no less than 5 notebooks coming up that illustrate how to use DETR ;) \r\n\r\nI will fix this docs issue, together with some other small improvements, in a PR. ",
"No worries! I got it working after this. Training is a bit finicky though π
. Looking forward to those notebooks!!"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | The [current documentation](https://huggingface.co/transformers/master/model_doc/detr.html#transformers.DetrForSegmentation.forward) for `DetrModelForSegmentation.forward` says the following about `labels` kwarg:
> The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the **masks a torch.FloatTensor of shape (number of bounding boxes in the image, 4).**
But when I looked at the tests, it seems the shape of `masks` is `torch.rand(self.n_targets, self.min_size, self.max_size)` .
https://github.com/huggingface/transformers/blob/d2753dcbec7123500c1a84a7c2143a79e74df48f/tests/test_modeling_detr.py#L87-L103
---
I'm guessing this is a documentation mixup!
Anyways, it would be super helpful to include a snippet in the DETR docs that shows how to correctly pass masks/other labels + get the loss/loss dict. π
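In the meantime, a rough sketch of what such a snippet could look like, pieced together from the shapes in the linked test file (the dict keys `class_labels` / `boxes` / `masks` and the random tensors are my assumptions, not official docs):
```python
import torch
from transformers import DetrForSegmentation

model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

pixel_values = torch.rand(1, 3, 800, 1066)  # one image
n_boxes = 2                                 # two annotated objects in that image
labels = [{
    "class_labels": torch.tensor([17, 75]),                      # (n_boxes,)
    "boxes": torch.rand(n_boxes, 4),                             # (n_boxes, 4), normalized (cx, cy, w, h)
    "masks": torch.randint(0, 2, (n_boxes, 800, 1066)).float(),  # (n_boxes, height, width)
}]

outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss, outputs.loss_dict)
```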
CC: @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12105/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12104/comments | https://api.github.com/repos/huggingface/transformers/issues/12104/events | https://github.com/huggingface/transformers/issues/12104 | 917,667,984 | MDU6SXNzdWU5MTc2Njc5ODQ= | 12,104 | Issue with mBART50 es-en translation | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Ping ",
"Hi @patrickvonplaten any thing you wanted to check. Sorry for the late response, was a bit tied up",
"@patil-suraj - seems like multiple people have problems with mBART50...should we maybe leave a note in the official docs about it? "
] | 1,623 | 1,633 | null | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): mBART-large-50-many-to-one-nmt
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is: Translation
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
The below notebook can be used to reproduce the results
1. https://colab.research.google.com/drive/1LEY3bI9mS7D-n6rJ70iKq3lN9_DQCQh7?usp=sharing
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I've used this model to translate a lot of Spanish text. But I observed that for some examples it's printing completely random things.
The above example should return something like this `1980 Mount St. Helens eruption`
The current output is `The Committee recommends that the State party take all necessary measures to ensure the full implementation of the present recommendations, inter alia, by transmitting them to the members of the Council of Ministers, the Parliament, the Parliamentary Assembly and the Senate, the Parliamentary Assembly and the National Assembly, for appropriate consideration and further action.`
Tagging @patrickvonplaten, @patil-suraj here. I believe this is not really a code issue, but something intrinsic to the model. Any ideas why this is happening?
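For reference, the minimal generation path I would expect for this checkpoint is sketched below (the checkpoint id and the Spanish example sentence are assumptions on my side, not copied from the notebook):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

ckpt = "facebook/mbart-large-50-many-to-one-mmt"  # assumed checkpoint id
model = MBartForConditionalGeneration.from_pretrained(ckpt)
tokenizer = MBart50TokenizerFast.from_pretrained(ckpt, src_lang="es_XX")

inputs = tokenizer("Erupción del monte Santa Helena de 1980", return_tensors="pt")
generated = model.generate(**inputs)  # many-to-one checkpoints translate into English
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```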
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12104/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12103/comments | https://api.github.com/repos/huggingface/transformers/issues/12103/events | https://github.com/huggingface/transformers/issues/12103 | 917,491,655 | MDU6SXNzdWU5MTc0OTE2NTU= | 12,103 | ViT tensorflow Implementation | {
"login": "elk-cloner",
"id": 5828101,
"node_id": "MDQ6VXNlcjU4MjgxMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elk-cloner",
"html_url": "https://github.com/elk-cloner",
"followers_url": "https://api.github.com/users/elk-cloner/followers",
"following_url": "https://api.github.com/users/elk-cloner/following{/other_user}",
"gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions",
"organizations_url": "https://api.github.com/users/elk-cloner/orgs",
"repos_url": "https://api.github.com/users/elk-cloner/repos",
"events_url": "https://api.github.com/users/elk-cloner/events{/privacy}",
"received_events_url": "https://api.github.com/users/elk-cloner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hi, I've contributed ViT in PyTorch and wanted to also add ViT in Tensorflow, but there's currently a limitation to adding TF models that don't expect `input_ids` as an input (ViT only requires `pixel_values`). Any TF model in HuggingFace currently relies on a `input_processing` function (defined [here](https://github.com/huggingface/transformers/blob/77f4c46b501322e9bffb5416dfbf0397deefd7d8/src/transformers/modeling_tf_utils.py#L315)), and this function needs to be updated to also support models that don't expect input_ids as input.\r\n\r\ncc @Rocketknight1 \r\n\r\nMy current implementation can be found [here](https://github.com/NielsRogge/transformers/blob/modeling_vit_tf_v2/src/transformers/models/vit/modeling_tf_vit.py).",
"> Any TF model in HuggingFace currently relies on a `input_processing` function (defined [here](https://github.com/huggingface/transformers/blob/77f4c46b501322e9bffb5416dfbf0397deefd7d8/src/transformers/modeling_tf_utils.py#L315)), and this function needs to be updated to also support models that don't expect input_ids as input.\r\n\r\nnice work @NielsRogge. Could you answer these two questions, please?\r\n\r\n1. Can't /Should we use something like `ViTFeatureExtractor` that was defined [here](https://github.com/huggingface/transformers/blob/fe3576488ad122b12364c66ef09dee38b3763f5f/src/transformers/models/vit/feature_extraction_vit.py#L31)??\r\n2. What's the problem of current implementation of `input_processing ` ? if we feed `input_ids` tensor of shape `[batch_size, w, h, c]` to it what would be the problems ? ",
"> 1. Can't /Should we use something like `ViTFeatureExtractor` that was defined [here](https://github.com/huggingface/transformers/blob/fe3576488ad122b12364c66ef09dee38b3763f5f/src/transformers/models/vit/feature_extraction_vit.py#L31)??\r\n\r\nIf `TFViTModel` and `TFViTForImageClassification` will be available, you can indeed use `ViTFeatureExtractor` to prepare images for the model (you only need to update the `return_tensors` parameter value to `\"tf\"` instead of `\"pt\"`).\r\n\r\n> 2\\. What's the problem of current implementation of `input_processing ` ? if we feed `input_ids` tensor of shape `[batch_size, w, h, c]` to it what would be the problems ?\r\n\r\nCurrently it only works as follows:\r\n\r\n```\r\ninputs = {\"input_ids\": None, \"pixel_values\": pixel_values}\r\noutputs = model(inputs)\r\n```\r\n\r\n",
"Hey! TF maintainer here - we're definitely aware of the issues with `input_processing`, but we're still working on the ways to fix it without breaking other things! If your model works when passing a null `input_ids`, it's fine to use that for now - you could possibly insert a shim into your `call()` method to avoid the user having to do it themselves?",
"I see a TF version of Wav2Vec2 has just been added, and they overwrote the `input_processing` function with a custom `input_values_processing` function as seen [here](https://github.com/huggingface/transformers/blob/040283170cd559b59b8eb37fe9fe8e99ff7edcbc/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L61). So I might do the same for ViT."
] | 1,623 | 1,623 | null | CONTRIBUTOR | null | # π Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I was reading about ViT in the HuggingFace documentation and noticed there is no TF implementation of it. It would be great to have one in the HuggingFace repo.
## Motivation
I have seen [this](https://keras.io/examples/vision/image_classification_with_vision_transformer/) and think it wouldn't be so hard. We can convert the PyTorch pretrained weights and use them for the TensorFlow model.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. --> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12103/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12102/comments | https://api.github.com/repos/huggingface/transformers/issues/12102/events | https://github.com/huggingface/transformers/pull/12102 | 917,395,730 | MDExOlB1bGxSZXF1ZXN0NjY2OTMzNzQ3 | 12,102 | Appending label2id and id2label to models for inference | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12102",
"html_url": "https://github.com/huggingface/transformers/pull/12102",
"diff_url": "https://github.com/huggingface/transformers/pull/12102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12102.patch",
"merged_at": 1623335105000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/12101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12101/comments | https://api.github.com/repos/huggingface/transformers/issues/12101/events | https://github.com/huggingface/transformers/issues/12101 | 917,142,745 | MDU6SXNzdWU5MTcxNDI3NDU= | 12,101 | GPT2 medium config n_ctx is wrong I guess? | {
"login": "s4sarath",
"id": 10637096,
"node_id": "MDQ6VXNlcjEwNjM3MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/10637096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s4sarath",
"html_url": "https://github.com/s4sarath",
"followers_url": "https://api.github.com/users/s4sarath/followers",
"following_url": "https://api.github.com/users/s4sarath/following{/other_user}",
"gists_url": "https://api.github.com/users/s4sarath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s4sarath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s4sarath/subscriptions",
"organizations_url": "https://api.github.com/users/s4sarath/orgs",
"repos_url": "https://api.github.com/users/s4sarath/repos",
"events_url": "https://api.github.com/users/s4sarath/events{/privacy}",
"received_events_url": "https://api.github.com/users/s4sarath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, 1024 is correct. It refers to the sequence length of the model. ",
"Oh sorry.\r\nThen whats the config parameter of ```intermediate projection after attention```, which is ```4096``` in gpt2-medium.",
"Looking at the [config attributes of GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html#transformers.GPT2Config), there's an attribute called `n_inner` which is defined as \"Dimensionality of the inner feed-forward layers. None will set it to 4 times `n_embd`\". \r\n\r\nApparently, the `n_embd` attribute of the medium-sized GPT-2 model is 1024. So this times 4 equals 4096. ",
"Oh my bad. Thanks. There is too much of inconsistencies between different model configs. "
] | 1,623 | 1,623 | 1,623 | NONE | null | Hi Guys,
For gpt2-medium, ```n_ctx: 4096```, right?
But the config shows ```n_ctx: 1024```. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12101/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12100/comments | https://api.github.com/repos/huggingface/transformers/issues/12100/events | https://github.com/huggingface/transformers/issues/12100 | 917,130,830 | MDU6SXNzdWU5MTcxMzA4MzA= | 12,100 | 'Speech2TextProcessor' has no attribute 'from_pretrained'` | {
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I'm fixing this in #12145 to return a better error."
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: ubuntu
- Python version: 3.7.10
- PyTorch version (GPU?): cpu 1.8.1
- Tensorflow version (GPU?): no tensorflow
- Using GPU in script?: no cpu
Models:
Speech2TextProcessor
## To reproduce
Steps to reproduce the behavior:
1. processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
The error I got:
`AttributeError: type object 'Speech2TextProcessor' has no attribute 'from_pretrained'`
<!-- A clear and concise description of what you would expect to happen. -->
I don't understand why I get this error, because in processing_speech_to_text.py, line 78, "from_pretrained" exists. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12099/comments | https://api.github.com/repos/huggingface/transformers/issues/12099/events | https://github.com/huggingface/transformers/issues/12099 | 917,112,751 | MDU6SXNzdWU5MTcxMTI3NTE= | 12,099 | FillMaskPipeline very slow when provided with a large `targets` | {
"login": "EtaoinWu",
"id": 22369305,
"node_id": "MDQ6VXNlcjIyMzY5MzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/22369305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EtaoinWu",
"html_url": "https://github.com/EtaoinWu",
"followers_url": "https://api.github.com/users/EtaoinWu/followers",
"following_url": "https://api.github.com/users/EtaoinWu/following{/other_user}",
"gists_url": "https://api.github.com/users/EtaoinWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EtaoinWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EtaoinWu/subscriptions",
"organizations_url": "https://api.github.com/users/EtaoinWu/orgs",
"repos_url": "https://api.github.com/users/EtaoinWu/repos",
"events_url": "https://api.github.com/users/EtaoinWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/EtaoinWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you have an example to reproduce the issue ? Benchmarking is sometimes suprising and hardware dependant.\r\n\r\nI can imagine that this is indeed a slowdown as Python - Rust communication is not free.\r\nHowever the omitted part of your comment is error detection, which is important and we need to keep it.\r\n\r\n```python\r\n if len(targets) == 0 or len(targets[0]) == 0:\r\n raise ValueError(\"At least one target must be provided when passed.\")\r\n if isinstance(targets, str):\r\n targets = [targets]\r\n\r\n targets_proc = []\r\n for target in targets:\r\n target_enc = self.tokenizer.tokenize(target)\r\n if len(target_enc) > 1 or target_enc[0] == self.tokenizer.unk_token:\r\n logger.warning(\r\n f\"The specified target token `{target}` does not exist in the model vocabulary. \"\r\n f\"Replacing with `{target_enc[0]}`.\"\r\n )\r\n targets_proc.append(target_enc[0])\r\n target_inds = np.array(self.tokenizer.convert_tokens_to_ids(targets_proc))\r\n```\r\n\r\nI think we can get away with encoding every target at once, then iterating through the whole array to do the error detection.\r\n\r\nHowever, as this is a performance problem, I think we realistically need to test that improving performance on 10K targets, does not reduce performance significantly on 10targets (which is a more common usage).\r\n\r\nCaveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.\r\n\r\nWe don't have benchmarking tests right now, but if a PR goes in I think a test should demonstrate the usage and have a clear comment at leat about this specific decision.",
"An example code can be found [in this colab example](https://colab.research.google.com/gist/EtaoinWu/0cf5b37882bd18bcc554d3da717a3974/fillmaskpipeline-test.ipynb). On the default Google machine that I wrote this notebook on, the version with a `targets` argument slows down significantly (100ish ms to 600ish ms).\r\n\r\n> Caveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.\r\n\r\nThe current behavior of `FillMaskPipeline` is that when a multi-token string is passed, only the first token is used. I doubt anyone would actually need this, because if someone want to choose a token from a subset of the vocabulary to fill into a mask, they usually know the subset exactly. Deliberately passing multi-token strings into `FillMaskPipeline` (and expecting it to tokenize them and drop all-but-first tokens) does not make much sense.\r\n\r\n### Another discovery\r\n\r\nWhen coding my example, I just discovered the bottleneck of the performance problem. When provided with a `targets` argument, `FillMaskPipeline` ignores its `top_k` parameter, which means that it has to output a whole list proportional to `len(targets)`, and that's the bottleneck (at least in my test). The code example above actually respects `top_k` parameter when a `targets` is present, hence much faster when constructing the return value. After the optimization, the code costs 200ish milliseconds.",
"> An example code can be found [in this colab example](https://colab.research.google.com/gist/EtaoinWu/0cf5b37882bd18bcc554d3da717a3974/fillmaskpipeline-test.ipynb). On the default Google machine that I wrote this notebook on, the version with a `targets` argument slows down significantly (100ish ms to 600ish ms).\r\n\r\nThanks, this will help ! \r\n> \r\n> > Caveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.\r\n> \r\n> The current behavior of `FillMaskPipeline` is that when a multi-token string is passed, only the first token is used. I doubt anyone would actually need this, because if someone want to choose a token from a subset of the vocabulary to fill into a mask, they usually know the subset exactly. Deliberately passing multi-token strings into `FillMaskPipeline` (and expecting it to tokenize them and drop all-but-first tokens) does not make much sense.\r\n\r\nAs a maintainer of a live product, I can tell you not everyone is aware of what happens behind a pipeline (and it is exactly why they exist, so we can abstract away all nitty gritty details of transformers). So it will happen that some users will try out those examples and be surprised at slowness. \r\nIt's something that `pipelines` should try to address if possible. \r\n\r\n> ### Another discovery\r\n> \r\n> When coding my example, I just discovered the bottleneck of the performance problem. When provided with a `targets` argument, `FillMaskPipeline` ignores its `top_k` parameter, which means that it has to output a whole list proportional to `len(targets)`, and that's the bottleneck (at least in my test). The code example above actually respects `top_k` parameter when a `targets` is present, hence much faster when constructing the return value. After the optimization, the code costs 200ish milliseconds.\r\n\r\nOk, I think I remember the `targets` being added and the decision was that if `top_k` > `len(targets)` we were not obliged of honoring `top_k` because it wouldn't make any sense. `top_k` < `len(targets)` should be honored though.\r\n\r\n",
"I was able to reproduce and optimize away most of the performance, now any example should run at roughly the same speed.\r\n\r\nSlowdown will happen when you miss the vocabulary, but the warnings should help users figure it out.",
"Thanks a lot. As a background, I found the issue when reproducing the following paper:\r\n\r\n> Deng, Liming, et al. \"An Iterative Polishing Framework Based on Quality Aware Masked Language Model for Chinese Poetry Generation.\" _Proceedings of the AAAI Conference on Artificial Intelligence_. Vol. 34. No. 05. 2020.\r\n\r\nwhich involves calling `FillMaskPipeline` iteratively 10 times at most for each API call, which depending on the input, may or may not have the `targets` parameter. The time difference in the two types of API calls made me find this issue."
] | 1,623 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @Narsil
## Information
The model I am using: `ethanyt/guwenbert-base`, with a `RoBERTa` model and a `BertTokenizerFast` tokenizer.
## To reproduce
Steps to reproduce the behavior:
1. Initialize a `fill-mask` pipeline with the model and the tokenizer mentioned above
2. Call it with any sentence and a large `targets` list (with a length of ~10k single words); a minimal sketch of such a call is shown below
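A minimal sketch of such a call (the model name is the one from this issue; the `targets` list is generated only to illustrate its size):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ethanyt/guwenbert-base")
# ~10k single-character candidates, generated purely for illustration
targets = [chr(c) for c in range(0x4E00, 0x4E00 + 10000)]
text = "ε­ζ°" + fill_mask.tokenizer.mask_token  # any sentence containing the mask token
predictions = fill_mask(text, targets=targets)   # noticeably slower than fill_mask(text)
```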
## Problem
The call would be much slower than a similar call without a `targets` argument. A call without a `targets` argument costs ~0.1s, while a call with a `targets` argument costs ~0.3s.
The following code is present in `src/transformers/pipelines/fill_mask.py`:
```python
class FillMaskPipeline(Pipeline):
# ...
def __call__(self, *args, targets=None, top_k: Optional[int] = None, **kwargs):
# ...
if targets is not None:
# ...
targets_proc = []
for target in targets:
target_enc = self.tokenizer.tokenize(target)
# ...
targets_proc.append(target_enc[0])
```
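For comparison, a rough sketch of encoding all targets in one batched tokenizer call, reusing the names from the excerpt above (illustrative only; error handling is trimmed and this is not the fix that was eventually merged):
```python
# Hypothetical batched variant of the loop above (illustrative only):
encodings = self.tokenizer(targets, add_special_tokens=False)["input_ids"]
targets_proc = []
for target, ids in zip(targets, encodings):
    if len(ids) > 1 or ids[0] == self.tokenizer.unk_token_id:
        logger.warning(f"The specified target token `{target}` is not a single known token.")
    targets_proc.append(ids[0])
target_inds = np.array(targets_proc)
```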
The `__call__` method in the first snippet processes `targets` one string at a time instead of passing the whole list to the tokenizer, so it never benefits from the batch-processing optimization of `TokenizerFast`s, hence the slow speed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12099/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12098/comments | https://api.github.com/repos/huggingface/transformers/issues/12098/events | https://github.com/huggingface/transformers/issues/12098 | 917,007,148 | MDU6SXNzdWU5MTcwMDcxNDg= | 12,098 | π New model addition - GPT-J-6B | {
"login": "Xirider",
"id": 37597043,
"node_id": "MDQ6VXNlcjM3NTk3MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/37597043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xirider",
"html_url": "https://github.com/Xirider",
"followers_url": "https://api.github.com/users/Xirider/followers",
"following_url": "https://api.github.com/users/Xirider/following{/other_user}",
"gists_url": "https://api.github.com/users/Xirider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xirider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xirider/subscriptions",
"organizations_url": "https://api.github.com/users/Xirider/orgs",
"repos_url": "https://api.github.com/users/Xirider/repos",
"events_url": "https://api.github.com/users/Xirider/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xirider/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello @patrickvonplaten! would you be able to give an estimation for the timeline of the implementation of the this model in huggingface? ",
"I have a PR adding support for this model here: #12098",
"> I have a PR adding support for this model here: #12098\r\n\r\nYou probably wanted to link this PR: https://github.com/huggingface/transformers/pull/12106",
"Yeah, copied the wrong thing somehow.",
"@finetuneanon great! Do you know when it will be ready to use from the transformer library? Thnx for the work. ",
"Depends on when it will be merged. Until then you can install my branch like this:\r\n\r\n```\r\npip install git+https://github.com/finetuneanon/transformers@gpt-j\r\n```\r\n\r\nConvert the weights with the conversion script linked from the PR.",
"@finetuneanon I did pip install the transformers@gpt-j and I managed to convert the weights through the script you referenced but only thing I'm now struggling with is making the config file. I uploaded the gpt-j-6b.json file to colab but I don't how to make the config variable via AutoConfig class(don't know if that is how it is made). So If you could let me know how to make the config file, I would appreciate it a lot. \r\nthis [colab](https://colab.research.google.com/drive/1xl5tRYTiVISn6FMfhgyB70LZ-Biwj43E#scrollTo=QI_zE5QA8ycF) file containts all the code. ",
"Rename it into config.json, put it into a folder and you should be able to `AutoConfig.from_pretrained(\"whatever-folder\")`"
] | 1,623 | 1,630 | 1,630 | NONE | null | # π New model addition - GPT-J-6B
## Model description
The GPT-J-6B model (GPT-NEO model in Jax with 6B parameters trained on the Pile)
Repo: https://github.com/kingoflolz/mesh-transformer-jax
Weights:
[Slim weights (bf16 weights only, for inference, 9GB)](https://the-eye.eu/public/AI/GPT-J-6B/step_383500_slim.tar.zstd)
[Full weights (including optimizer params, 61GB)](https://the-eye.eu/public/AI/GPT-J-6B/step_383500.tar.zstd)
## Open source status
* [x] the model implementation is available: (give details)
* [x] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12098/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12098/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12097/comments | https://api.github.com/repos/huggingface/transformers/issues/12097/events | https://github.com/huggingface/transformers/pull/12097 | 917,004,090 | MDExOlB1bGxSZXF1ZXN0NjY2NTk4OTA4 | 12,097 | Add from_pretrained to dummy timm objects | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failure of the templates came from the failure of `make quality` on master, fixed in a commit, so this is good to merge!",
"Thanks a lot @sgugger!"
] | 1,623 | 1,623 | 1,623 | MEMBER | null | Closes https://github.com/huggingface/transformers/issues/12091
cc @NielsRogge
@sgugger Am I missing something relative to specifying that these dummy items should be generated with the `from_pretrained` method? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12097/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12097",
"html_url": "https://github.com/huggingface/transformers/pull/12097",
"diff_url": "https://github.com/huggingface/transformers/pull/12097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12097.patch",
"merged_at": 1623428830000
} |
https://api.github.com/repos/huggingface/transformers/issues/12096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12096/comments | https://api.github.com/repos/huggingface/transformers/issues/12096/events | https://github.com/huggingface/transformers/issues/12096 | 916,944,730 | MDU6SXNzdWU5MTY5NDQ3MzA= | 12,096 | DetrFeatureExtractor post_process not rescaling bboxes as expected | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"False alarm...img sizes need to be flipped.\r\n\r\n```\r\n# ...\r\n\r\nimg_sizes = torch.tensor([tuple(reversed(im.size)) for im in images])\r\n\r\n# ...\r\n```"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:master
- Platform:Google Colab
- Python version:3.7
- PyTorch version (GPU?):1.8.1
- Tensorflow version (GPU?):N/A
- Using GPU in script?:N/A
- Using distributed or parallel set-up in script?:N/A
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@NielsRogge
## Information
Model I am using (Bert, XLNet ...): `DetrForObjectDetection`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Colab Below
## To reproduce
Steps to reproduce the behavior:
<a href="https://colab.research.google.com/gist/nateraw/b844f1f5118abd05c09a077fdec75dd3/detr-resize-issue.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect `feature_extractor.post_process` to rescale the bounding boxes so they match the input images. Right now they seem to be scaled differently.
For example - the following should create `processed_outputs` that contain bbox values that are ready to be plotted along with the original image.
```python
from typing import List
import PIL
import torch
from transformers import DetrFeatureExtractor, DetrForObjectDetection
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
images: List[PIL.Image.Image] = ...  # Some list of PIL images
inputs = feature_extractor(images, return_tensors='pt')
outputs = model(**inputs)
img_sizes = torch.tensor([im.size for im in images])
processed_outputs = feature_extractor.post_process(outputs, img_sizes)
```
:thought_balloon: - One thought I had was that I'm not sure if I'm preparing the `img_sizes` tensor correctly above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12095/comments | https://api.github.com/repos/huggingface/transformers/issues/12095/events | https://github.com/huggingface/transformers/issues/12095 | 916,897,537 | MDU6SXNzdWU5MTY4OTc1Mzc= | 12,095 | Continuous training on Fine-tuned Model | {
"login": "Noskid1999",
"id": 34827312,
"node_id": "MDQ6VXNlcjM0ODI3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/34827312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Noskid1999",
"html_url": "https://github.com/Noskid1999",
"followers_url": "https://api.github.com/users/Noskid1999/followers",
"following_url": "https://api.github.com/users/Noskid1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Noskid1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Noskid1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Noskid1999/subscriptions",
"organizations_url": "https://api.github.com/users/Noskid1999/orgs",
"repos_url": "https://api.github.com/users/Noskid1999/repos",
"events_url": "https://api.github.com/users/Noskid1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Noskid1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"Sure, thank you."
] | 1,623 | 1,623 | 1,623 | NONE | null | # π Feature request
How can I continue training on a Fine-tuned Model?
I have a fine-tuned model trained on OpenSLR data, and I want to keep training it as I gain more transcribed audio data over time. Can I treat the fine-tuned model as a checkpoint and resume training from it?
## Motivation
I am aiming to build a model for the Nepali language. I have a way to collect data over time, and the collection is continuous, so I want a way to keep training the model as new data arrives. A rough sketch of what I have in mind follows.
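Something like this is what I imagine (the model class, the paths, and `new_dataset` are placeholders, since I'm not sure this is the intended way):
```python
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC

# Placeholders: the directory of my previously fine-tuned model and a dataset of newly transcribed audio.
model = Wav2Vec2ForCTC.from_pretrained("path/to/my-finetuned-nepali-model")
args = TrainingArguments(output_dir="path/to/continued-run", num_train_epochs=3)
trainer = Trainer(model=model, args=args, train_dataset=new_dataset)  # new_dataset: placeholder
trainer.train()
```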
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12095/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12094/comments | https://api.github.com/repos/huggingface/transformers/issues/12094/events | https://github.com/huggingface/transformers/issues/12094 | 916,807,948 | MDU6SXNzdWU5MTY4MDc5NDg= | 12,094 | Create a torchscript version of Tokenizer in Bert | {
"login": "soheilesm",
"id": 29102608,
"node_id": "MDQ6VXNlcjI5MTAyNjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/29102608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soheilesm",
"html_url": "https://github.com/soheilesm",
"followers_url": "https://api.github.com/users/soheilesm/followers",
"following_url": "https://api.github.com/users/soheilesm/following{/other_user}",
"gists_url": "https://api.github.com/users/soheilesm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soheilesm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soheilesm/subscriptions",
"organizations_url": "https://api.github.com/users/soheilesm/orgs",
"repos_url": "https://api.github.com/users/soheilesm/repos",
"events_url": "https://api.github.com/users/soheilesm/events{/privacy}",
"received_events_url": "https://api.github.com/users/soheilesm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I think you're mistaking tokenizers for models. The two are part of the NLP pipeline, but they're very different. The tokenizer prepares the inputs for the model - but it isn't a PyTorch module. It's either plain Python code, or a rust object with a Python wrapper (like it is the case here).\r\n\r\nSince it's not a torch module - it doesn't make sense for it to have the `eval` method. Same for your second question, a tokenizer cannot be traced as it's not a torch module.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | Hi,
Not sure if a feature request is a proper flag for the below request:
I want to create an executable version of Tokenizer for Bert - Below is a small code piece:
```
from transformers import AutoTokenizer, AutoModel
import torch
sentences = ['This framework generates tokens for each input sentence']
tokenizer_model = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2", torchscript=True)
encoded_input = tokenizer_model(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# !!! complains that 'tokenizer_model' doesn't have eval()
tokenizer_model.eval();
# !!! tokenizer_model takes a list of sentences as input; how should I provide tensorial dummy inputs?
traced_tokenizer_model = torch.jit.trace(tokenizer_model, dummy_inputs)
torch.jit.save(traced_tokenizer_model, "traced_tokenize_bert.pt")
```
My first problem is that `tokenizer_model` doesn't have `eval()`, so how can I follow the guideline for creating the traced models?
My second problem is that the `tokenizer_model` takes a list of strings as input. How am I supposed to provide dummy inputs in tensor form to create the `traced_tokenizer_model`?
I have followed the instructions on your page for creating torchscripts but do not know how I can create one for the Tokenizer module above.
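As far as I understand, tracing just the model part with already-tokenized dummy inputs would look roughly like the sketch below (based on the torchscript docs and reusing the variables from my snippet above; I have not verified it). What I am missing is an equivalent for the tokenizer itself:
```python
model = AutoModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2", torchscript=True)
model.eval()

# The tokenizer stays outside the traced graph; only its tensor outputs serve as example inputs.
dummy = tokenizer_model(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
traced_model = torch.jit.trace(model, (dummy['input_ids'], dummy['attention_mask']))
torch.jit.save(traced_model, "traced_bert_model.pt")
```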
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12093/comments | https://api.github.com/repos/huggingface/transformers/issues/12093/events | https://github.com/huggingface/transformers/issues/12093 | 916,801,968 | MDU6SXNzdWU5MTY4MDE5Njg= | 12,093 | Speedup batch matmul in pytorch | {
"login": "AngThanos",
"id": 41022754,
"node_id": "MDQ6VXNlcjQxMDIyNzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/41022754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngThanos",
"html_url": "https://github.com/AngThanos",
"followers_url": "https://api.github.com/users/AngThanos/followers",
"following_url": "https://api.github.com/users/AngThanos/following{/other_user}",
"gists_url": "https://api.github.com/users/AngThanos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngThanos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngThanos/subscriptions",
"organizations_url": "https://api.github.com/users/AngThanos/orgs",
"repos_url": "https://api.github.com/users/AngThanos/repos",
"events_url": "https://api.github.com/users/AngThanos/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngThanos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"I've found the solution by using deepspeed.\n\nHowever, I encountered the error and I opened an issue here. Please help me\nwith this issue. Thanks.\n\nhttps://github.com/microsoft/DeepSpeed/issues/1153\n\nOn Thu, Jun 10, 2021 at 2:04 PM Lysandre Debut ***@***.***>\nwrote:\n\n> Hello, thanks for opening an issue! We try to keep the github issues for\n> bugs/feature requests.\n> Could you ask your question on the forum <https://discuss.huggingface.co>\n> instead?\n>\n> Thanks!\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12093#issuecomment-858369696>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJY7KIWLCP5AOY7CA3JI6ZDTSBPY3ANCNFSM46ND2SZQ>\n> .\n>\n"
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Hi there,
I'm trying to combine the Sparse Transformer attention pattern with the Vision Transformer to speed up its time-consuming training and inference.
However, my code, shown below, is really slow. Can someone help me point out where I went wrong?
Thanks.
```python
import torch
import math

query_layer = torch.randn(32, 12, 197, 64)
key_layer = torch.randn(32, 12, 197, 64)
key_layer_transpose = key_layer.transpose(-1, -2)

dim0 = query_layer.shape[0]
dim1 = query_layer.shape[1]
dim2 = query_layer.shape[2]
dim3 = query_layer.shape[3]
print(dim0, dim1, dim2, dim3)

# original transformer attention score calculation
attention_scores = torch.matmul(query_layer, key_layer_transpose)
print(attention_scores)

# my modification based on the Sparse Transformer pattern, meant to speed up training
# (but it is actually at least 20x slower than the original version above)
N = math.sqrt(dim3)
# note: query_layer/key_layer live on the CPU while this tensor is on cuda:0,
# so every assignment below also pays a host-to-device copy
attention_scores = torch.zeros(dim0, dim1, dim2, dim2, device='cuda:0')
for i_dim0 in range(dim0):
    for i_dim1 in range(dim1):
        for i in range(dim2):
            for j in range(dim2):
                if (i == j) or ((i - j) % N == 0 and i - j > 0):
                    attention_scores[i_dim0, i_dim1, i, j] = torch.matmul(
                        query_layer[i_dim0, i_dim1, i, :],
                        key_layer_transpose[i_dim0, i_dim1, :, j],
                    )
attention_scores.shape
print(attention_scores)
```
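For reference, the same sparsity pattern can also be expressed as a boolean mask applied after a single batched matmul; a rough sketch (illustrative only, it still computes the dense product first, and it reuses the tensors defined above):
```python
# Reproduce the (i == j) or ((i - j) % N == 0 and i - j > 0) pattern without Python loops.
i = torch.arange(dim2).unsqueeze(1)   # (197, 1)
j = torch.arange(dim2).unsqueeze(0)   # (1, 197)
diff = i - j
mask = (diff == 0) | ((diff > 0) & (diff % int(N) == 0))        # (197, 197) boolean pattern
dense_scores = torch.matmul(query_layer, key_layer_transpose)   # one batched matmul: (32, 12, 197, 197)
sparse_scores = dense_scores.masked_fill(~mask, 0.0)            # mask broadcasts over batch and head dims
```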
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12092/comments | https://api.github.com/repos/huggingface/transformers/issues/12092/events | https://github.com/huggingface/transformers/issues/12092 | 916,735,993 | MDU6SXNzdWU5MTY3MzU5OTM= | 12,092 | Replicating PEGASUS results on a benchmark dataset | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | I'm trying to replicate the PEGASUS results on Reddit-TIFU dataset, but the scores I'm getting are a bit far from what has been reported in the main paper. I'm using the same test set as the one authors used in the main paper (80-10-10 splits based on `TensorflowDataset` according to their code-base). Would anyone have had similar experience of working with PEGASUS on either of reported datasets? Although I'm looking to replicate Reddit-TIFU results, but that would be also good to see if anyone could get the results replicated on either of the experimental datasets.
It has to be mentioned that I'm using the finetuned checkpoint on the Reddit-TIFU dataset: `google/pegasus-reddit_tifu` without further fine-tuning (actually I don't need that) using the following script; `pegasus.sh`
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4 python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path google/pegasus-reddit_tifu \
--do_predict \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/val.json \
--test_file $DS_BASE_DIR/test.json \
--output_dir /home/code-base/user_space/saved_models/pegasus/ \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 \
--overwrite_output_dir \
--predict_with_generate \
--text_column text \
--summary_column summary
```
The scores I'm achieving: `do_predict` output:
> ***** predict metrics *****
> predict_gen_len = 40.294
> predict_loss = 3.9969
> **predict_rouge1 = 27.13
> predict_rouge2 = 8.38
> predict_rougeL = 20.68**
> predict_samples = 4214
However, the (best) reported scores are:
> **predict_rouge1 = 26.63
> predict_rouge2 = 9.01
> predict_rougeL = 21.60**
Even, assuming that `google/pegasus-reddit_tifu`'s pretraining is [improved](https://huggingface.co/google/pegasus-reddit_tifu) (Mixed & Stochastic), I can't reproduce the reported results on Reddit-TIFU, which are: R-1: 27.99/ R-2: 9.81/ R-L: 22.94
## Environment info
- `transformers` version: 4.7.0 dev
- Platform: Linux Ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): --
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, I'm using four GPUs for prediction.
### Who can help
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @sshleifer, @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): PEGASUS
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name): Reddit-TIFU
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `bash pegasus.sh` _bash script is posted above_
## Expected behavior
I expect to be able to reproduce the official results reported in the main PEGASUS paper on the Reddit-TIFU dataset; however, I'm getting a higher ROUGE-1 score but lower ROUGE-2 and ROUGE-L scores. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12092/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12092/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12091/comments | https://api.github.com/repos/huggingface/transformers/issues/12091/events | https://github.com/huggingface/transformers/issues/12091 | 916,713,256 | MDU6SXNzdWU5MTY3MTMyNTY= | 12,091 | Provide more useful error message in Detr from_pretrained when timm not installed | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for raising an issue! Fixing this in #12097 ",
"Thanks for already testing out the model! Highly appreciate your feedback. If anything related to the model/docs can be improved, feel free to reach out. \r\n\r\n"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?):1.8.1+cu101
- Tensorflow version (GPU?):N/A
- Using GPU in script?:N/A
- Using distributed or parallel set-up in script?:N/A
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
[Here's a colab notebook.](https://colab.research.google.com/drive/1b8RVCARcZU8kBFywRID8ZLPAYwhcd10p?usp=sharing)
1. install latest transformers from master w/o installing `timm`.
2. Try to init any `DetrModel` `from_pretrained`, and you'll see you get a misleading error
```python
from transformers import DetrForObjectDetection
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
```
Error thrown:
```bash
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-175d7bae5f8e> in <module>()
1 from transformers import DetrForObjectDetection
2
----> 3 model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
AttributeError: type object 'DetrForObjectDetection' has no attribute 'from_pretrained'
```
3. Try the same w/ `timm` installed, and see that it works. A sketch of the kind of check that would give a clearer message follows below.
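(Purely illustrative, not the actual change; just the sort of guard that would fail loudly when `timm` is missing:)
```python
# Hypothetical check, only meant to illustrate the expected behavior described further down.
import importlib.util

if importlib.util.find_spec("timm") is None:
    raise ImportError(
        "DetrForObjectDetection requires the `timm` library. Install it with `pip install timm`."
    )
```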
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Show a message informing the user that they must have `timm` installed to use `from_pretrained` with Detr models, instead of just telling them there is no attribute `from_pretrained`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12091/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12090/comments | https://api.github.com/repos/huggingface/transformers/issues/12090/events | https://github.com/huggingface/transformers/issues/12090 | 916,592,024 | MDU6SXNzdWU5MTY1OTIwMjQ= | 12,090 | Checkpoint detected info log in run_clm.py | {
"login": "kishorninawe",
"id": 57661019,
"node_id": "MDQ6VXNlcjU3NjYxMDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/57661019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishorninawe",
"html_url": "https://github.com/kishorninawe",
"followers_url": "https://api.github.com/users/kishorninawe/followers",
"following_url": "https://api.github.com/users/kishorninawe/following{/other_user}",
"gists_url": "https://api.github.com/users/kishorninawe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishorninawe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishorninawe/subscriptions",
"organizations_url": "https://api.github.com/users/kishorninawe/orgs",
"repos_url": "https://api.github.com/users/kishorninawe/repos",
"events_url": "https://api.github.com/users/kishorninawe/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishorninawe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging this! It should be fixed by the PR mentioned above."
] | 1,623 | 1,623 | 1,623 | NONE | null | I think the `# Setup logging` block should be placed above the `# Detecting last checkpoint` block, so that the info message `Checkpoint detected, resuming training at...` is actually shown in the PyTorch language-modeling example `run_clm.py`.
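A rough sketch of the ordering I mean (illustrative only, not a patch; the output directory is a placeholder):
```python
import logging
import os

from transformers.trainer_utils import get_last_checkpoint

# 1) Setup logging first ...
logging.basicConfig(format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", level=logging.INFO)
logger = logging.getLogger(__name__)

# 2) ... then detect the last checkpoint, so this message is actually visible to the user.
output_dir = "my_output_dir"  # placeholder for training_args.output_dir
last_checkpoint = get_last_checkpoint(output_dir) if os.path.isdir(output_dir) else None
if last_checkpoint is not None:
    logger.info(f"Checkpoint detected, resuming training at {last_checkpoint}.")
```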
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12089/comments | https://api.github.com/repos/huggingface/transformers/issues/12089/events | https://github.com/huggingface/transformers/pull/12089 | 916,589,393 | MDExOlB1bGxSZXF1ZXN0NjY2MjQxOTcx | 12,089 | [Wav2Vec2ForPretraining] Correct checkpoints wav2vec2 & fix tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Correct weights have been uploaded so change the tests accordingly. Also some minor fixes are added.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12089",
"html_url": "https://github.com/huggingface/transformers/pull/12089",
"diff_url": "https://github.com/huggingface/transformers/pull/12089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12089.patch",
"merged_at": 1623267719000
} |
https://api.github.com/repos/huggingface/transformers/issues/12088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12088/comments | https://api.github.com/repos/huggingface/transformers/issues/12088/events | https://github.com/huggingface/transformers/pull/12088 | 916,443,451 | MDExOlB1bGxSZXF1ZXN0NjY2MTA4NjIz | 12,088 | [versions] rm require_version_examples | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | As explained in https://github.com/huggingface/transformers/issues/12086 `require_version_examples` wrapper is no longer useful since examples' requirements are now scattered across multiple files, so removing it as it can't be used because of that.
Fixes: https://github.com/huggingface/transformers/issues/12086
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12088/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12088",
"html_url": "https://github.com/huggingface/transformers/pull/12088",
"diff_url": "https://github.com/huggingface/transformers/pull/12088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12088.patch",
"merged_at": 1623261773000
} |
https://api.github.com/repos/huggingface/transformers/issues/12087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12087/comments | https://api.github.com/repos/huggingface/transformers/issues/12087/events | https://github.com/huggingface/transformers/pull/12087 | 916,435,522 | MDExOlB1bGxSZXF1ZXN0NjY2MTAxNzQ3 | 12,087 | [examples/flax] pass decay_mask fn to optimizer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | Fixes Typo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12087",
"html_url": "https://github.com/huggingface/transformers/pull/12087",
"diff_url": "https://github.com/huggingface/transformers/pull/12087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12087.patch",
"merged_at": 1623260967000
} |
https://api.github.com/repos/huggingface/transformers/issues/12086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12086/comments | https://api.github.com/repos/huggingface/transformers/issues/12086/events | https://github.com/huggingface/transformers/issues/12086 | 916,421,610 | MDU6SXNzdWU5MTY0MjE2MTA= | 12,086 | examples requirements isn't in sync with `require_version_examples` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, that works."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | We have `require_version_examples`
https://github.com/huggingface/transformers/blob/b1a8aa94f0a2ccea7c68b79066141aa822b96e42/src/transformers/utils/versions.py#L123-L126
but it looks like it wasn't updated in the last reshuffle and requirements got split and it suggests incorrect solution.
I tried to offer to use it here: https://github.com/huggingface/transformers/pull/11927
but since requirements are now scattered over multiple files we probably should remove it and its usage in legacy scripts, since it gives wrong info where it's currently still used.
It's just one usage in several legacy scripts:
```
require_version_examples("pytorch_lightning>=1.0.4")
```
so we can just replace it with:
```
require_version("pytorch_lightning>=1.0.4")
```
which would do the trick.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12086/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12085/comments | https://api.github.com/repos/huggingface/transformers/issues/12085/events | https://github.com/huggingface/transformers/pull/12085 | 916,224,640 | MDExOlB1bGxSZXF1ZXN0NjY1OTI2NTM2 | 12,085 | PyTorch MLM - Dummy Script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12085",
"html_url": "https://github.com/huggingface/transformers/pull/12085",
"diff_url": "https://github.com/huggingface/transformers/pull/12085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12085.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12084/comments | https://api.github.com/repos/huggingface/transformers/issues/12084/events | https://github.com/huggingface/transformers/issues/12084 | 916,210,493 | MDU6SXNzdWU5MTYyMTA0OTM= | 12,084 | Memory Efficient FP 16 Training | {
"login": "rajgar114",
"id": 29262332,
"node_id": "MDQ6VXNlcjI5MjYyMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/29262332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajgar114",
"html_url": "https://github.com/rajgar114",
"followers_url": "https://api.github.com/users/rajgar114/followers",
"following_url": "https://api.github.com/users/rajgar114/following{/other_user}",
"gists_url": "https://api.github.com/users/rajgar114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajgar114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajgar114/subscriptions",
"organizations_url": "https://api.github.com/users/rajgar114/orgs",
"repos_url": "https://api.github.com/users/rajgar114/repos",
"events_url": "https://api.github.com/users/rajgar114/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajgar114/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is \"memory efficient\" fp16 different to the fp16 available in the training scripts?",
"Yes, both are different.\r\nhttps://github.com/pytorch/fairseq/issues/2907",
"Thanks for the link! cc @sgugger @stas00 ",
"I think you must be referring to this comment https://github.com/pytorch/fairseq/issues/2907#issuecomment-729722676\r\n\r\n> --memory-efficient-fp16 gets rid of the FP32 model copy and only maintains FP32 momentum in the optimizer. Thus you'll see 0.5x memory usage from the model weights, 0.5x memory usage in the forward/backward, and 1.0x memory usage in the optimizer (relative to FP32).\r\n\r\ncorrect?\r\n\r\nI think this is the implementation: https://github.com/pytorch/fairseq/blob/f8a7c93440cd925f70979a6082c18f830b39e44b/fairseq/optim/fp16_optimizer.py#L456\r\n\r\nAppears to be added 2 years ago. And you quoted an old paper from 2019. Do you think it's actually something that's worth investigating? Somehow I'd expect for it to adopted by other projects if it were to work great, so it'd be good to ask someone experienced with it whether it's actually good.\r\n",
"Not sure how to evaluate the effectiveness this proposal, it would be helpful to have some case studies that show the actual improvements.\r\n\r\nI asked around and someone reported that someone mentioned this was useful for certain models, but it'd help to know which and how they were trained so that there is an actual proven setup to work with. \r\n\r\nI found pytext included a variation of it here: https://github.com/facebookresearch/pytext/commit/c6d13acbafc856fdc0291bf6608d6f318b6690d2, but I can't find any other references via google, which is not very encouraging. But we don't know whether other implementations use the same name.\r\n\r\nBut also let's ask this, @rajgar114, would you like to try to work on this and submit a PR when you have something working?",
"`I think you must be referring to this comment pytorch/fairseq#2907 (comment) ?`\r\n\r\nYes, @stas00 I was referring to this comment [pytorch/fairseq#2907](https://github.com/pytorch/fairseq/issues/2907#issuecomment-729722676) only.\r\n\r\n`Appears to be added 2 years ago.`\r\n\r\nNo doubt the paper is quite old but I have found some instances that memory efficient fp16 training worked for people having low end GPU's. Here is an example:\r\nhttps://bleepcoder.com/fairseq/551167214/oom-while-trying-to-train-bart \r\n\r\n`Do you think it's actually something that's worth investigating?`\r\n\r\nI am not completely sure that how much impact it can cause on the reduction in GPU memory consumption. We should definitely ask some experienced guys and also try to compare and analyze the results with & without using memory efficient fp16 version. \r\n\r\n`Would you like to try to work on this and submit a PR when you have something working?`\r\n\r\n@stas00 Thanks for giving this wonderful opportunity. I would love to work on open source projects. But I can't because of my busy schedule, I would not be able to spend time on this project. Sorry for that. ",
"Thank you for the feedback, @rajgar114 and letting us know that you'd love to work on that but reality won't allow that at the moment.\r\n\r\nI have added it to https://github.com/huggingface/transformers/issues/12126 so it won't get lost and hopefully it'd find a champion.\r\n\r\nFrom your part what would help is to find specific models/setups where it has been found to be converging well despite the limitations it imposed on itself. So that whoever works on this will have a good chance of succeeding. Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I do not know exactly if my case fits this issue, but I have trained seq2seq model with fairseq (v0.10.1) and current transformer 4.9. In both cases I have used encoder-decoder and BART architecture and saw that Fariseq allows me to use a greater batch size 120- 160 (sentences) around 5000 tokens. Transformers library with deepspeed integration handles max 48(sentences).\r\nI have used 3x Geforce 3090 with 24 GB ram. In both cases model size ~70M parameters (6 layers for encoder and decoder, hidden_size=512, ff=2048)\r\nConcludes, fairseq training is almost 3 times faster (measured by the number of tokens seen during training in a fixed time budget).\r\n",
"Have you tried using activation checkpointing? That should save a lot of memory and enable much larger batch sizes.\r\n\r\nAlso it might be good to try both Deepspeed zero-2 and zero-3 stages - I don't know which one you were optimizing with."
] | 1,623 | 1,630 | 1,626 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Fairseq uses memory-efficient FP16 training, as explained in https://arxiv.org/pdf/1904.01038.pdf.
## Motivation
Generally the model requires high-end GPUs to fine-tune on datasets with longer sequences. Using memory-efficient FP16 we can reduce the need for high-end GPUs, and thus models can be fine-tuned without OOM problems.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
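A rough back-of-envelope sketch of where the savings described in the motivation come from; the byte counts follow the fairseq description quoted in the comments (FP16 weights, FP32 Adam statistics, no FP32 master copy), and the model size is an arbitrary assumption for illustration:
```python
# Per-parameter memory for weights + Adam statistics only (gradients and
# activations are ignored); all numbers here are illustrative assumptions.
n_params = 350_000_000  # assumed model size

# regular mixed precision: FP32 master weights + FP16 working copy + FP32 Adam m/v
regular_fp16 = n_params * (4 + 2 + 4 + 4)

# "memory-efficient" FP16: FP16 weights only + FP32 Adam m/v
memory_efficient_fp16 = n_params * (2 + 4 + 4)

print(f"regular fp16:          {regular_fp16 / 2**30:.1f} GiB")
print(f"memory-efficient fp16: {memory_efficient_fp16 / 2**30:.1f} GiB")
```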
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12083/comments | https://api.github.com/repos/huggingface/transformers/issues/12083/events | https://github.com/huggingface/transformers/pull/12083 | 916,166,382 | MDExOlB1bGxSZXF1ZXN0NjY1ODc3NzA4 | 12,083 | Add text_column_name and label_column_name to run_ner and run_ner_no_trainer args | {
"login": "kumapo",
"id": 70637,
"node_id": "MDQ6VXNlcjcwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumapo",
"html_url": "https://github.com/kumapo",
"followers_url": "https://api.github.com/users/kumapo/followers",
"following_url": "https://api.github.com/users/kumapo/following{/other_user}",
"gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumapo/subscriptions",
"organizations_url": "https://api.github.com/users/kumapo/orgs",
"repos_url": "https://api.github.com/users/kumapo/repos",
"events_url": "https://api.github.com/users/kumapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumapo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger thank you for your suggestions! \r\nI've pushed a commit based on it.",
"I'm appreciated for your fix too!"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
It would be nice to be able to specify which columns hold the `text` and `label` data for run_ner via its arguments.
This is especially useful when training on a non-CSV (e.g. JSON) dataset, because the `text` and `label` columns are otherwise determined by column order when the default column names are missing.
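For illustration, a hypothetical invocation with the new arguments might look like this (the file and column names here are made up, not taken from the PR):
```bash
python run_ner.py \
  --model_name_or_path bert-base-cased \
  --train_file train.json \
  --validation_file dev.json \
  --text_column_name words \
  --label_column_name ner_tags \
  --do_train --do_eval \
  --output_dir ./ner-output
```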
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
```
% python -m pytest -n auto --dist=loadfile -s -v ./examples/
...
Results (1653.79s):
18 passed
3 skipped
```
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12083",
"html_url": "https://github.com/huggingface/transformers/pull/12083",
"diff_url": "https://github.com/huggingface/transformers/pull/12083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12083.patch",
"merged_at": 1623326600000
} |
https://api.github.com/repos/huggingface/transformers/issues/12082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12082/comments | https://api.github.com/repos/huggingface/transformers/issues/12082/events | https://github.com/huggingface/transformers/pull/12082 | 916,058,131 | MDExOlB1bGxSZXF1ZXN0NjY1Nzg2MDQ5 | 12,082 | Add support for XLM-R XL and XXL models | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Soonhwan-Kwon conversion script is currently not working yet for the newer models, but it is working for XLM-R Base for example. Layer norm changes need to be done first in RoBERTa modeling code, so that conversion script is writing a correct model :)",
"Hi @stefan-it thanks for contributing the new models and do you have any plan to push the code and models into https://huggingface.co/models recently @patrickvonplaten ? ",
"Waiting for this model. Is there any expected timeline? @patrickvonplaten ",
"Should we try to look into it again @stefan-it ? :-)"
] | 1,623 | 1,630 | 1,627 | COLLABORATOR | null | Hi,
this PR adds support for the recently released XL and XXL models for XLM-R. These models are described in the ["Larger-Scale Transformers for Multilingual Masked Language Modeling"](https://arxiv.org/abs/2105.00572) paper.
It turns out that these new models are trained with a more recent version of `fairseq` compared to the "old" XLM-R Base and Large models. Only the current `master` version of `fairseq` is able to load these new models correctly. Unfortunately, some model changes were made (see [this](https://github.com/pytorch/fairseq/commit/54423d3b22a3e7f536e02e9e5445cef9becbd60d) refactoring commit), and the following changes also need to be made in the Transformers library:
The XLM-R Base and Large model used layer normalization in the embeddings, whereas the newer XL and XXL models do not make use of normalized embeddings: layer normalization is done at the end of the transformer. See discussion here: https://github.com/pytorch/fairseq/issues/3600
@patrickvonplaten proposed to introduce a new `RobertaConfig` variable - like `normalize_embeddings` - in order to reflect these model changes in `modeling_roberta.py` directly, instead of writing a new model class (which copies 99% of existing code).
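As a rough illustration of that idea (the class and attribute names below are hypothetical, not the final `modeling_roberta.py` change), the embedding layer norm could be gated on such a config flag:
```python
import torch.nn as nn

class EmbeddingsWithOptionalNorm(nn.Module):
    """Sketch only: apply LayerNorm on the embeddings when the config asks for it."""

    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
        # old XLM-R Base/Large style: True; new XL/XXL style: False
        if getattr(config, "normalize_embeddings", True):
            self.norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        else:
            self.norm = nn.Identity()

    def forward(self, input_ids):
        return self.norm(self.word_embeddings(input_ids))
```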
----
Changes made so far:
* [x] Update conversion script to work with the latest `fairseq` master version (*1.0.0a*)
Necessary changes:
* [ ] Introduce new config variable in `RobertaConfig` to indicate different layer normalization "strategies"
* [ ] Implement these different layer normalization settings in all modeling classes
* [ ] Re-run conversion script and upload converted XLM-R XL and XXL models to hub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12082/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12082/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12082",
"html_url": "https://github.com/huggingface/transformers/pull/12082",
"diff_url": "https://github.com/huggingface/transformers/pull/12082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12082.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12081/comments | https://api.github.com/repos/huggingface/transformers/issues/12081/events | https://github.com/huggingface/transformers/issues/12081 | 916,009,237 | MDU6SXNzdWU5MTYwMDkyMzc= | 12,081 | GPT2 Flax "TypeError: JAX only supports number and bool dtypes, got dtype object in array" | {
"login": "s4sarath",
"id": 10637096,
"node_id": "MDQ6VXNlcjEwNjM3MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/10637096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s4sarath",
"html_url": "https://github.com/s4sarath",
"followers_url": "https://api.github.com/users/s4sarath/followers",
"following_url": "https://api.github.com/users/s4sarath/following{/other_user}",
"gists_url": "https://api.github.com/users/s4sarath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s4sarath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s4sarath/subscriptions",
"organizations_url": "https://api.github.com/users/s4sarath/orgs",
"repos_url": "https://api.github.com/users/s4sarath/repos",
"events_url": "https://api.github.com/users/s4sarath/events{/privacy}",
"received_events_url": "https://api.github.com/users/s4sarath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"I see where this is coming from. For now, could you try initializing model and tokenizer like this\r\n```\r\ntokenizer = GPT2TokenizerFast.from_pretrained(model_id, padding_side=\"left\", pad_token=\"<|endoftext|>\")\r\nmodel = FlaxGPT2LMHeadModel.from_pretrained(model_id, pad_token_id=50256,)\r\n```\r\n\r\nWe'll soon publish a detailed colab about Flax generate ",
"Done buddy. Worked. Thanks a lot. \r\nDoes Flax ```model.generate``` makes use of caching ``` Query and Value ``` in attention layers ?\r\n\r\nI have too run a benchmark for generation. Its fair to compare only if caching is supported. ",
"yeah it only works with caching ",
"Thanks . Closing the issue."
] | 1,623 | 1,623 | 1,623 | NONE | null | On GPU
```
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2-medium")
>>> input_context = "The dog"
>>> # encode input context
>>> input_ids = tokenizer(input_context, return_tensors="jax").input_ids
>>> # generate candidates using sampling
>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
TypeError: JAX only supports number and bool dtypes, got dtype object in array
```
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12081/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12080/comments | https://api.github.com/repos/huggingface/transformers/issues/12080/events | https://github.com/huggingface/transformers/pull/12080 | 915,991,540 | MDExOlB1bGxSZXF1ZXN0NjY1NzMwMTg4 | 12,080 | Fix missing id2label and label2id in run_ner.py | {
"login": "lohjine",
"id": 66872975,
"node_id": "MDQ6VXNlcjY2ODcyOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/66872975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lohjine",
"html_url": "https://github.com/lohjine",
"followers_url": "https://api.github.com/users/lohjine/followers",
"following_url": "https://api.github.com/users/lohjine/following{/other_user}",
"gists_url": "https://api.github.com/users/lohjine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lohjine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lohjine/subscriptions",
"organizations_url": "https://api.github.com/users/lohjine/orgs",
"repos_url": "https://api.github.com/users/lohjine/repos",
"events_url": "https://api.github.com/users/lohjine/events{/privacy}",
"received_events_url": "https://api.github.com/users/lohjine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This has just been done in #12001 :-)\r\nThanks for the PR!"
] | 1,623 | 1,623 | 1,623 | NONE | null | This is to retain the NER labels when training, so they can be used to map the labels during later prediction.
This functionality is present in the old version [https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py#L170](https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py#L170), but missing in the current one.
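For illustration, keeping the maps on the config (which `save_pretrained` writes to `config.json`) looks roughly like this; the label list is a made-up example:
```python
from transformers import AutoConfig

label_list = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # example labels

config = AutoConfig.from_pretrained(
    "bert-base-cased",
    num_labels=len(label_list),
    id2label={i: label for i, label in enumerate(label_list)},
    label2id={label: i for i, label in enumerate(label_list)},
)
# config.save_pretrained(output_dir) then preserves the mapping for later prediction
```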
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12080/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12080",
"html_url": "https://github.com/huggingface/transformers/pull/12080",
"diff_url": "https://github.com/huggingface/transformers/pull/12080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12080.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12079/comments | https://api.github.com/repos/huggingface/transformers/issues/12079/events | https://github.com/huggingface/transformers/issues/12079 | 915,986,212 | MDU6SXNzdWU5MTU5ODYyMTI= | 12,079 | Use Distilbert to run language model, encounter error "Unrecognized configuration class " | {
"login": "OleNet",
"id": 3206718,
"node_id": "MDQ6VXNlcjMyMDY3MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3206718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OleNet",
"html_url": "https://github.com/OleNet",
"followers_url": "https://api.github.com/users/OleNet/followers",
"following_url": "https://api.github.com/users/OleNet/following{/other_user}",
"gists_url": "https://api.github.com/users/OleNet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OleNet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OleNet/subscriptions",
"organizations_url": "https://api.github.com/users/OleNet/orgs",
"repos_url": "https://api.github.com/users/OleNet/repos",
"events_url": "https://api.github.com/users/OleNet/events{/privacy}",
"received_events_url": "https://api.github.com/users/OleNet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As printed by the error, DistilBERT is not supported by `AutoModelForCausalLM`, since it's an encoder-only model. Please use one of the supported models to perform autoregressive (i.e. left-to-right) language modeling.",
"Why the [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) have gave a example of distilbert ? ",
"The example you refer to is `run_mlm.py` (mlm is short for masked language modeling). However, the script you're using above is `run_clm.py` (clm is short for causal language modeling, also called autoregressive language modeling). DistilBERT only supports mlm, not clm. ",
"Ah! yes, you are right!\r\nMy fault !\r\nThanks for your answer."
] | 1,623 | 1,623 | 1,623 | NONE | null |
- `transformers` version: 4.6.1
- Platform: centos 7.5
- Python version: 3.7
- PyTorch version (GPU?): 1.10
- Using GPU in script?: v100
- Using distributed or parallel set-up in script?: no
## To reproduce
```
python3 run_clm.py \
--model_name_or_path distilbert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir ./tmp/test-clm
```
```
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,729 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
Traceback (most recent call last):
File "run_clm.py", line 536, in <module>
main()
File "run_clm.py", line 322, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/ssd2/liujiaxiang/workfiles/transformer_invariant_bigger/transformers/src/transformers/models/auto/auto_factory.py", line 397, in from_pretrained
f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
ValueError: Unrecognized configuration class <class 'transformers.models.distilbert.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of RoFormerConfig, BigBirdPegasusConfig, GPTNeoConfig, BigBirdConfig, CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig, BartConfig, MBartConfig, PegasusConfig, MarianConfig, BlenderbotConfig, BlenderbotSmallConfig, MegatronBertConfig.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12078/comments | https://api.github.com/repos/huggingface/transformers/issues/12078/events | https://github.com/huggingface/transformers/issues/12078 | 915,721,071 | MDU6SXNzdWU5MTU3MjEwNzE= | 12,078 | OSError: Unable to open file (file signature not found) | {
"login": "Holy-Shine",
"id": 14997709,
"node_id": "MDQ6VXNlcjE0OTk3NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/14997709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Holy-Shine",
"html_url": "https://github.com/Holy-Shine",
"followers_url": "https://api.github.com/users/Holy-Shine/followers",
"following_url": "https://api.github.com/users/Holy-Shine/following{/other_user}",
"gists_url": "https://api.github.com/users/Holy-Shine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Holy-Shine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Holy-Shine/subscriptions",
"organizations_url": "https://api.github.com/users/Holy-Shine/orgs",
"repos_url": "https://api.github.com/users/Holy-Shine/repos",
"events_url": "https://api.github.com/users/Holy-Shine/events{/privacy}",
"received_events_url": "https://api.github.com/users/Holy-Shine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Holy-Shine \r\nTry:\r\n```python\r\nfrom transformers import TFAutoModel\r\nmodel = TFAutoModel.from_pretrained(\"hfl/chinese-bert-wwm-ext\")\r\n```",
"\r\n@vishal-burman \r\nthanks! It works for me. \r\nAnd I found that my tf_model.h5 file in my local dir definitely too \"thin\" that model loader cannot figure out it."
] | 1,623 | 1,623 | 1,623 | NONE | null | python version: 3.7.6
transformers: 4.6.1
tensorflow-cpu: 2.3.1
my code:
```python
from transformers import TFAutoModel
model = TFAutoModel.from_pretrained("./chinese-bert-wwm-ext")
```
and `chinese-bert-wwm-ext` is a model dir that is downloaded from [https://huggingface.co/models](url).
After I run this code in my jupyter notebook, I get an OSError:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1291 try:
-> 1292 missing_keys, unexpected_keys = load_tf_weights(model, resolved_archive_file, load_weight_prefix)
1293 except OSError:
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in load_tf_weights(model, resolved_archive_file, _prefix)
470 # Read the H5 file
--> 471 with h5py.File(resolved_archive_file, "r") as f:
472 # Retrieve the name of each layer from the H5 file
~\Anaconda3\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
407 fapl, fcpl=make_fcpl(track_order=track_order),
--> 408 swmr=swmr)
409
~\Anaconda3\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (file signature not found)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-8-724814da42c1> in <module>
----> 1 model = TFAutoModel.from_pretrained('./chinese-bert-wwm-ext/')
~\Anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
379 if type(config) in cls._model_mapping.keys():
380 model_class = _get_model_class(config, cls._model_mapping)
--> 381 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
382 raise ValueError(
383 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1293 except OSError:
1294 raise OSError(
-> 1295 "Unable to load weights from h5 file. "
1296 "If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. "
1297 )
OSError: Unable to load weights from h5 file. If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12078/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12077/comments | https://api.github.com/repos/huggingface/transformers/issues/12077/events | https://github.com/huggingface/transformers/pull/12077 | 915,686,956 | MDExOlB1bGxSZXF1ZXN0NjY1NDc0NDI1 | 12,077 | [Deepspeed] new docs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR expands/improves Deepspeed docs:
- documents `sub_group_size` tuneup (thanks @samyam); a config sketch follows this list
- updates install info
- adds issue filing instructions
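For context, `sub_group_size` lives under `zero_optimization` in the DeepSpeed config; a minimal fragment with an illustrative value (not a recommendation) looks like this:
```python
# illustrative ZeRO stage-3 fragment; smaller sub-groups bound the amount of
# state handled per optimizer-step / offload chunk, at some speed cost
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "sub_group_size": 1_000_000_000,
    }
}
```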
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12077",
"html_url": "https://github.com/huggingface/transformers/pull/12077",
"diff_url": "https://github.com/huggingface/transformers/pull/12077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12077.patch",
"merged_at": 1624471658000
} |
https://api.github.com/repos/huggingface/transformers/issues/12076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12076/comments | https://api.github.com/repos/huggingface/transformers/issues/12076/events | https://github.com/huggingface/transformers/pull/12076 | 915,672,679 | MDExOlB1bGxSZXF1ZXN0NjY1NDYyNzQw | 12,076 | [wav2vec2 / Deepspeed] sync LayerDrop for Wav2Vec2Encoder + tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | This PR continues https://github.com/huggingface/transformers/pull/11638 and:
- adds the same gpu syncing for `Wav2Vec2Encoder` LayerDrop as there is for `Wav2Vec2EncoderStableLayerNorm` (a rough illustration of the idea follows below)
- double the tests to test `"patrickvonplaten/wav2vec2_tiny_random"` to exercise `Wav2Vec2Encoder` too
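For background, the snippet below is only a rough illustration of the idea, not the PR's actual diff: if ranks draw different LayerDrop decisions, the collective parameter gathering under ZeRO-3 can hang, so the decision has to be shared, e.g. by broadcasting rank 0's draw:
```python
import torch
import torch.distributed as dist

def layer_is_kept(layerdrop: float, device: torch.device) -> bool:
    # hypothetical helper: draw once, then share the decision so every rank
    # either runs the layer (and joins its all-gather) or skips it together
    decision = (torch.rand(1, device=device) >= layerdrop).long()
    if dist.is_available() and dist.is_initialized():
        dist.broadcast(decision, src=0)
    return bool(decision.item())
```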
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12076",
"html_url": "https://github.com/huggingface/transformers/pull/12076",
"diff_url": "https://github.com/huggingface/transformers/pull/12076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12076.patch",
"merged_at": 1623241263000
} |
https://api.github.com/repos/huggingface/transformers/issues/12075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12075/comments | https://api.github.com/repos/huggingface/transformers/issues/12075/events | https://github.com/huggingface/transformers/issues/12075 | 915,516,235 | MDU6SXNzdWU5MTU1MTYyMzU= | 12,075 | Using whitespace tokenizer for training models | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Thanks a lot for the well-crafted issue and reproducer, this is very helpful. Regarding your problem 2, I have a question: why are you saving the tokenizer's model, rather than the tokenizer itself? \r\n\r\nI would argue that saving the entire tokenizer in a `tokenizer.json` would be better:\r\n```py\r\n# And now it is ready, we can save the vocabulary with\r\ntokenizer.save('./tok/tokenizer.json')\r\n``` \r\nThen you'll be able to reload your fast tokenizer (that is looking for a `tokenizer.json` file!) seamlessly:\r\n```py\r\nfrom transformers import BigBirdTokenizerFast\r\n\r\ntokenizer = BigBirdTokenizerFast.from_pretrained(\"tok\", max_len=16000)\r\n```\r\nI also verified that you do indeed recover the same encoding as when using the `tokenizers` library:\r\n```py\r\n>>> tokenizer(\"23 39999 999 8888 212\").tokens()\r\n['23', '39999', '999', '8888', '212']\r\n```\r\n\r\nRegarding your first question, I don't see anywhere in your code where you're adding a BERT template processor. I've taken the liberty to add it right after your `tokenizer` creation, see below. I am unaware of the error you got, but when trying it I had an error saying that `tokenizer.token_to_id(\"<s>\")` was returning `None`. \r\n\r\nTo fix this you can specify that `<s>` and `<s/>` are special tokens when initializing your BPE trainer, as I have done below.\r\n\r\n```py\r\nfrom tokenizers import Tokenizer, trainers\r\nfrom tokenizers.models import BPE\r\nfrom tokenizers.normalizers import Lowercase\r\nfrom tokenizers.pre_tokenizers import CharDelimiterSplit\r\n\r\n# We build our custom tokenizer:\r\ntokenizer = Tokenizer(BPE()) \r\ntokenizer.normalizer = Lowercase()\r\ntokenizer.pre_tokenizer = CharDelimiterSplit(' ')\r\n\r\n# We can train this tokenizer by giving it a list of path to text files:\r\ntrainer = trainers.BpeTrainer(special_tokens=[\"[UNK]\", \"<s>\", \"</s>\"], show_progress=True)\r\ntokenizer.train(files=['/content/dataset.txt'], trainer=trainer)\r\n\r\nfrom tokenizers.processors import BertProcessing\r\nimport tokenizers \r\n\r\ntokenizer.post_processor = tokenizers.processors.BertProcessing(\r\n (\"</s>\", tokenizer.token_to_id(\"</s>\")),\r\n (\"<s>\", tokenizer.token_to_id(\"<s>\")),\r\n)\r\ntokenizer.enable_truncation(max_length=16000)\r\n```\r\n\r\nAfter this, encoding a sequence returns the correct tokens with the correct special tokens:\r\n```py\r\n>>> tokenizer.encode(\"23 39999 999 8888 212\").tokens\r\n['<s>', '23', '39999', '999', '8888', '212', '</s>']\r\n```",
"Thanks a ton @LysandreJik and replying so quickly and efficiently :cake: :+1: :rocket: !!! \r\n\r\nFor anyone else who might stumble on this problem, I have modified a simple example via the [Colab](https://colab.research.google.com/drive/1z_GzMGpcl-7Vg7eWUPOqybojDfw2gli_?usp=sharing) link attached above. If in any case it might not be working, I have uploaded the `.ipynb` file alongside this comment too. :hugs: \r\n\r\nHave a fantastic day!\r\n\r\n[HF_issue_repro.zip](https://github.com/huggingface/transformers/files/6623500/HF_issue_repro.zip)\r\n\r\n",
"@LysandreJik Sorry to disturb you again, but I had this peculiar problem. I wanted to train BigBird on TPU, and its reporting that the config.json might have missing parameters.\r\n```py\r\n[INFO|tokenization_auto.py:427] 2021-06-25 12:16:10,662 >> Could not locate the tokenizer configuration file, will try to use the model config instead.\r\n[INFO|configuration_utils.py:528] 2021-06-25 12:16:10,668 >> loading configuration file ./tok/config.json\r\nException in device=TPU:0: Unrecognized model in ./tok. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: visual_bert, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron_bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 329, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 323, in _start_fn\r\n fn(gindex, *args)\r\n File \"/content/run_mlm.py\", line 520, in _mp_fn\r\n main()\r\n File \"/content/run_mlm.py\", line 313, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py\", line 529, in from_pretrained\r\n config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py\", line 457, in from_pretrained\r\n f\"Unrecognized model in {pretrained_model_name_or_path}. \"\r\nValueError: Unrecognized model in ./tok. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: visual_bert, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron_bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas\r\n```\r\nSo apparently, I have been saving the tokenizer's state only, not the entire model. This is how I am doing\r\n```py\r\n!mkdir tok\r\n# And now it is ready, we can save the tokenizer's state only, not the model\r\ntokenizer.save('./tok/config.json')\r\n```\r\nI think that `config.json` might be a product of the tokenizer's model when saving, which we are omitting by saving the state only?\r\nTo make sure, I searched the `json` file to confirm that key is indeed not present there.\r\n\r\nWould you happen to have a clue as to what I can do here?",
"Assuming the tokenizer state to be saved is the specific one for the model, I did this\r\n```py\r\ntokenizer = BigBirdTokenizerFast.from_pretrained(\"/content/tok\", max_len=16000)\r\ntokenizer.save_pretrained('./tokenizer')\r\n```\r\nAnd tried to load the tokenizer again. However, I can't verify whether it works because upon running the script, I lose connection to the instance :thinking: \r\n\r\nIs this the correct usage though?",
"Hi @neel04.\r\n\r\nI'm thinking you're facing an issue that was solved in the latest `transformers` release. Before the latest `transformers` release, `AutoTokenizer` couldn't guess which tokenizer to load from *just* the tokenizer files, it also needed to have access to the model's `config.json` in order to see the model and tokenizer classes.\r\n\r\nIt was addressed in the latest `transformers` release, where the tokenizer class would now be saved in `tokenizer_config.json`.\r\n\r\nPlease let me know if either of these fixes work:\r\n\r\n1. Upgrade to the latest version, complete the `tokenizer_config.json` in your `./tok` directory with the following:\r\n```\r\n\"tokenizer_class\": \"BigBirdTokenizer\"\r\n```\r\nIf it's not present, then create it.\r\n\r\n2. Stay at your current version, and add a `config.json` file containing the same information in your `./tok` folder.\r\n\r\nRegarding your second question, yes, using `save_pretrained` alongside `from_pretrained` is the correct usage.",
"Hey @LysandreJik,\r\nThanks a ton for the tips, I will surely try them if I face this error again! :hugs:\r\n\r\nI am using the `master` branch now for my project, so I hope I won't face this problem again. However, I can't completely verify whether it works because I am unable to run it on TPU due to some memory leak.\r\n\r\nIf related problems arise, I would surely try out either of your fixes :rocket: \r\n\r\nHave a fantastic day!"
] | 1,623 | 1,624 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes/depends
- Using distributed or parallel set-up in script?: No
### Who can help
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): `BigBird`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I have a dataset for which I wanted to use a tokenizer based on whitespace rather than any subword segmentation approach.
This snippet I got off github has a way to construct and use the custom tokenizer that operates on whitespaces:-
```py
from tokenizers import Tokenizer, trainers
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase
from tokenizers.pre_tokenizers import CharDelimiterSplit
# We build our custom tokenizer:
tokenizer = Tokenizer(BPE())
tokenizer.normalizer = Lowercase()
tokenizer.pre_tokenizer = CharDelimiterSplit(' ')
# We can train this tokenizer by giving it a list of path to text files:
trainer = trainers.BpeTrainer(special_tokens=["[UNK]"], show_progress=True)
tokenizer.train(files=['/content/dataset.txt'], trainer=trainer)
```
I wanted to use it for pre-training the `BigBird` model, but facing two issues:
1. I can't seem to be able to use this snippet with the custom `tokenizer` above to convert tokenized sentences into model-friendly sequences
```py
from tokenizers.processors import BertProcessing
tokenizer._tokenizer.post_processor = tokenizers.processors.BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=16000)
```
This returns me an error, and without any preprocessing the output does not contain the sequence start and end tokens (`<s>`; `</s>`) as expected.
2. Next problem arises, when I save the tokenizer state in the specified folder, I am unable to use it via:
```py
tokenizer = BigBirdTokenizerFast.from_pretrained("./tok", max_len=16000)
```
since it yields the error that my directory does not "reference" the tokenizer files, which shouldn't be an issue since using `RobertaTokenizerFast` does work - I assume it has something to do with the tokenization `post-processing` phase.
<h2>Fully Reproducible Colab</h2>
I am really confused about this - I have created a fully reproducible colab notebook, with commented problems and synthetic data. Please find it [here](https://colab.research.google.com/drive/1z_GzMGpcl-7Vg7eWUPOqybojDfw2gli_?usp=sharing).
Thanx a ton in advance!!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12074/comments | https://api.github.com/repos/huggingface/transformers/issues/12074/events | https://github.com/huggingface/transformers/pull/12074 | 915,393,245 | MDExOlB1bGxSZXF1ZXN0NjY1MjEyMDc3 | 12,074 | [test] support more than 2 gpus | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | This is just a small tweak so that `tests/test_trainer.py::TrainerIntegrationTest::test_fp16_full_eval` does not fail on rigs with 3+ GPUs.
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12074/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12074",
"html_url": "https://github.com/huggingface/transformers/pull/12074",
"diff_url": "https://github.com/huggingface/transformers/pull/12074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12074.patch",
"merged_at": 1623255827000
} |
https://api.github.com/repos/huggingface/transformers/issues/12073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12073/comments | https://api.github.com/repos/huggingface/transformers/issues/12073/events | https://github.com/huggingface/transformers/issues/12073 | 915,360,083 | MDU6SXNzdWU5MTUzNjAwODM= | 12,073 | src_lang/tgt_lang missing in mbart example | {
"login": "zijwang",
"id": 25057983,
"node_id": "MDQ6VXNlcjI1MDU3OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25057983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijwang",
"html_url": "https://github.com/zijwang",
"followers_url": "https://api.github.com/users/zijwang/followers",
"following_url": "https://api.github.com/users/zijwang/following{/other_user}",
"gists_url": "https://api.github.com/users/zijwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijwang/subscriptions",
"organizations_url": "https://api.github.com/users/zijwang/orgs",
"repos_url": "https://api.github.com/users/zijwang/repos",
"events_url": "https://api.github.com/users/zijwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Tagging @patrickvonplaten @patil-suraj @LysandreJik again in case you know what was going on here. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @zijwang , thanks a lot for spotting this.\r\n\r\nThe tokenizer API is changed a bit, instead of passing `src_lang` and `tgt_lang` to tokenizes `__call__` method, we can now pass these when initializing the tokenizer, or we could set those properties as well. Here's a minimal example \r\n\r\n```python\r\nfrom transformers import MBartForConditionalGeneration, MBartTokenizer\r\n\r\ntokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-en-ro\", src_lang=\"en_XX\", tgt_lang=\"ro_RO\")\r\n\r\n# to change the src_lang\r\ntokenizer.src_lang = \"fr_XX\"\r\n```"
] | 1,623 | 1,630 | 1,630 | NONE | null | ## Environment info
- `transformers` version: 4.6
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
### Who can help
Models:
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
Library:
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): mbart
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
I am running the official example in the doc [here](https://huggingface.co/transformers/model_doc/mbart.html) under `Supervised training`. However, there is a warning of
```
Keyword arguments {'src_lang': 'en_XX', 'tgt_lang': 'ro_RO'} not recognized.
```
when running
```
inputs = tokenizer(example_english_phrase, return_tensors="pt", src_lang="en_XX", tgt_lang="ro_RO")
```
Is this normal? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12073/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12072/comments | https://api.github.com/repos/huggingface/transformers/issues/12072/events | https://github.com/huggingface/transformers/issues/12072 | 915,261,622 | MDU6SXNzdWU5MTUyNjE2MjI= | 12,072 | Inconsistent behavior on CPU vs. GPU | {
"login": "mar-muel",
"id": 19345805,
"node_id": "MDQ6VXNlcjE5MzQ1ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19345805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-muel",
"html_url": "https://github.com/mar-muel",
"followers_url": "https://api.github.com/users/mar-muel/followers",
"following_url": "https://api.github.com/users/mar-muel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-muel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-muel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-muel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-muel/orgs",
"repos_url": "https://api.github.com/users/mar-muel/repos",
"events_url": "https://api.github.com/users/mar-muel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-muel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! This is weird, you indeed get a significantly different output. Running your exact code sample above, only changing the device to `cuda` yields the same results for me:\r\n```\r\ntensor([[0.0769]])\r\ntensor([[0.0769]])\r\n```\r\n\r\nTried it a few times, and I always get the same results - I've added an additional statement to ensure we get the exact same output:\r\n```py\r\nprint(torch.allclose(pred1, pred2))\r\n```\r\n\r\nAnd we do! \r\n\r\nI feel this may be a setup issue - would you mind opening trying it on Colab and sharing it if you get the same results so that I can investigate?",
"Thanks a lot @LysandreJik - Yes, indeed there's no issues on Colab.\r\n\r\nI turns out the problem only occurs with PyTorch versions\r\n```bash\r\n# pip freeze | grep torch\r\ntorch==1.8.1+cu111\r\ntorchaudio==0.8.1\r\ntorchvision==0.9.1+cu111\r\n```\r\n\r\nBut using `torch==1.8.1` works fine. \r\n\r\nThis is the output of my `nvidia-smi`:\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\r\n| N/A 49C P0 70W / 149W | 0MiB / 11441MiB | 100% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| No running processes found |\r\n+-----------------------------------------------------------------------------+\r\n```\r\n\r\nI created my environment like this:\r\n```bash\r\nconda create -n ml python==3.8\r\nconda activate ml\r\npip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html\r\npip install transformers\r\n```\r\n\r\nWould you mind checking whether you can reproduce with the above? \r\n\r\nI'd really like to understand what's going on here π
",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): AutoModel
## To reproduce
Steps to reproduce the behavior:
Hi all - I've been struggling with inconsistent behavior on CPU vs. GPU.
When running on CPU the following code works as expected:
```Python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def predict(model, tokenizer, test_str, device):
    input_ids = tokenizer(test_str, return_tensors='pt', padding=True).to(device)
    model.to(device)
    model.eval()
    with torch.no_grad():
        pred = model(**input_ids)
    logits = pred.logits.cpu()
    return logits
device = 'cpu'
model_dir = 'test_dir'
model_type = 'roberta-base'
test_str = [
'Hello! I am a test string!',
]
model = AutoModelForSequenceClassification.from_pretrained(model_type, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_type)
# save model
model.save_pretrained(model_dir)
pred1 = predict(model, tokenizer, test_str, device)
print(pred1)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
pred2 = predict(model, tokenizer, test_str, device)
print(pred2)
```
Output:
```
# Obviously output is random, however is identical
tensor([[-0.0238]])
tensor([[-0.0238]])
```
But when I change to CUDA by changing the device:
```python
device = 'cuda'
```
I get a significantly different output:
```
tensor([[-0.3194]])
tensor([[-0.3414]])
```
Weirdly the above doesn't happen if I increase the length of my test string:
```
test_str = [
'Hello! I am a test string! Hello! I am a test string! Hello! I am a test string! Hello! I am a test string! ',
]
```
I'm pretty sure I'm missing something obvious - any help is appreciated! π
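For what it's worth, a hedged debugging sketch (assuming the script above has just run, so `model` and `model_dir` are still in scope) to confirm whether the reloaded weights match exactly; if they do, the discrepancy is isolated to the CUDA forward pass rather than to serialization:
```python
import torch
from transformers import AutoModelForSequenceClassification

reloaded = AutoModelForSequenceClassification.from_pretrained(model_dir)
reloaded_sd = reloaded.state_dict()
for name, param in model.state_dict().items():
    if not torch.equal(param.cpu(), reloaded_sd[name].cpu()):
        print("mismatch in", name)
```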
## Expected behavior
I expect the output of the loaded model to be identical not only on CPU but also on GPU.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12071/comments | https://api.github.com/repos/huggingface/transformers/issues/12071/events | https://github.com/huggingface/transformers/issues/12071 | 915,139,509 | MDU6SXNzdWU5MTUxMzk1MDk= | 12,071 | XLM-R XL/XXL | {
"login": "sbmaruf",
"id": 32699797,
"node_id": "MDQ6VXNlcjMyNjk5Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/32699797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbmaruf",
"html_url": "https://github.com/sbmaruf",
"followers_url": "https://api.github.com/users/sbmaruf/followers",
"following_url": "https://api.github.com/users/sbmaruf/following{/other_user}",
"gists_url": "https://api.github.com/users/sbmaruf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbmaruf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbmaruf/subscriptions",
"organizations_url": "https://api.github.com/users/sbmaruf/orgs",
"repos_url": "https://api.github.com/users/sbmaruf/repos",
"events_url": "https://api.github.com/users/sbmaruf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbmaruf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I believe @stefan-it is on it :sunglasses: ",
"I successfully converted xlmr.xl to huggingface model.\r\n```\r\ntorch.Size([1, 11, 250880]) torch.Size([1, 11, 250880])\r\nmax_absolute_diff = 4.482269287109375e-05\r\nDo both models output the same tensors? π₯\r\nSaving model to converted_xlmr.xl2\r\nConfiguration saved in converted_xlmr.xl2/config.json\r\nModel weights saved in converted_xlmr.xl2/pytorch_model.bin\r\n```\r\nis there anything I can do to help?\r\n\r\nI'm middle of converting xxl model too.",
"While processing xxl it produces output with absolute error of 0.003273.\r\nIs it possible because of the model size(10.7B)?\r\n```\r\ntorch.Size([1, 11, 250880]) torch.Size([1, 11, 250880])\r\nmax_absolute_diff = 0.00327301025390625\r\nDo both models output the same tensors? π©\r\n```",
"@Soonhwan-Kwon Did you able to solve the issue? I don't have much experience. But I think the good error margin is `< 1e-6`. ",
"@sbmaruf I found out that model conversion in fairseq ver 0.10.2 produced wrong result on both side, and it made min absolute diff small. @stefan-it told that he made it work and it is a great news! https://github.com/huggingface/transformers/pull/12082",
"I've managed to get the same value and pushed PR in @stefan-it's repo.\r\n```\r\nour_output\r\ntensor([[[ 4.9569e+01, -1.0970e+00, 3.6279e+01, ..., 1.3821e+00,\r\n 1.2402e+00, 1.0905e+01],\r\n [ 8.5117e+00, -9.9209e-02, 3.3087e+01, ..., 1.4223e+00,\r\n 1.5715e+00, 1.1260e+01],\r\n [ 9.4228e+00, 1.8814e-01, 2.4515e+01, ..., 2.4245e+00,\r\n 1.0935e+00, 1.1929e+01],\r\n ...,\r\n [ 8.8886e+00, -1.7367e-02, 2.5994e+01, ..., 1.9401e+00,\r\n 1.8700e+00, 1.2002e+01],\r\n [ 9.7415e+00, -2.6768e-01, 3.2220e+01, ..., 1.9813e+00,\r\n 1.3128e+00, 9.6978e+00],\r\n [ 1.6002e+01, 1.6512e+00, 5.7907e+01, ..., 1.9653e+00,\r\n 1.3225e+00, 1.8848e+01]]], grad_fn=<AddBackward0>)\r\ntheir_output\r\ntensor([[[ 4.9569e+01, -1.0970e+00, 3.6280e+01, ..., 1.3821e+00,\r\n 1.2402e+00, 1.0905e+01],\r\n [ 8.5117e+00, -9.9211e-02, 3.3087e+01, ..., 1.4223e+00,\r\n 1.5715e+00, 1.1260e+01],\r\n [ 9.4228e+00, 1.8814e-01, 2.4515e+01, ..., 2.4245e+00,\r\n 1.0935e+00, 1.1929e+01],\r\n ...,\r\n [ 8.8886e+00, -1.7370e-02, 2.5994e+01, ..., 1.9401e+00,\r\n 1.8700e+00, 1.2002e+01],\r\n [ 9.7415e+00, -2.6768e-01, 3.2220e+01, ..., 1.9813e+00,\r\n 1.3128e+00, 9.6978e+00],\r\n [ 1.6002e+01, 1.6512e+00, 5.7907e+01, ..., 1.9653e+00,\r\n 1.3225e+00, 1.8848e+01]]], grad_fn=<AddBackward0>)\r\n```",
"Hi @Soonhwan-Kwon Thanks for contributing the convertion code. Have you tested whether you could load the converted xlmr-xl or xlm-xxl using huggingface? ",
"@ccclyu Yes I have tested the model and confirmed the better performance than xlmr large model in specific task. ",
"@Soonhwan-Kwon Glad to know that. I have successfully converted the parameters using your PR https://github.com/stefan-it/transformers/pull/1 but it may have minor conflict with the current transformer codebase. \r\n\r\nBy the way, how do you load the huge model (13GB parameters for xlm-xl) using huggingface since one single GPU could not load the whole model? Did you use DeepSpeed for model parallels ? \r\n\r\n \r\n\r\n ",
"@ccclyu There are many options and deepspeed is the one option as you mentioned, and you can freeze layers to reduce gpu memory usage.",
"In progress in https://github.com/huggingface/transformers/pull/13210 by @Soonhwan-Kwon ",
"is there any news about this?"
] | 1,623 | 1,695 | 1,695 | NONE | null | # π New model addition
## Model description
The larger version of XLMR.
[Source](https://github.com/pytorch/fairseq/tree/master/examples/xlmr)
Model | Description | #params | vocab size | Download
---|---|---|---|---
`xlmr.xl` | XLM-R (`layers=36, model_dim=2560`) | 3.5B | 250k | [xlm.xl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz)
`xlmr.xxl` | XLM-R (`layers=48, model_dim=4096`) | 10.7B | 250k | [xlm.xxl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz)
## Open source status
* [x] the model implementation is available: (give details) -> Already available in huggingface
* [x] the model weights are available: (give details) -> link + source provided.
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12071/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12071/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12070/comments | https://api.github.com/repos/huggingface/transformers/issues/12070/events | https://github.com/huggingface/transformers/pull/12070 | 915,040,737 | MDExOlB1bGxSZXF1ZXN0NjY0OTA3NDcy | 12,070 | Properly indent block_size | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | COLLABORATOR | null | # What does this PR do?
Fixes a typo in the run_clm example. Fixes #12048 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12070/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12070",
"html_url": "https://github.com/huggingface/transformers/pull/12070",
"diff_url": "https://github.com/huggingface/transformers/pull/12070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12070.patch",
"merged_at": 1623162422000
} |
https://api.github.com/repos/huggingface/transformers/issues/12069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12069/comments | https://api.github.com/repos/huggingface/transformers/issues/12069/events | https://github.com/huggingface/transformers/pull/12069 | 915,039,001 | MDExOlB1bGxSZXF1ZXN0NjY0OTA1OTc2 | 12,069 | [WIP] Add helper function to align labels between datasets and model config | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this in favour of implementing generic functionality on the `datasets` side here: https://github.com/huggingface/datasets/pull/2457"
] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
This PR adds a helper function to align the `label2id` and `id2label` mappings between a `datasets.Dataset` and `PretrainedConfig`, with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
from transformers.modeling_utils import align_dataset_labels_with_config
mnli_enc_aligned = align_dataset_labels_with_config(dataset=mnli_enc, config=model.config, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq
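For comparison, a minimal sketch of doing the remapping by hand with `datasets.Dataset.map` (hedged: the dataset-side mapping is hard-coded here, and the label names are upper-cased because the model config above uses upper-case names while the dataset uses lower-case ones):
```python
dataset_id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}

def align_labels(example):
    # dataset label id -> label name -> model label id
    label_name = dataset_id2label[example["label"]].upper()
    example["label"] = model.config.label2id[label_name]
    return example

mnli_enc_aligned = mnli_enc.map(align_labels)
```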
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12069/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12069",
"html_url": "https://github.com/huggingface/transformers/pull/12069",
"diff_url": "https://github.com/huggingface/transformers/pull/12069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12069.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12068/comments | https://api.github.com/repos/huggingface/transformers/issues/12068/events | https://github.com/huggingface/transformers/issues/12068 | 914,932,225 | MDU6SXNzdWU5MTQ5MzIyMjU= | 12,068 | grads is None when using GPT2 transformers in tensorflow | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | transformers ver: `4.7.0.dev0`
```
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel, TFGPT2Model, TFAutoModelForCausalLM
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token # to avoid an error
gpt2 = TFGPT2LMHeadModel.from_pretrained('gpt2')
gpt2.trainable = True
num_return_sequences = 1
#token_lens = [len(tokenizer.tokenize(sent)) for sent in prompts]
#max_length = math.ceil(np.array(token_lens).max())*2
max_len = get_tokens_len(ds, 0.99)
cce = tf.keras.losses.CategoricalCrossentropy()
optimizer = keras.optimizers.Adam(learning_rate=0.0001)
def loss_fn(output_sequences, labels):
syn_sents = tokenizer.batch_decode(output_sequences, clean_up_tokenization_spaces=True, skip_special_tokens=True)
syn_sents_pure = []
for sent, sent_syn in zip(prompts, syn_sents):
syn_sents_pure.append(sent_syn.replace(sent, '').replace('\n',' ').strip())
preds = model(np.array(syn_sents_pure))
assert preds.shape[0] == len(prompts) and preds.shape[1] == num_classes
label_oht = tf.keras.utils.to_categorical( np.array([label_idx[l] for l in labels]), num_classes = num_classes, dtype='int' )
label_oht_tf = tf.convert_to_tensor(label_oht)
assert label_oht.shape == preds.shape
loss_value = cce(label_oht_tf, preds)#.numpy()
return loss_value
rows = ds.df_test.sample(5)
prompts = rows['content'].tolist()
labels = rows['label'].tolist()
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
#logits = model(x_batch_train, training=True) # Logits for this minibatch
inputs = tokenizer(prompts, padding='max_length', truncation=True, max_length=max_len, return_tensors="tf")
output_sequences = gpt2.generate(
input_ids = inputs['input_ids'],
attention_mask = inputs['attention_mask'],
max_length= max_len*2,
temperature=1,
top_k=0,
top_p=0.9,
repetition_penalty=1,
do_sample=True,
num_return_sequences=num_return_sequences
)
# Compute the loss value for this minibatch.
loss_value = loss_fn(output_sequences, labels) # <tf.Tensor: shape=(), dtype=float32, numpy=0.062384058>
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, gpt2.trainable_weights)
```
I load the pre-trained GPT-2 model via `TFGPT2LMHeadModel` and use the sentences it synthesizes from the prompts to calculate the loss.
The loss looks OK - it is a tensor, such as:
> <tf.Tensor: shape=(), dtype=float32, numpy=1.0446845>
But all the elements of `grads` are None.
Why is this? Any hints?
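For reference, a minimal sketch of a setup in which gradients do flow (hedged: this is not the pipeline above, just an illustration - gradients can only propagate through TensorFlow ops, whereas `generate()` followed by `batch_decode()` yields plain Python strings/ids and disconnects the loss from `gpt2.trainable_weights`):
```
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token
gpt2 = TFGPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer(["Hello! I am a test string!"], return_tensors="tf", padding=True)
with tf.GradientTape() as tape:
    # passing labels makes the model return a language-modeling loss that stays
    # connected to the trainable weights
    outputs = gpt2(inputs['input_ids'], attention_mask=inputs['attention_mask'], labels=inputs['input_ids'])
    loss_value = tf.reduce_mean(outputs.loss)
grads = tape.gradient(loss_value, gpt2.trainable_weights)  # no longer all None
```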
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12068/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12067/comments | https://api.github.com/repos/huggingface/transformers/issues/12067/events | https://github.com/huggingface/transformers/issues/12067 | 914,736,726 | MDU6SXNzdWU5MTQ3MzY3MjY= | 12,067 | Selecting specific GPU CUDA devices | {
"login": "kenghweeng",
"id": 16697123,
"node_id": "MDQ6VXNlcjE2Njk3MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16697123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenghweeng",
"html_url": "https://github.com/kenghweeng",
"followers_url": "https://api.github.com/users/kenghweeng/followers",
"following_url": "https://api.github.com/users/kenghweeng/following{/other_user}",
"gists_url": "https://api.github.com/users/kenghweeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenghweeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenghweeng/subscriptions",
"organizations_url": "https://api.github.com/users/kenghweeng/orgs",
"repos_url": "https://api.github.com/users/kenghweeng/repos",
"events_url": "https://api.github.com/users/kenghweeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenghweeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When you do `CUDA_VISIBLE_DEVICES=1,8`, CUDA will still call the two available GPUs 0 and 1, 0 will correspond to 1 and 1 to 8. I f you look at the output of `nvidia-smi`, you will see the training will only run on GPUs 1 and 8.",
"Thank you, silly me. I'll close the issue, thanks!"
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
Hello @sgugger,
Steps to reproduce the behavior:
1. I would like to use selected CUDA GPU devices (out of the 8 available) with the HF `Trainer` class. I've written something along the following lines:
2. So, I've done `export CUDA_VISIBLE_DEVICES=1,8` to select specific GPU devices, and ran:
```
training_args = TrainingArguments(
output_dir=self._output_dir,
overwrite_output_dir=True,
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size= int(batch_size/2), # since evaluation per
logging_steps = 20,
save_total_limit = 20,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy = "steps",
load_best_model_at_end = True,
eval_accumulation_steps = 1,
logging_dir = "logs"
)
trainer = Trainer(
model=self._model,
args=training_args,
tokenizer=self._tokenizer,
data_collator=self._data_collator,
train_dataset=self._train,
eval_dataset = self._test,
)
print("Devices used are:")
print(training_args.device)
```
## Expected behavior
I was under the impression that `training_args.device` should return cuda:1,8 or something along those lines, but it still reverted back to cuda:0. Are there any arguments I could specify to select particular devices?
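As a quick sanity check (a hedged sketch, based on how CUDA treats `CUDA_VISIBLE_DEVICES`: the visible GPUs are re-indexed from 0 inside the process, so `cuda:0` would refer to physical GPU 1 here):
```
import torch

print(torch.cuda.device_count())      # expected: 2 with CUDA_VISIBLE_DEVICES=1,8
print(torch.cuda.get_device_name(0))  # re-indexed handle for physical GPU 1
print(torch.cuda.get_device_name(1))  # re-indexed handle for physical GPU 8
```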
Thank you in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12066/comments | https://api.github.com/repos/huggingface/transformers/issues/12066/events | https://github.com/huggingface/transformers/pull/12066 | 914,631,915 | MDExOlB1bGxSZXF1ZXN0NjY0NTMwNTY1 | 12,066 | Fix LUKE integration tests | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes the (slow) integration tests of LUKE. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12066",
"html_url": "https://github.com/huggingface/transformers/pull/12066",
"diff_url": "https://github.com/huggingface/transformers/pull/12066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12066.patch",
"merged_at": 1623144098000
} |
https://api.github.com/repos/huggingface/transformers/issues/12065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12065/comments | https://api.github.com/repos/huggingface/transformers/issues/12065/events | https://github.com/huggingface/transformers/issues/12065 | 914,585,740 | MDU6SXNzdWU5MTQ1ODU3NDA= | 12,065 | How can we predict story future based on past events? | {
"login": "krigeta",
"id": 75309361,
"node_id": "MDQ6VXNlcjc1MzA5MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/75309361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krigeta",
"html_url": "https://github.com/krigeta",
"followers_url": "https://api.github.com/users/krigeta/followers",
"following_url": "https://api.github.com/users/krigeta/following{/other_user}",
"gists_url": "https://api.github.com/users/krigeta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krigeta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krigeta/subscriptions",
"organizations_url": "https://api.github.com/users/krigeta/orgs",
"repos_url": "https://api.github.com/users/krigeta/repos",
"events_url": "https://api.github.com/users/krigeta/events{/privacy}",
"received_events_url": "https://api.github.com/users/krigeta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nPlease ask this question on the [forum](https://discuss.huggingface.co/), rather than here. Github issues are mostly for bugs/feature requests.\r\n\r\nThanks.",
"Okay I will hop there",
"Hello @NielsRogge, I posted there but I think the forum is not so resposive. so it would be great if you help me with this request.\r\n\r\nPlease.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | Hello, is it possible to predict story future events on the basis of past events using transformer? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12065/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12064/comments | https://api.github.com/repos/huggingface/transformers/issues/12064/events | https://github.com/huggingface/transformers/issues/12064 | 914,473,179 | MDU6SXNzdWU5MTQ0NzMxNzk= | 12,064 | ImportError: cannot ipmort name 'TFAutoModel' | {
"login": "Holy-Shine",
"id": 14997709,
"node_id": "MDQ6VXNlcjE0OTk3NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/14997709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Holy-Shine",
"html_url": "https://github.com/Holy-Shine",
"followers_url": "https://api.github.com/users/Holy-Shine/followers",
"following_url": "https://api.github.com/users/Holy-Shine/following{/other_user}",
"gists_url": "https://api.github.com/users/Holy-Shine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Holy-Shine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Holy-Shine/subscriptions",
"organizations_url": "https://api.github.com/users/Holy-Shine/orgs",
"repos_url": "https://api.github.com/users/Holy-Shine/repos",
"events_url": "https://api.github.com/users/Holy-Shine/events{/privacy}",
"received_events_url": "https://api.github.com/users/Holy-Shine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | NONE | null | transformers: 4.6.1
tensorflow-gpu: 2.0.0
When I wrote the code below in my Jupyter notebook:
`from transformers import TFAutoModel`
I got an ImportError:
> **ImportError: cannot import name 'TFAutoModel'**
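A quick sanity check (hedged: recent `transformers` releases declare a TensorFlow >= 2.3 requirement, so `tensorflow-gpu` 2.0.0 is likely too old for the TF classes to be usable):
```python
import tensorflow as tf
import transformers
from transformers.file_utils import is_tf_available

print(tf.__version__, transformers.__version__)
print(is_tf_available())  # if this is False, upgrading TensorFlow is the first thing to try
```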
I wonder what's wrong with my code or dev environment. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12064/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12063/comments | https://api.github.com/repos/huggingface/transformers/issues/12063/events | https://github.com/huggingface/transformers/pull/12063 | 914,457,997 | MDExOlB1bGxSZXF1ZXN0NjY0MzcxMjcx | 12,063 | Fix tapas issue | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12060
However, the (slow) integration tests of TAPAS that use relative position embeddings are failing for me locally, most likely due to the new version of the [torch-scatter](https://github.com/rusty1s/pytorch_scatter) dependency. I'll look into that.
Update: just tested the models in Google Colab (which has `torch 1.8.1+cu101`). Everything seems to work fine there. However, when running locally on `torch 1.8.1+cu111`, I'm getting entirely different logits/hidden states. Both are using `torch-scatter 2.7.0`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12063",
"html_url": "https://github.com/huggingface/transformers/pull/12063",
"diff_url": "https://github.com/huggingface/transformers/pull/12063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12063.patch",
"merged_at": 1623144151000
} |
https://api.github.com/repos/huggingface/transformers/issues/12062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12062/comments | https://api.github.com/repos/huggingface/transformers/issues/12062/events | https://github.com/huggingface/transformers/issues/12062 | 914,430,037 | MDU6SXNzdWU5MTQ0MzAwMzc= | 12,062 | fp16 models getting auto converted to fp32 in .from_pretrained() | {
"login": "asit2898",
"id": 51470339,
"node_id": "MDQ6VXNlcjUxNDcwMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/51470339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asit2898",
"html_url": "https://github.com/asit2898",
"followers_url": "https://api.github.com/users/asit2898/followers",
"following_url": "https://api.github.com/users/asit2898/following{/other_user}",
"gists_url": "https://api.github.com/users/asit2898/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asit2898/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asit2898/subscriptions",
"organizations_url": "https://api.github.com/users/asit2898/orgs",
"repos_url": "https://api.github.com/users/asit2898/repos",
"events_url": "https://api.github.com/users/asit2898/events{/privacy}",
"received_events_url": "https://api.github.com/users/asit2898/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @stas00 ",
"Oh, do you mean that your model was already in fp16 to start with? This combination I haven't tried yet. \r\n\r\nFirst when reporting Deepspeed problems please always share the deepspeed config file and the TrainingArguments.\r\n\r\nand then we can look at sorting it out. \r\n",
"Yes, the saved model was already in fp16. Apologies, here are the needed files:\r\n\r\nA) DeepSpeed config file:\r\n```json\r\n{\"zero_allow_untested_optimizer\": true,\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\":3e-5,\r\n \"betas\": [\r\n 0.9,\r\n 0.999\r\n ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 3e-7\r\n }\r\n },\r\n\"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 0,\r\n \"warmup_max_lr\": 3e-5,\r\n \"warmup_num_steps\": 500\r\n }\r\n },\r\n\"train_batch_size\": 24,\r\n\"fp16\": {\r\n \"enabled\": true,\r\n \"loss_scale\": 0,\r\n \"initial_scale_power\": 16\r\n }\r\n\r\n}\r\n\r\n```\r\nB) Training Arguments:\r\n```python\r\nTrainingArguments(output_dir=/data/dps_finetune_16_wikitext, overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.STEPS, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Jun08_18-02-30_jp3-g-31374-37031-i-2p4p2, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=100, save_total_limit=5, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=10, dataloader_num_workers=0, past_index=-1, run_name=/data/dps_finetune_16_wikitext, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=/data/config_fine_tune_bert.json, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['mlflow', 'tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=False, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, _n_gpu=1, mp_parameters=)\r\n```\r\nfp16 is set to False. 
I have also tried with fp16=True but no difference in behaviour was observed.\r\n\r\nI also tested by loading the saved fp16 state_dict separately using torch.load() and then used it to initialize the BertForMaskedLM as follows:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BertConfig\r\n\r\nstate_dict = torch.load(model_path+ \"pytorch_model.bin\")\r\nconfig = BertConfig.from_json_file(model_path+ \"config.json\")\r\nmodel = BertForMaskedLM.from_pretrained(None,config = config, state_dict = a)\r\nmodel.dtype\r\n```\r\nmodel.dtype still outputs torch.float32.\r\n\r\nThe config.json file above (saved model's config file) is as follows:\r\n```json\r\n{\r\n \"_name_or_path\": \"/data/bert-base-cased/\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.6.1\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 28996\r\n}\r\n\r\n```\r\nThe _name_or_path points to the location of the pre-finetuning fp32 model. However, changing its value to the post-finetuning fp16 model also does not lead to any change in model.dtype output. Please let me know if there are any checks I could run or files I could provide. \r\nThanks!",
"Thank you for sharing these details. So indeed this looks like a case I haven't run into and this is not an integration issue.\r\n\r\nSo under zero3 `from_pretrained` calls `zero.Init()` which prepares the model for deepspeed's stage 3 work and it also gathers/scatters the model pieces across the gpus during state_dict loading. So this is the doing of one of these 2. But they are needed in order to use the deepspeed optimizer which works either in fp32 or mixed precision mode - Deepspeeds's `fp16.enabled` == mixed precision. They currently don't have fp16 non-mixed precision mode as far as I know. But clearly there is a need for that.\r\n\r\nMost likely this is something Deepspeed core will have to solve. This use case is probably new to them too.\r\n\r\nSo please kindly use https://github.com/microsoft/DeepSpeed/issues/new to post the same details (Edit -> Copy-n-Paste) there.\r\nand please tag me so that I could track the outcome and adjust things if need be in our side.\r\n\r\nThank you, @asit2898 \r\n",
"Hi @asit2898 , thanks for reporting your issue. I can help look at things from DeepSpeed's side.\r\n\r\nWas the model fine-tuned with ZeRO enabled? From the DS config above it seems not, unless it is enabled somewhere on the HF side of things.\r\n\r\n@stas00 , does the `from_pretrained` codepath go through DeepSpeed's `load_checkpoint()`, or is the checkpoint logic all on HF's side?\r\n\r\nTo start, I did a quick experiment with DeepSpeed (without ZeRO) and examined model parameter dtypes before and after `deepspeed.initialize()`. So far I haven't reproduced the issue:\r\n\r\n- When FP16 is *not* enabled, the model's dtype is unchanged (eg., fp32 stays fp32 and fp16 stays fp16).\r\n- When fp16 *is* enabled, the model weights are fp16 after `deepspeed.initialize()` no matter the initial dtype of fp32 or fp16.",
"\r\n> @stas00 , does the `from_pretrained` codepath go through DeepSpeed's `load_checkpoint()`, or is the checkpoint logic all on HF's side?\r\n\r\nAs posted above `from_pretrained` \r\n\r\nSo under zero3 from_pretrained:\r\n1. calls zero.Init() which prepares the model for deepspeed's stage 3 work and \r\n2. it also gathers/scatters the model pieces across the gpus during state_dict loading. \r\n\r\n> I did a quick experiment with DeepSpeed (without ZeRO)\r\n\r\nThe key is zero3. `from_pretrained` doesn't do anything deepspeed-wise unless it's zero3.",
"@ShadenSmith @stas00 Thanks for the replies! I did not enable any stage of ZeRO and just ran DeepSpeed using pure data parallelism. \r\nThe saved model was in fp16 at the end of DeepSpeed finetuning using HG Trainer which I think is in accordance with the experiments you carried out... \r\n\r\n It is only after I load the saved model using .from_pretrained() method that the weights get auto-converted to 32 bits... \r\n\r\nI am not very familiar with HG source code, but given that .from_pretrained() takes only the state_dict and model configuration as arguments, especially in the following case that I mentioned:\r\n```python\r\nimport torch\r\nfrom transformers import BertConfig\r\n\r\nstate_dict = torch.load(model_path+ \"pytorch_model.bin\")\r\nconfig = BertConfig.from_json_file(model_path+ \"config.json\")\r\nmodel = BertForMaskedLM.from_pretrained(None,config = config, state_dict = a)\r\nmodel.dtype\r\n```\r\nThe HG object behaviour should be independent of whether or not the model was trained on DeepSpeed right :thinking:\r\nLet me know if there are any experiments that can help isolate the effects of DeepSpeed from those of HG.\r\n\r\n",
"Thanks for the clarification @asit2898 / @stas00 .\r\n\r\n@stas00 , I don't yet understand the conclusion that the issue is in core DeepSpeed. Since ZeRO-3 is not enabled, is HF expecting the `Init()` to do something else? It should just be a no-op so long as Z3 is not enabled. Is the expectation on HF's side that there are fp32 weights that should be converted to fp16 in this instance? Or is the thought that `Init()` is still executing, and the weights are bumped to fp32 there when scattering?\r\n\r\nThe only model dtype transformations that we should be making are converting to FP16 when that is enabled. This issue is going in the opposite direction and I am not sure where the FP32 conversion would happen.",
"OK, Let me try to reproduce this first and then it'd be much easier to discuss this further.\r\n\r\nfor some reason I was under the impression that zero3 was enabled! but reviewing the config posted by @asit2898 it's not.\r\n\r\nI will make an fp16 model, try to reproduce the problem and then follow up.",
"OK, this doesn't seem to have anything to do with Deepspeed.\r\n\r\nObserve:\r\n```\r\nimport torch\r\nfrom transformers import BertForMaskedLM\r\n\r\nmname = \"prajjwal1/bert-tiny\"\r\nmodel = BertForMaskedLM.from_pretrained(mname)\r\nmodel = model.half()\r\nprint(model.dtype)\r\n\r\nmodel_path = \"/tmp/bert-fp16\"\r\nmodel.save_pretrained(model_path)\r\n\r\nmodel = BertForMaskedLM.from_pretrained(model_path)\r\nprint(model.dtype)\r\n```\r\nprints:\r\n```\r\ntorch.float16\r\ntorch.float32\r\n```\r\n\r\nI will look next at why this bug is happening.\r\n",
"OK, so it's not even `transformers`, it's pytorch that does that in `load_state_dict` https://github.com/pytorch/pytorch/issues/39428\r\n\r\nHere is a standalone torch example:\r\n```\r\nimport torch\r\nfrom torch import nn\r\n\r\nmodel = nn.Linear(1,1)\r\nmodel = model.half()\r\nprint(model.weight.dtype)\r\ntorch.save(model.state_dict(), 'model.pkl')\r\n\r\nmodel = nn.Linear(1,1)\r\nmodel.load_state_dict(torch.load('model.pkl'))\r\nprint(model.weight.dtype)\r\n```\r\nprints\r\n```\r\ntorch.float16\r\ntorch.float32\r\n```\r\n\r\n\r\n",
"Thinking more about it I think `load_state_dict` does the right thing. It adjusts the weights to the dtype of the model.\r\n\r\nSince the user can't access the model until after `from_pretrained` they have no chance to choose its dtype.\r\n\r\n1. So one possible solution here is to add an optional `dtype` arg to `from_pretrained` and if it's passed, do:\r\n```\r\nmodel.to(dtype=dtype)\r\n```\r\n\r\nas soon as it's instantiated.\r\n\r\n2. An alternative approach is to sample the weight's dtype and convert the model automatically to that type. Is it ever possible that the weights could be of different dtype? If not this might be the transparent solution.\r\n\r\nOf course, the user could do `model.half()` immediately after `from_pretrained` but the problem is that it will require 2x RAM which the user might not have, so the switching should occur before weights loading.\r\n\r\n@sgugger, @LysandreJik, @patrickvonplaten - what do you think?",
"I'm okay with having a `dtype` argument to `from_pretrained`, personally.",
"I edited just now to offer an automatic detection. item 2.",
"@asit2898, until we sort it out please use `model.half()` after `from_pretrained` as a workaround.",
"I'm fine with having a `dtype` argument to `from_pretrained` as well, and if possible an automatic detection would be even better. \r\n\r\nI would also be fine with a configuration attribute that would identify between fp32/fp16/bfloat16, as users have been surprised in the past that models weighing ~500mb on the hub ended up taking up much more RAM and much more disk space on their machines in the past (automatic detection would be better than having another configuration attribute).",
"Ah yes, this is definitely something that could be stored in the configuration!",
"Which also connects to my proposal from 2 months ago: https://github.com/huggingface/transformers/issues/11209, though it's slightly different since a model could be pre-trained in mixed precision and saved in fp32.\r\n\r\nThe thing is - if you have the weights of the model, it doesn't take long to get the dtype of the tensors it contains in its saved `state_dict` (pytorch) - One question - is it guaranteed they are always of the same dtype and it's enough to check one of them, or should all be checked and the highest be used if there are mixed?\r\n\r\n\r\n",
"Specific discussion on auto-detection:\r\n\r\nTo do auto-detecting `torch.load()` needs to be moved before model instantiating.\r\nThen we need to set default dtype,\r\nhttps://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html\r\n\r\nSo the protocol would be:\r\n\r\n1. torch.load (which would need to be moved up) or use `state_dict` if it was passed to `from_pretrained`\r\n2. read one (all?) dtypes of the weights \r\n3. set `torch.set_default_tensor_type(dtype)`\r\n4. instantiate the model\r\n5. restore `torch.set_default_tensor_type` to its previous value (so could be context manager)\r\n6. `_load_from_state_dict`\r\n",
"And if we choose to implement this for pytorch what do we do with tf and flax?",
"@stas00 Thanks a lot for addressing the issue! I really did not expect the issue to lie in the way PyTorch loads the model. I'll continue using model.half() and would be happy to help in any way I can...",
"@Rocketknight1, do you have an idea of how that is/would be managed with TensorFlow?\r\n\r\n@patrickvonplaten @patil-suraj, do you have an idea of how that is/would be managed with JAX/Flax?",
"@LysandreJik Keras is quite opinionated about this - it has plenty of support for mixed-precision training (like PyTorch AMP) using a `Policy` object but I don't know of too many people doing true full float16/bfloat16 training, and I think you'd have to do that in pure TF or use some backend functions like `K.set_floatx`. I also think it has weird side-effects and breaks some layers.",
"Looks like we lost momentum on this one.\r\n\r\nPlease answer the following 2 questions with 1x and 2x (e.g. 1c, 2b - multiple versions are ok too if you're flexible)\r\n\r\n1. dtype setting mechanism:\r\na. do we autodiscover the dtype from the state_dict\r\nb. do we pass an explicit `dtype` argument to `from_pretrained`\r\nc. a+b - with the `dtype` argument overriding autodiscovery\r\nd. using model config attribute - need to change save_pretrained to save this attribute\r\ne. a+d - with d overriding autodiscovery\r\n\r\n2. Scope of the solution:\r\na. do we try to solve this for all 3 frameworks, \r\nb. just pytorch for now - will be documented as such\r\n\r\nThank you!\r\n\r\np.s. if we add `from_pretrained(..., dtype)` should we do the same for `from_config(..., dtype)` so that the behavior is the same?",
"I'd vote for 1a, overridden by a configuration attribute (1d?) rather than the `from_pretrained` argument, and 2b.",
"Agreed with Lysandre: using a config attribute (which defaults to None or \"auto\") and switch back to the autodiscovery if this attribute is not set to a specific value. ",
"**update**: added 1d and 1e options as proposed.\r\n\r\nSo if we go with 1e - `from_config` is then just 1d, right? since there is no model to do autodiscovery from.\r\n\r\nQuestion: could it be possible that the model will have some weights that use a different dtype than the rest of the model?",
"Yes, `from_config` uses just 1d.\r\nFor your question, I'm not aware of such a situation existing.",
"@asit2898, please give a try to this PR https://github.com/huggingface/transformers/pull/12316 - it should do the right thing automatically as requested.",
"@asit2898, the PR is almost done, and once merged you will need to use one of:\r\n```\r\n model = T5ForConditionalGeneration.from_pretrained(\"t5\", torch_dtype=torch.float16)\r\n model = T5ForConditionalGeneration.from_pretrained(\"t5\", torch_dtype=\"auto\")\r\n```\r\nto meet your needs."
] | 1,623 | 1,687 | 1,624 | NONE | null | **stas00 edited**: this Issue has nothing to do with Deepspeed, but pure `transformers`
---------------------
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0+cu92 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (not essential)
- Using distributed or parallel set-up in script?: Yes (not essential)
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): BertForMaskedLM
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
Masked LM
## To reproduce
Steps to reproduce the behavior:
1. Finetune a 16-bit low precision BertForMaskedLM model on any dataset using DeepSpeed and Trainer
2. Load the model and check the dtype using:
```python
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained(tokenizer_path)
model = BertForMaskedLM.from_pretrained(model_path)
print(model.dtype)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Outputs torch.float32 instead of the expected torch.float16. I was able to recover the original weights using model.half().
I think it would be helpful to highlight this behaviour of forced autoconversion either as a warning or in the from_pretrained() method's documentation, or to provide an additional argument to help retain fp16 weights. I am willing to pick this issue up. Please let me know what would be the most appropriate fix.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12062/timeline | completed | null | null |
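A minimal sketch of the workaround and of the eventual API discussed in the issue above. `model_path` is a hypothetical path to the fp16 checkpoint saved by the Trainer, and the `torch_dtype` argument is only available in `transformers` releases that include PR #12316:

```python
import torch
from transformers import BertForMaskedLM

model_path = "/path/to/fp16-checkpoint"  # hypothetical location of the saved fp16 model

# Workaround mentioned in the thread: the freshly instantiated model is fp32, so the
# loaded weights come back as fp32; cast back down to fp16 afterwards. Note this
# transiently needs roughly 2x the fp16 model's memory.
model = BertForMaskedLM.from_pretrained(model_path)
model = model.half()
print(model.dtype)  # torch.float16

# With a release that includes PR #12316, the dtype can be requested at load time:
# model = BertForMaskedLM.from_pretrained(model_path, torch_dtype=torch.float16)
# model = BertForMaskedLM.from_pretrained(model_path, torch_dtype="auto")
```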
https://api.github.com/repos/huggingface/transformers/issues/12061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12061/comments | https://api.github.com/repos/huggingface/transformers/issues/12061/events | https://github.com/huggingface/transformers/issues/12061 | 914,252,830 | MDU6SXNzdWU5MTQyNTI4MzA= | 12,061 | [testing] making network tests more reliable | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, I think that can help, we have similar issues in the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) repository. \r\n\r\nI'm wondering if these issues don't come from the fact that these tests are very quick to run, therefore bursting the server which has issues handling all requests. It also happens with tokenizers which also run fast, but not with models.\r\n\r\nIf that's the case then a `time.sleep(3)` would work, but spreading the tests so that they're not run sequentially could also work. \r\n\r\ncc @julien-c ",
"From what I'm observing this issue doesn't happen anymore - should we close the issue and reopen if the network failures reappear at some point?",
"Sounds good, @LysandreJik ",
"OK, it's happening again, \r\n\r\n```\r\n2021-09-28T00:56:00.8216138Z 502 Server Error: Bad Gateway for url: https://huggingface.co/patrickvonplaten/t5-tiny-random/resolve/main/config.json\r\n2021-09-28T00:56:00.8217204Z ___________ TestDeepSpeedWithLauncher.test_do_eval_no_train_1_zero3 ____________\r\n```\r\n\r\nOur deepspeed integration tests are now integrated into the Deepspeed core CI and they report these failures.\r\n\r\nYou can see other HF projects reporting this issue as well:\r\ne.g. see this thread: https://huggingface.slack.com/archives/C01BWJU0YKW/p1632819750394400\r\n\r\nI wonder if we should somehow have a way not only to retry the download but gracefully recover and most lilkely having a special setting in our test suite that when network failure occurs despite the retries the test skips rather than fails - we won't use that on our CI but for external use it'd be important not to interfere with their testing needs.",
"Reopening this since this is a problem.\r\n\r\ne.g. our deepspeed section of tests run on the Deepspeed CI intermittently fails to fetch files from the hub.\r\n\r\n```\r\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: \r\nhttps://huggingface.co/sshleifer/tiny-gpt2/resolve/main/config.json\r\n```\r\n\r\nwhich impacts their CI.\r\n\r\nI think perhaps we need a retry mechanism in the core of the network fetches and not put the burden on the tests.\r\n\r\n@LysandreJik ",
"Yes! How would you like to tackle this? With a retry on each test, with a small pause?\r\nI wonder how best to handle it, given that chaining that test with no pause would probably result in the same issue happening over and over again, repeatedly, while putting a pause might significantly slow the test suite down.\r\n\r\nDo you have any ideas regarding how to solve this best?",
"I believe the retry mechanism should be part of the download API, since that's the unreliable link in the chain.\r\n\r\nI propose to have new arguments in the download API with sensible defaults:\r\n- `try_times=3` - how many times to try before giving up\r\n- `try_sleep_secs=1` - how long to sleep between trying again\r\n\r\nWith these defaults the longest delay is 2 seconds, which is probably not an issue for the test suite. Especially if we cache downloads.\r\n\r\nIf it can't download after 3 tries then if the client is OK then the server is at fault and it needs a higher capacity/scalability to handle a high request rate.\r\n\r\n",
"That sounds good, even if I'm a bit afraid that retrying in succession won't solve much. When a test fails for server error, then usually other tests fail. I'm still open to trying it out to see if it improves these errors!\r\n\r\nWould you like to give it a try? I'm guessing only this method needs to be modified: https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/file_utils.py#L1594\r\n\r\ncc @julien-c as this is a safeguard against the server's instabilities.",
"BTW @LysandreJik i think we should soon switch from `file_utils` to `huggingface_hub` no?\r\n\r\nnone of this is really transformers-specific?",
"Indeed, some of the logic could be upstreamed in `huggingface_hub` (was pushing this back as I'm a fervent believer of \"if it ain't broke, don't fix it\", especially for such a core component of the library which doesn't need to evolve much)",
"yes, same feeling. However i think we should try to prevent the two codebases from diverging too much since initially the code was extracted from transformers anyways\r\n\r\n(failure retry is an example of a quite big change, for instance)\r\n\r\nMaybe if we do this, an option would be to upstream the same change to huggingface_hub then?",
"Yes, that sounds like a good plan. We've started moving some methods (like `HfApi`) to `huggingface_hub` anyway, so for iso-behavior methods, I'm fine to move them in `huggingface_hub` sooner rather than later.\r\n\r\nLet's go with the retry option first in `transformers`, and I'll take the opportunity to upstream it in `huggingface_hub` once we have settled on the logic and it is merged in `transformers`.",
"As @sgugger mentions offline, this issue also appears in the push to hub methods (403 errors, 500 errors), so maybe adding a retry option there for testing would make sense as well",
"> That sounds good, even if I'm a bit afraid that retrying in succession won't solve much. When a test fails for server error, then usually other tests fail. I'm still open to trying it out to see if it improves these errors!\r\n\r\nShould these incidents (repetitive failures) be also logged or does someone review server logs to ensure that these failures aren't indicative of an issue with the server?\r\n\r\nWe need to have a clear distinction between a failure due to network transport issues vs server's inability to cope with the traffic. If the server is overloaded, then of course re-trying won't help. But then we need to fix the server not to be overloaded.",
"FWIW, this issue continues on our CI:\r\n```\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.16.1/metrics/sacrebleu/sacrebleu.py\r\n```\r\n\r\n",
"Do you have a link for `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.16.1/metrics/sacrebleu/sacrebleu.py\r\n`? \r\n\r\ncc @lhoestq ",
"Oh, it was just giving an example of an intermittent failure on our CI. It was fine when CI restarted.\r\n\r\nSo with re-try it could have been avoided. Since all other files were fetched or looked up just fine.",
"Hi ! If it can help, note that in `datasets` we've already added a retry mechanism in [file_utils.py](https://github.com/huggingface/datasets/blob/16f562b381a9e2ad1934b82ffcd6ea1695b6d74e/src/datasets/utils/file_utils.py#L378-L387)",
"@lhoestq, I didn't follow all the paths, but it appears that `max_retries` is either 0 or 1 almost everywhere in `datasets` unless the user overrides it. Unless you believe a single retry is sufficient.\r\n\r\nBut, yes, this is what we want in transformers! Thank you!",
"Additionally, I'm thinking this. Most of the time on set ups like CI or a developer's box most of the datasets and transformers files have already been cached.\r\n\r\nWould it make sense to check that if\r\n1. there is a failure to fetch a model or a dataset or a support file \r\n2. and there is already a cached version of the same \r\n\r\nto simply switch to using a local file and tell the user that this was done?\r\n\r\nI believe this is an even more realistic use case and will 10-100x reduce the amount of failures due to network issues.\r\n\r\nIf you agree I'd use the following algorithm:\r\n\r\n1. try to fetch the file \r\n2. look up local cache\r\n3. retry to fetch the file\r\n4. retry to fetch the file\r\n5. assert with: bailing after re-tried 3 times and no local version found cached\r\n\r\nwith each step being needed only if the previous fails.\r\n",
"Here is another example of CI intermittent failure which could have been re-tried and not fail the whole CI:\r\n\r\n```\r\nE requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/api/models/facebook/mbart-large-50-one-to-many-mmt\r\n```\r\n\r\nSource: \r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/31002/workflows/327de938-0361-420e-abb5-c35d45bca5bb/jobs/318450\r\n",
"I'm all for a retry mechanism, especially given the recent errors we've been seeing in the CI.\r\n\r\nRegarding the fetched files, I'd rather we keep it the same as it is right now: we have a `local_files_only` keyword that enables fetching from the local folder. With this argument, we have this option as an opt-in, rather than as a behind-the-scenes method, which I think is important.\r\n\r\nOtherwise, the user might use `from_pretrained` to fetch the latest version of a repository, and the version fetched could actually be the latest they have on their setup, which is a small (but potentially impactful) breaking change.\r\n\r\n~Would you have some bandwidth to implement the retry mechanism?~ I should have a bit of time to tackle this by the time I'm back from holidays. ",
"We can't use `local_files_only` on CI since then we will miss updated remote data.\r\n\r\nI agree with your discussion of the user-mode.\r\n\r\nHere are a few more extensions to my proposal:\r\n\r\n1. we can differentiate between CI-mode and user-mode. in CI-mode (env var activated) we can use the algo suggested in https://github.com/huggingface/transformers/issues/12061#issuecomment-987448258 \r\n\r\n2. In parallel I think there is a difference when we get a 50x and 40x response. Regardless of CI-mode or not, a 40x is a client error and should not try to use a local cache. 50x is a server error and thus a local cache should be used.\r\n\r\nWith the caveat for non-public repos where codes are obscure not to expose the private repo layout and the un-authenticated user always get 40x regardless of true path, but I think this falls neatly into the 40x group anyway - a client error.\r\n\r\nSo here an updated algo:\r\n\r\n```\r\n\r\n# in this algo a successful \"fetch the file from online or cache\" exits the algo.\r\n\r\nIf env[\"TRANSFORMERS_CI_CACHE_RETRY_ON_500\"]:\r\n\r\n 1. try to fetch the file \r\n 2. if 50x: look up local cache\r\n else: fail\r\n 3. if not in cache: sleep and retry to fetch the file\r\n 4. if 50x: sleep and retry to fetch the file\r\n else: fail\r\n 5. assert with: bailing after re-tried 3 times and no local version found cached\r\n \r\nelse: # normal mode\r\n \r\n 1. try to fetch the file\r\n 2. do nothing\r\n 3. if 50x: sleep and retry to fetch the file\r\n else: fail\r\n 4. if 50x: sleep and retry to fetch the file\r\n else: fail\r\n 5. assert with: bailing after re-tried 3 times \r\n```\r\n\r\nThe 2 halves are almost the same with the only addition of cache lookup in the CI-mode for step 2. Hence the do nothing 2nd step in the 2nd half.\r\n\r\nWhat do you think?\r\n\r\nand of course the same should apply to `datasets` and `transformers`",
"Thank you for handling the github bot - would love to make time for this this or next week."
] | 1,623 | 1,642 | null | CONTRIBUTOR | null | We have a group of tests that require a reliable network, which is never 100% so they fail for many months.
I propose that those tests will be rewritten with unstable network in mind and include:
1. `time.sleep(3)`
2. retry 3-5 times
e.g. one of the candidates is:
`tests/test_hf_api.py::HfApiEndpointsTest::test_list_repos_objs`
but also recent tests that push to hub.
Perhaps a simple retry context manager can be added to `testing_utils.py`, which would trap exceptions and retry after a pause. And then simply wrap the content of existing tests into that context manager, e.g.:
```
with RetryAfterSleepTest():
# normal test code
```
it could accept the number of retries and sleep time between retries for optional arguments.
Of course, it's probably even better to also make it a decorator, e.g. `@unreliable_network_retry`
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12061/timeline | null | null | null |
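A rough sketch of the retry decorator proposed in the issue above. The name and location are hypothetical (the issue suggests `testing_utils.py` and a name like `@unreliable_network_retry`); this is not the actual `transformers` testing API:

```python
import functools
import time


def unreliable_network_retry(retries=3, sleep_secs=3, exceptions=(OSError,)):
    """Hypothetical sketch: re-run a flaky network-dependent test a few times,
    sleeping between attempts, before letting the failure propagate."""

    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return test_fn(*args, **kwargs)
                except exceptions:
                    if attempt == retries - 1:
                        raise
                    time.sleep(sleep_secs)

        return wrapper

    return decorator


# Usage sketch - requests' exceptions subclass OSError, so a 50x raised by
# response.raise_for_status() inside the test would be retried here:
# @unreliable_network_retry(retries=5, sleep_secs=3)
# def test_list_repos_objs(self):
#     ...
```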
https://api.github.com/repos/huggingface/transformers/issues/12060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12060/comments | https://api.github.com/repos/huggingface/transformers/issues/12060/events | https://github.com/huggingface/transformers/issues/12060 | 914,244,111 | MDU6SXNzdWU5MTQyNDQxMTE= | 12,060 | [skipped test] to fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/pull/12059 skipped failing: `tests/test_modeling_tapas.py::TapasUtilitiesTest::test_reduce_sum_vectorized`
This issue is to track its resolution so that it won't be forgotten.
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12059/comments | https://api.github.com/repos/huggingface/transformers/issues/12059/events | https://github.com/huggingface/transformers/pull/12059 | 914,242,711 | MDExOlB1bGxSZXF1ZXN0NjY0MTc0NTcy | 12,059 | [CI] skip failing test | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | skipping a consistently failing test that breaks CI
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12059",
"html_url": "https://github.com/huggingface/transformers/pull/12059",
"diff_url": "https://github.com/huggingface/transformers/pull/12059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12059.patch",
"merged_at": 1623124121000
} |
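For illustration only, a sketch of how the consistently failing test mentioned in the PR above could be skipped; the actual change may have used a different mechanism (e.g. pytest markers or transformers' own skip helpers):

```python
import unittest


class TapasUtilitiesTest(unittest.TestCase):
    @unittest.skip("Consistently failing on CI - tracked in issue #12060")
    def test_reduce_sum_vectorized(self):
        ...
```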
https://api.github.com/repos/huggingface/transformers/issues/12058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12058/comments | https://api.github.com/repos/huggingface/transformers/issues/12058/events | https://github.com/huggingface/transformers/pull/12058 | 914,096,634 | MDExOlB1bGxSZXF1ZXN0NjY0MDQwMzIx | 12,058 | [Deepspeed] various fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"That's a good call, Sylvain. I think the deprecation warning has been showing up long enough that I didn't bother checking. But I did check now and all is good it was done before deepspeed==0.3.16 was released ([commit](https://github.com/microsoft/DeepSpeed/commit/0d4a54a04d658db40a120bc10c6f1f1a4478f6f1)) and I retested with 0.3.16 just to be sure."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | This PR includes a few small fixes in config files, tests and docs:
- replace deprecated config `cpu_offload` with `offload_optimizer`
- `sub_group_size` setting was too big - needing too much GPU RAM
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12058/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12058",
"html_url": "https://github.com/huggingface/transformers/pull/12058",
"diff_url": "https://github.com/huggingface/transformers/pull/12058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12058.patch",
"merged_at": 1623166575000
} |
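An illustrative before/after of the ZeRO-3 section of a ds_config.json affected by the two fixes above, written as Python dicts; the exact values are assumptions, not necessarily the ones from the PR:

```python
# Deprecated form: boolean `cpu_offload` and an overly large `sub_group_size`.
zero3_before = {
    "zero_optimization": {
        "stage": 3,
        "cpu_offload": True,       # deprecated key
        "sub_group_size": 1e14,    # too big - needs too much GPU RAM
    }
}

# Updated form: an `offload_optimizer` block and a smaller `sub_group_size`.
zero3_after = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "sub_group_size": 1e9,
    }
}
```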
https://api.github.com/repos/huggingface/transformers/issues/12057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12057/comments | https://api.github.com/repos/huggingface/transformers/issues/12057/events | https://github.com/huggingface/transformers/pull/12057 | 914,048,967 | MDExOlB1bGxSZXF1ZXN0NjYzOTk2NjU1 | 12,057 | adds metric prefix. | {
"login": "riklopfer",
"id": 413300,
"node_id": "MDQ6VXNlcjQxMzMwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/413300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riklopfer",
"html_url": "https://github.com/riklopfer",
"followers_url": "https://api.github.com/users/riklopfer/followers",
"following_url": "https://api.github.com/users/riklopfer/following{/other_user}",
"gists_url": "https://api.github.com/users/riklopfer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riklopfer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riklopfer/subscriptions",
"organizations_url": "https://api.github.com/users/riklopfer/orgs",
"repos_url": "https://api.github.com/users/riklopfer/repos",
"events_url": "https://api.github.com/users/riklopfer/events{/privacy}",
"received_events_url": "https://api.github.com/users/riklopfer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You just need to tweak the test of this script in examples/pytorch/test_examples.py, since it's looking for f1 instead of eval_f1. Same for exact.",
"@sgugger, the example tests are fixed now, this other failure is a mystery to me. I suspect that failure is caused by another change. Let me know if you think the `run_torch_tests` failure is related to this PR and I'll look into it. ",
"No this failure is independent and currently being investigated, so we can merge this PR safely. Thanks again!"
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Adds metric prefix to metrics dict. This is needed for `metric_for_best_model` to function properly. See https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1516
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12057/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12057",
"html_url": "https://github.com/huggingface/transformers/pull/12057",
"diff_url": "https://github.com/huggingface/transformers/pull/12057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12057.patch",
"merged_at": 1623119650000
} |
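A small sketch of what the metric prefix in the PR above is about, with invented metric values; the Trainer looks up the best-model metric under an `eval_`-prefixed key in the evaluation metrics dict:

```python
# compute_metrics for the QA example returns something like this (values invented):
metrics = {"exact": 81.2, "f1": 88.6}

# The PR prefixes the keys so they match what the Trainer expects in its logs:
metrics = {f"eval_{k}": v for k, v in metrics.items()}
print(metrics)  # {'eval_exact': 81.2, 'eval_f1': 88.6}

# which lets a configuration like this find its metric:
# TrainingArguments(..., load_best_model_at_end=True, metric_for_best_model="eval_f1")
```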
https://api.github.com/repos/huggingface/transformers/issues/12056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12056/comments | https://api.github.com/repos/huggingface/transformers/issues/12056/events | https://github.com/huggingface/transformers/issues/12056 | 913,861,123 | MDU6SXNzdWU5MTM4NjExMjM= | 12,056 | [testing] set tests to not rebuild datasets | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"`datasets` has just made this feature disabled by default: https://github.com/huggingface/datasets/pull/2460\r\n\r\nSo nothing needs to be done."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | recently `datasets` created in memory datasets enabled by default - which is great for those who wants it but is a terrible idea for tests and those who need to develop things constantly restarting the scripts as datasets aren't being cached and rebuilt on every run.
So we should turn this feature off in `*/conftest.py` by setting:
```
HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0
```
But it's going to be renamed shortly to `HF_DATASETS_IN_MEMORY_MAX_SIZE`
https://github.com/huggingface/datasets/pull/2409#issuecomment-850549742
https://github.com/huggingface/datasets/pull/2454
So for now this issue is tracking that rename, and once it lands we will add the setting to the tests.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12056/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12056/timeline | completed | null | null |
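A sketch of what the proposed conftest.py change would have looked like, using the original env var name from the issue; as the closing comment notes, `datasets` ended up disabling the behaviour by default, so this was never needed:

```python
# conftest.py - must run before `datasets` is imported anywhere in the test suite.
import os

# 0 disables keeping datasets in memory, so they are cached on disk instead of
# being rebuilt on every test run. The variable was slated to be renamed to
# HF_DATASETS_IN_MEMORY_MAX_SIZE.
os.environ.setdefault("HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", "0")
```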
https://api.github.com/repos/huggingface/transformers/issues/12055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12055/comments | https://api.github.com/repos/huggingface/transformers/issues/12055/events | https://github.com/huggingface/transformers/issues/12055 | 913,543,128 | MDU6SXNzdWU5MTM1NDMxMjg= | 12,055 | Settings for perfect Story writing based on the input text? | {
"login": "krigeta",
"id": 75309361,
"node_id": "MDQ6VXNlcjc1MzA5MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/75309361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krigeta",
"html_url": "https://github.com/krigeta",
"followers_url": "https://api.github.com/users/krigeta/followers",
"following_url": "https://api.github.com/users/krigeta/following{/other_user}",
"gists_url": "https://api.github.com/users/krigeta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krigeta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krigeta/subscriptions",
"organizations_url": "https://api.github.com/users/krigeta/orgs",
"repos_url": "https://api.github.com/users/krigeta/repos",
"events_url": "https://api.github.com/users/krigeta/events{/privacy}",
"received_events_url": "https://api.github.com/users/krigeta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, sorry for the long answer time! Unfortunately, we favor using the [forum](https://discuss.huggingface.co) for questions like this where you're much more likely to get an answer than on Github Issues. If that's not already the case, do you mind opening a thread there?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,634 | 1,634 | NONE | null | Hello, thank you so much for this awesome project.
What settings do you suggest so by using those settings it seems like a story is continuing based on the previous story input as a text file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12055/timeline | completed | null | null |
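The question above never received a concrete answer in this thread; purely as a hedged starting point, these are the kinds of sampling settings commonly tuned for open-ended story continuation (the values are illustrative, not a recommendation from the maintainers, and `story.txt` is a hypothetical input file):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

story_so_far = open("story.txt").read()  # the story written so far
inputs = tokenizer(story_so_far, return_tensors="pt")

continuation = model.generate(
    **inputs,
    do_sample=True,                                 # sample instead of greedy decoding
    max_length=inputs["input_ids"].shape[1] + 200,  # generate ~200 new tokens
    top_p=0.9,
    temperature=0.8,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(continuation[0], skip_special_tokens=True))
```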
https://api.github.com/repos/huggingface/transformers/issues/12054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12054/comments | https://api.github.com/repos/huggingface/transformers/issues/12054/events | https://github.com/huggingface/transformers/issues/12054 | 913,438,571 | MDU6SXNzdWU5MTM0Mzg1NzE= | 12,054 | How to update the GPT2 with loss which are provided from another separate module? | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I guess you can do it like so:\r\n```\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\nmodel.train()\r\n# when generating, we will use the logits of right-most token to predict the next token\r\n# so the padding should be on the left\r\ntokenizer.padding_side = \"left\" \r\ntokenizer.pad_token = tokenizer.eos_token # to avoid an error\r\n\r\nprompts = [\"Hello, my dog is a little\", \"Hello, my dog is\"]\r\ninputs = tokenizer(prompts, padding=True, return_tensors=\"pt\")\r\n\r\noutput_sequences = model.generate(\r\n input_ids=inputs['input_ids'],\r\n attention_mask=inputs['attention_mask']\r\n)\r\n\r\nloss = black_box(output_sequences)\r\nloss.backward()\r\n```\r\n\r\nPlease note that the [forum](https://discuss.huggingface.co/) is a better place to ask questions, Github issues are mostly for bugs/feature requests.\r\n\r\nThanks.",
"> I guess you can do it like so:\r\n> \r\n> ```\r\n> from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n> model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n> \r\n> model.train()\r\n> # when generating, we will use the logits of right-most token to predict the next token\r\n> # so the padding should be on the left\r\n> tokenizer.padding_side = \"left\" \r\n> tokenizer.pad_token = tokenizer.eos_token # to avoid an error\r\n> \r\n> prompts = [\"Hello, my dog is a little\", \"Hello, my dog is\"]\r\n> inputs = tokenizer(prompts, padding=True, return_tensors=\"pt\")\r\n> \r\n> output_sequences = model.generate(\r\n> input_ids=inputs['input_ids'],\r\n> attention_mask=inputs['attention_mask']\r\n> )\r\n> \r\n> loss = black_box(output_sequences)\r\n> loss.backward()\r\n> ```\r\n> \r\n> Please note that the [forum](https://discuss.huggingface.co/) is a better place to ask questions, Github issues are mostly for bugs/feature requests.\r\n> \r\n> Thanks.\r\n\r\nthanks, very helpful. "
] | 1,623 | 1,623 | 1,623 | NONE | null | Suppose I have N prompts(sentences) for generation. They are fed into GPT2 and get the corresponding synthesis sentences.
And I have a separate black box which can return loss given these synthesis samples. The black box is just another component.
It is natural for every batch that GPT2 generate samples and get the loss, repeatedly.
What I want to do is use the loss from the black box to update the parameters of GPT2, at each batch.
The generation of GPT2 is quite simple, but how can I implement the idea of updating it with the loss?
Is there any example for doing this ?
Please give some thoughts, thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12054/timeline | completed | null | null |
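One caveat about the reply in the thread above: `generate()` returns integer token ids and runs without gradients, so a scalar score from an external black box cannot be backpropagated through it directly. A common workaround is a REINFORCE-style update on the log-probabilities of the sampled tokens; the sketch below is only illustrative, and `black_box` is a stand-in for the user's external scorer returning one reward per sequence:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def black_box(sequences):
    # Stand-in for the external scoring module: one reward per generated sequence.
    return torch.randn(sequences.shape[0])


prompts = ["Hello, my dog is a little", "Hello, my dog is"]
enc = tokenizer(prompts, padding=True, return_tensors="pt")
generated = model.generate(**enc, do_sample=True, max_length=30)

# Differentiable forward pass over the sampled sequences (the mask is approximate
# here because pad_token == eos_token).
attention_mask = (generated != tokenizer.pad_token_id).long()
out = model(input_ids=generated, attention_mask=attention_mask)
log_probs = torch.log_softmax(out.logits[:, :-1], dim=-1)
token_log_probs = log_probs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)

rewards = black_box(generated)                            # shape: (batch,)
loss = -(rewards.unsqueeze(1) * token_log_probs).mean()   # REINFORCE-style objective
loss.backward()
optimizer.step()
```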
https://api.github.com/repos/huggingface/transformers/issues/12053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12053/comments | https://api.github.com/repos/huggingface/transformers/issues/12053/events | https://github.com/huggingface/transformers/pull/12053 | 913,415,686 | MDExOlB1bGxSZXF1ZXN0NjYzNDUxMTEy | 12,053 | [JAX] Bump jax lib | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Thanks for spotting it @stas00
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12053",
"html_url": "https://github.com/huggingface/transformers/pull/12053",
"diff_url": "https://github.com/huggingface/transformers/pull/12053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12053.patch",
"merged_at": 1623067458000
} |
https://api.github.com/repos/huggingface/transformers/issues/12052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12052/comments | https://api.github.com/repos/huggingface/transformers/issues/12052/events | https://github.com/huggingface/transformers/issues/12052 | 913,387,977 | MDU6SXNzdWU5MTMzODc5Nzc= | 12,052 | No max_length set on huawei-noah/TinyBERT_General_4L_312D/config.json | {
"login": "alexcombessie",
"id": 4739848,
"node_id": "MDQ6VXNlcjQ3Mzk4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4739848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcombessie",
"html_url": "https://github.com/alexcombessie",
"followers_url": "https://api.github.com/users/alexcombessie/followers",
"following_url": "https://api.github.com/users/alexcombessie/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcombessie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcombessie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcombessie/subscriptions",
"organizations_url": "https://api.github.com/users/alexcombessie/orgs",
"repos_url": "https://api.github.com/users/alexcombessie/repos",
"events_url": "https://api.github.com/users/alexcombessie/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcombessie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @patrickvonplaten @JetRunner,\r\n\r\nApologies for following up, I know it's a busy time.\r\n\r\nWould you have some time to look into this issue?\r\n\r\nThanks,\r\n\r\nAlex ",
"Hi Alex, I think the right thing to do is to look up `max_len` from the TinyBERT paper. Do you know what is that setting? ",
"> Hi Alex, I think the right thing to do is to look up `max_len` from the TinyBERT paper. Do you know what is that setting?\r\n\r\nYeah, you are right. The paper seems to indicate 128 for the general distillation.\r\n\r\n\r\n\r\nI will reach out to the authors because they mention another length of 64 for task-specific distillation. I just want to be sure which one is used by the model hosted on Huggingface.\r\n\r\nAs a side-note, it would be really useful (at least to me) to have some automated checks and/or feedback system on the model hub.\r\n",
"Hi @JetRunner,\r\n\r\nI got the following answer from the author (Xiaoqi Jiao)\r\n\r\n> The max_len of TinyBERT is 128, but if the max sequence length of your downstream task is less than max_len, you may set max_len to a small value like 64 to save the computing resources.\r\n\r\nShould I add `max_length: 128` on the model hub? Happy to take this small PR directly.\r\n\r\nCheers,\r\n\r\nAlex\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Darwin-20.3.0-x86_64-i386-64bit
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @JetRunner
## Information
Model I am using: huawei-noah/TinyBERT_General_4L_312D
The problem arises when using:
* [x] my own modified scripts: (give details below)
```{python}
import json
import pandas as pd
import gzip
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('huawei-noah/TinyBERT_General_4L_312D')
def parse(path):
    g = gzip.open(path, 'rb')
    for l in g:
        yield json.loads(l)

def getDF(path):
    i = 0
    df = {}
    for d in parse(path):
        df[i] = d
        i += 1
    return pd.DataFrame.from_dict(df, orient='index')

local_path_to_review_data = "/Users/alexandrecombessie/Downloads/Software_5.json.gz"  # See download link below
df = getDF(local_path_to_review_data)

df["review_text_full_embeddings"] = [
    json.dumps(x.tolist()) for x in model.encode(df["reviewText"].astype(str))
]
```
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
- Amazon review dataset sample (http://deepyeti.ucsd.edu/jianmo/amazon/categoryFilesSmall/Software_5.json.gz)
## To reproduce
Steps to reproduce the behavior:
See script above
## Expected behavior
A `max_length` should be set in the model `config.json` for the tokenizer to apply truncation (which is my expected behavior).
See https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D/blob/main/config.json
I could do it myself, but I am not able to understand what is the right length to set.
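In the meantime, a possible client-side workaround is to cap the length locally at tokenizer load time (a minimal sketch, assuming the 128 value discussed in the comments above; `model_max_length` here is only a local override, not a change to the hosted config):

```python
from transformers import AutoTokenizer

# Local override only: 128 is the value the TinyBERT author confirmed
# for general distillation; adjust if your task uses a shorter limit.
tokenizer = AutoTokenizer.from_pretrained(
    "huawei-noah/TinyBERT_General_4L_312D", model_max_length=128
)
encoded = tokenizer("a very long review text ...", truncation=True)
print(len(encoded["input_ids"]))  # now capped at 128
```

With sentence-transformers, the equivalent knob should be `model.max_seq_length = 128`, if I am not mistaken.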
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12051/comments | https://api.github.com/repos/huggingface/transformers/issues/12051/events | https://github.com/huggingface/transformers/pull/12051 | 913,336,532 | MDExOlB1bGxSZXF1ZXN0NjYzMzgzNDEx | 12,051 | Add early stopping args to TrainingArguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I reused the script:\r\n- https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py\r\n\r\nbut allowing early stopping: [`EarlyStoppingCallback`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_callback.py#L505)\r\n\r\nTherefore, I had to add those args to `TrainingArguments` to allow end users of my script pass those parameters to EarlyStoppingCallback.\r\n\r\nI thought that it might be useful for end users to have those args in TrainingArguments so that they can use early stopping in their trainer the same way I did for my script.",
"We don't have any `TrainingArguments` that are not used anywhere. Users are already complaining this class has too many, so if we had some they have to do something.\r\nIf they go with an update in the example scripts, then all the example scripts should be updated in the PR :-) ",
"I could also add early stopping to some of the example scripts... π
I may do it this weekend though...",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | MEMBER | null | # What does this PR do?
While working on the collaborative training project, I added early stopping args to `TrainingArguments`.
Feel free to close this PR if you consider it is not pertinent.
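For context, this is roughly what a user has to wire up manually today (a minimal sketch; `model`, `train_dataset` and `eval_dataset` are placeholders defined elsewhere, and the metric/patience values are illustrative):

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Early stopping currently requires adding the callback by hand,
# plus the TrainingArguments below that it depends on.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",      # early stopping needs periodic evaluation
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model,                      # assumed to be defined elsewhere
    args=args,
    train_dataset=train_dataset,      # assumed
    eval_dataset=eval_dataset,        # assumed
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```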
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12051",
"html_url": "https://github.com/huggingface/transformers/pull/12051",
"diff_url": "https://github.com/huggingface/transformers/pull/12051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12051.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12050/comments | https://api.github.com/repos/huggingface/transformers/issues/12050/events | https://github.com/huggingface/transformers/issues/12050 | 913,283,797 | MDU6SXNzdWU5MTMyODM3OTc= | 12,050 | [end2end RAG] AttributeError: module 'pickle' has no attribute 'PickleBuffer' | {
"login": "shunyuzh",
"id": 41095167,
"node_id": "MDQ6VXNlcjQxMDk1MTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/41095167?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shunyuzh",
"html_url": "https://github.com/shunyuzh",
"followers_url": "https://api.github.com/users/shunyuzh/followers",
"following_url": "https://api.github.com/users/shunyuzh/following{/other_user}",
"gists_url": "https://api.github.com/users/shunyuzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shunyuzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shunyuzh/subscriptions",
"organizations_url": "https://api.github.com/users/shunyuzh/orgs",
"repos_url": "https://api.github.com/users/shunyuzh/repos",
"events_url": "https://api.github.com/users/shunyuzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/shunyuzh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\n\nThanks a lot. I think this error is due to Python version. Check with 3.8 or above. It will work.",
"> Hi,\r\n> \r\n> Thanks a lot. I think this error is due to Python version. Check with 3.8 or above. It will work.\r\n\r\nThank you, it really worked when I run test_finetune.sh. \r\n\r\nEmm, it's silly that I have tried to change Python version from 3.6 to 3.7, but forgot the 3.8. ",
"Perfect:)\n\nOn Mon, Jun 7, 2021, 21:41 Dopaminezsy ***@***.***> wrote:\n\n> Hi,\n>\n> Thanks a lot. I think this error is due to Python version. Check with 3.8\n> or above. It will work.\n>\n> Thank you, it really worked when I run test_finetune.sh.\n>\n> Emm, it's silly that I have tried to change Python version from 3.6 to\n> 3.7, but forgot the 3.8.\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-855777762>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGRPFOPULVKPX2AAR7LTRSH5PANCNFSM46HEVE4A>\n> .\n>\n",
"Hi, friend @shamanez :\r\nSorry to disturb you again. I face the following bug when run finetune_rag_ray_end2end.sh. \r\nCould you give some sugguestions?\r\n\r\n```\r\n2021-06-08 09:40:19,202\tINFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.96.6:6379\r\nINFO:__main__:Getting named actors for NODE_RANK 0, LOCAL_RANK 1\r\nTraceback (most recent call last):\r\n File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 794, in <module>\r\n main(args)\r\n File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 726, in main\r\n named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\r\n File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 726, in <listcomp>\r\n named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\r\n File \"/home/t-shzhang/.local/lib/python3.8/site-packages/ray/_private/client_mode_hook.py\", line 62, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/t-shzhang/.local/lib/python3.8/site-packages/ray/worker.py\", line 1659, in get_actor\r\n handle = worker.core_worker.get_named_actor_handle(name)\r\n File \"python/ray/_raylet.pyx\", line 1521, in ray._raylet.CoreWorker.get_named_actor_handle\r\n File \"python/ray/_raylet.pyx\", line 159, in ray._raylet.check_status\r\nValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.\r\n```",
"Seems like a problem in your cluster. What is your system. Seems like it\nis a multi node system.\n\nOn Tue, Jun 8, 2021, 21:52 Dopaminezsy ***@***.***> wrote:\n\n> Hi, friend @shamanez <https://github.com/shamanez> :\n> Sorry to disturb you again. I face the following bug when run\n> finetune_rag_ray_end2end.sh.\n> Could you give some sugguestions?\n>\n> 2021-06-08 09:40:19,202\tINFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.96.6:6379\n> INFO:__main__:Getting named actors for NODE_RANK 0, LOCAL_RANK 1\n> Traceback (most recent call last):\n> File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 794, in <module>\n> main(args)\n> File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 726, in main\n> named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\n> File \"/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 726, in <listcomp>\n> named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\n> File \"/home/t-shzhang/.local/lib/python3.8/site-packages/ray/_private/client_mode_hook.py\", line 62, in wrapper\n> return func(*args, **kwargs)\n> File \"/home/t-shzhang/.local/lib/python3.8/site-packages/ray/worker.py\", line 1659, in get_actor\n> handle = worker.core_worker.get_named_actor_handle(name)\n> File \"python/ray/_raylet.pyx\", line 1521, in ray._raylet.CoreWorker.get_named_actor_handle\n> File \"python/ray/_raylet.pyx\", line 159, in ray._raylet.check_status\n> ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-856630780>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTIOEOYQODXAUE25S3TRXR6FANCNFSM46HEVE4A>\n> .\n>\n",
"Actually I have discussed this issue previously. \n\n\nThis happens when you try to run the code in distributed mode. @calderma also mentioned the same thing.\n\nhttps://github.com/huggingface/transformers/pull/11655#issuecomment-845295355\n\n\nI think this is not an issue with Ray or anything. It is something with how you run a distributed code in Pytorch Lightining. Sadly I do not have a distributed system to test :(. \n\nBut in the above thread I pointed out some workarounds. Also I have mentioned the reason to get this issue.\n\nJust to summarize.. we initialize RAY actors only in master process (when initializing the master ddp process). Other DDP processes simply access the RAY worker by its name. \n\n\nBut when having a distributed system, I think initialization should happen in each node. In order to activate distributed training, you need to add **node** variable to lightning trainer. Then you should initialize the training as given in their tutoria. Please let me know how it goes.\n\n\nPlease follow the following commands to run the code in a Cluster.\n\nhttps://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster\n",
"> Actually I have discussed this issue previously.\r\n> \r\n> This happens when you try to run the code in distributed mode. @calderma also mentioned the same thing.\r\n> \r\n> [#11655 (comment)](https://github.com/huggingface/transformers/pull/11655#issuecomment-845295355)\r\n> \r\n> I think this is not an issue with Ray or anything. It is something with how you run a distributed code in Pytorch Lightining. Sadly I do not have a distributed system to test :(.\r\n> \r\n> But in the above thread I pointed out some workarounds. Also I have mentioned the reason to get this issue.\r\n> \r\n> Just to summarize.. we initialize RAY actors only in master process (when initializing the master ddp process). Other DDP processes simply access the RAY worker by its name.\r\n> \r\n> But when having a distributed system, I think initialization should happen in each node. In order to activate distributed training, you need to add **node** variable to lightning trainer. Then you should initialize the training as given in their tutoria. Please let me know how it goes.\r\n> \r\n> Please follow the following commands to run the code in a Cluster.\r\n> \r\n> https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster\r\n\r\nThank you a lot. I think it will help. \r\n\r\nI'm going to try.",
"Hi again, \r\nAs for the DDP problem, I followed your instructions to add **node** variable before define trainer in lightning_base.py. But it didn't help and the BUG also as before. Do you know there are other instructions on PL?\r\n\r\n```\r\n train_params[\"accelerator\"] = \"ddp\" or \"ddp2\" or \"dp\"\r\n train_params[\"num_nodes\"] = 1\r\n trainer = pl.Trainer.from_argparse_args(\r\n args,\r\n weights_summary=None,\r\n callbacks=[logging_callback] + extra_callbacks + [InitCallback()] + [checkpoint_callback],\r\n logger=logger,\r\n plugins=[DDPPlugin(find_unused_parameters=True)], # this is needed in new pytorch-lightning new version\r\n val_check_interval=1,\r\n num_sanity_val_steps=2,\r\n **train_params,\r\n )\r\n```\r\nBy the way, I think it may be related to the RAY, for that RAY works before pl.Trainer in finetune_ray.py? Feel free to point out my naive error.",
"Yeah Ray workers get initialized before starting the training loop. Checkout if conditions mentioned in RAY worker initialization part. It basically says initialize only in the master process.\n\n\nBut when you try distributed training, it says it can't find already initialized worker. That means processes hasn't shared between nodes.\n\nThis is not a problem when you run the code in normal mode. I did not change anything to the RAY initialization. You will get the same problem in Original RAG too.\n\nCheck https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster.\n\n\nThis has several ways to start your cluster. I think you need to play around with those environment variables.",
"@Dopaminezsy \r\nThere is one more thing you can do. I updated the original [RAG](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py) with the latest PL. But I use custom plugging there (I did not use it for this project since it is still in an experimental plugging). Can you try to run the original RAG in distributed mode and let me know? \r\n\r\nAlso, play around with these [lines](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535). \r\n\r\n\r\n\r\nOne more thing, try to print something after this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L709). I want to know whether your code fails before the DDP process. If it doesn't go inside the if condition, when starting, it is a problem with RAY, otherwise it is something with PL. Please let me know these things asap.",
"I forgot to tell you something related: I have changed [https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535](url) to the below (changed **and** to **or**) . \r\n\r\n`if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) or ( \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0 ):<!--EndFragment-->`\r\n\r\nIf use the original code with **and**, I got the bug as follows: \r\n```\r\nINFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.64.3:6379\r\nTraceback (most recent call last):\r\n File \"finetune_rag.py\", line 790, in <module>\r\n main(args)\r\n File \"finetune_rag.py\", line 718, in main\r\n os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n File \"/opt/miniconda/envs/rag/lib/python3.8/os.py\", line 675, in __getitem__\r\n raise KeyError(key) from None\r\nKeyError: 'LOCAL_RANK'\r\n```\r\nYou may benefit from the above information. Now I am going to think your new suggestions.\r\n\r\n",
"Actually, I think I can solve your problem. Please let me know once you have done the test (find whether the code goes inside the if condition in the initial process).",
"> I forgot to tell you something related: I have changed [https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535](url) to the below (changed **and** to **or**) .\r\n> \r\n> `if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) or ( \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0 ):<!--EndFragment-->`\r\n> \r\n> If use the original code with **and**, I got the bug as follows:\r\n> \r\n> ```\r\n> INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.64.3:6379\r\n> Traceback (most recent call last):\r\n> File \"finetune_rag.py\", line 790, in <module>\r\n> main(args)\r\n> File \"finetune_rag.py\", line 718, in main\r\n> os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n> File \"/opt/miniconda/envs/rag/lib/python3.8/os.py\", line 675, in __getitem__\r\n> raise KeyError(key) from None\r\n> KeyError: 'LOCAL_RANK'\r\n> ```\r\n> \r\n> You may benefit from the above information. Now I am going to think your new suggestions.\r\n\r\nYes, this should give you an error. You have to use **and** operator. Because in the beginning there is no \"Local_Rank\" variable and it only checks the **\"\"LOCAL_RANK\" not in os. environ \"** condition, prior to going into the next term with Nodes.\r\n\r\nBut if you remove the **and** operator between two conditions, it will try to check this \"os.environ[\"LOCAL_RANK\"] == 0\". I know this is bit tricky :) ",
"Newly: \r\nThese are about when I run latest version of original RAG on the clusters.\r\n\r\nWhen using **and** operator, it faced _KeyError: 'LOCAL_RANK'_ as befor. \r\nWhen changing **and** operator to **or**, it faced _ValueError: Failed to look up actor with name 'retrieval_worker_0'._ as before.\r\nWhen changing **and** to **or** and add **train_params[\"num_nodes\"] = 1**, it also faced _ValueError: Failed to look up actor with name 'retrieval_worker_0'._ \r\n\r\nOverall, it's the same problem as them when running END2END-RAG.",
"Can you find out during the initialization, the code enters the if the condition or not? \r\n\r\nTry to print something after this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L709). I want to know whether your code fails before the DDP process. If it doesn't go inside the if condition, when starting, it is a problem with RAY, otherwise it is something with PL. Please let me know these things asap.",
"When I run as follows :\r\n```\r\n print(\"debug: LOCAL RANK in if = {}\".format(\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0))\r\n print(\"debug: NODE RANK in if = {}\".format(\"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0))\r\n if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) and (\r\n \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0\r\n ):\r\n print('Debug: 1 Go into successfully')\r\n remote_cls = ray.remote(RayRetriever)\r\n named_actors = [\r\n remote_cls.options(name=\"retrieval_worker_{}\".format(i)).remote()\r\n for i in range(args.num_retrieval_workers)\r\n ]\r\n print('Debug: 2 Initially successfully')\r\n else:\r\n print(\"Debug: 3 in else\")\r\n logger.info(\r\n \"Getting named actors for NODE_RANK {}, LOCAL_RANK {}\".format(\r\n os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n )\r\n )\r\n print('Debug: 4444444')\r\n named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\r\n print('Debug: 5555555')\r\n```\r\nPrint like: ( I copy all as belows after accomplishing conda env)\r\n```\r\n2021-06-09 08:34:41,460\tINFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.36.112.5:6379\r\nshzhang Debug NODE_RANK 0\r\nshzhang Debug LOCAL_RANK Not exist.\r\ndebug: LOCAL RANK in if = True\r\ndebug: NODE RANK in if = False\r\nDebug: 3 in else\r\nTraceback (most recent call last):\r\n File \"finetune_rag.py\", line 634, in <module>\r\n main(args)\r\n File \"finetune_rag.py\", line 561, in main\r\n os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n File \"/opt/miniconda/envs/rag/lib/python3.8/os.py\", line 675, in __getitem__\r\n raise KeyError(key) from None\r\nKeyError: 'LOCAL_RANK'\r\nStarting the daemon thread to refresh tokens in background for process with pid = 956\r\n\r\n```\r\n",
"Ok I think problem is with the if conditions. When you run the code it\r\nnever goes inside if condition. Which means your workers doesn't get\r\ninitialized. The main issue is you already have a Node variable in\r\nos.envron. do you get me? If not I can explain. \r\nOn Wed, Jun 9, 2021, 20:44 Dopaminezsy ***@***.***> wrote:\r\n\r\n> When I run as follows :\r\n>\r\n> print(\"debug: LOCAL RANK in if = {}\".format(\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0))\r\n>\r\n> print(\"debug: NODE RANK in if = {}\".format(\"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0))\r\n>\r\n> if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) and (\r\n>\r\n> \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0\r\n>\r\n> ):\r\n>\r\n> print('Debug: 1 Go into successfully')\r\n>\r\n> remote_cls = ray.remote(RayRetriever)\r\n>\r\n> named_actors = [\r\n>\r\n> remote_cls.options(name=\"retrieval_worker_{}\".format(i)).remote()\r\n>\r\n> for i in range(args.num_retrieval_workers)\r\n>\r\n> ]\r\n>\r\n> print('Debug: 2 Initially successfully')\r\n>\r\n> else:\r\n>\r\n> print(\"Debug: 3 in else\")\r\n>\r\n> logger.info(\r\n>\r\n> \"Getting named actors for NODE_RANK {}, LOCAL_RANK {}\".format(\r\n>\r\n> os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n>\r\n> )\r\n>\r\n> )\r\n>\r\n> print('Debug: 4444444')\r\n>\r\n> named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\r\n>\r\n> print('Debug: 5555555')\r\n>\r\n>\r\n> Print like: ( I copy all as belows after accomplishing conda env)\r\n>\r\n> 2021-06-09 08:34:35,151\tINFO scripts.py:560 -- Local node IP: 10.36.112.5\r\n>\r\n> 2021-06-09 08:34:36,497\tINFO services.py:1272 -- View the Ray dashboard at οΏ½[1mοΏ½[32mhttp://127.0.0.1:8265οΏ½[39mοΏ½[22m\r\n>\r\n> 2021-06-09 08:34:37,529\tSUCC scripts.py:592 -- --------------------\r\n>\r\n> 2021-06-09 08:34:37,529\tSUCC scripts.py:593 -- Ray runtime started.\r\n>\r\n> 2021-06-09 08:34:37,529\tSUCC scripts.py:594 -- --------------------\r\n>\r\n> 2021-06-09 08:34:37,529\tINFO scripts.py:596 -- Next steps\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:597 -- To connect to this Ray runtime from another node, run\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:601 -- ray start --address='10.36.112.5:6379' --redis-password='5241590000000000'\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:606 -- Alternatively, use the following Python code:\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:609 -- import ray\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:610 -- ray.init(address='auto', _redis_password='5241590000000000')\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:618 -- If connection fails, check your firewall settings and network configuration.\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:623 -- To terminate the Ray runtime, run\r\n>\r\n> 2021-06-09 08:34:37,530\tINFO scripts.py:624 -- ray stop\r\n>\r\n> /home/t-shzhang/.local/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. 
Please update your install command.\r\n>\r\n> warnings.warn(\r\n>\r\n> 2021-06-09 08:34:41,460\tINFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.36.112.5:6379\r\n>\r\n> shzhang Debug NODE_RANK 0\r\n>\r\n> shzhang Debug LOCAL_RANK Not exist.\r\n>\r\n> debug: LOCAL RANK in if = True\r\n>\r\n> debug: NODE RANK in if = False\r\n>\r\n> Debug: 3 in else\r\n>\r\n> Traceback (most recent call last):\r\n>\r\n> File \"finetune_rag.py\", line 634, in <module>\r\n>\r\n> main(args)\r\n>\r\n> File \"finetune_rag.py\", line 561, in main\r\n>\r\n> os.environ[\"NODE_RANK\"], os.environ[\"LOCAL_RANK\"]\r\n>\r\n> File \"/opt/miniconda/envs/rag/lib/python3.8/os.py\", line 675, in __getitem__\r\n>\r\n> raise KeyError(key) from None\r\n>\r\n> KeyError: 'LOCAL_RANK'\r\n>\r\n> Starting the daemon thread to refresh tokens in background for process with pid = 956\r\n>\r\n>\r\n>\r\n>\r\n> β\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-857508306>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGSIFGTZSNYBG6QCW23TR4SWZANCNFSM46HEVE4A>\r\n> .\r\n>\r\n",
"I can't see the email for it's ***@***.***, so I will email you at your gmail on Linkedin.",
"sure.\n\nOn Wed, Jun 9, 2021 at 9:35 PM Dopaminezsy ***@***.***> wrote:\n\n> I can't see the email for it's *@*.***, so I will email you at your gmail\n> on Linkedin.\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-857544425>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWK5AWQTW24OA62XD3TR4YUVANCNFSM46HEVE4A>\n> .\n>\n\n\n-- \n[image: Augmented Human Lab] <http://www.ahlab.org/> [image: uni]\n<https://www.auckland.ac.nz/en/abi.html>\n\nGayal Shamane\nPh.D. Candidate\nAugmented Human Lab\nAuckland Bioengineering Institute | The University of Auckland\n",
"Hey did you manage to solve ?",
"There seem to be something surpring!\r\n\r\nMy solvement: \r\n1. remove the condition after **and** operator in the below, remaining only the front condition.\r\n `if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) and ( \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0 )`\r\n2. restrict the version of Ray to ray==1.3.0 not ray>=1.3.0 (it will install ray==1.4.0 on the cluster).\r\n\r\nThen it works well as belows:\r\n```\r\nValidating: 100%|ββββββββββ| 2962/2964 [22:52<00:00, 2.22it/s]\u001b[A\r\nEpoch 0: 0%| | 2964/64358290 [23:10<8388:32:39, 2.13it/s, loss=nan, v_num=6]\r\n\r\nValidating: 100%|ββββββββββ| 2963/2964 [22:52<00:00, 2.22it/s]\u001b[A\r\nEpoch 0: 0%| | 2965/64358290 [23:11<8388:26:04, 2.13it/s, loss=nan, v_num=6]\r\n```\r\n\r\n I'm going to check it for times, avoiding mistake.",
"Nice. Actually the problem was your cluster has a default node.\nNow you are simply checking whether the DDP process has started or not.\n\nBTW what happens when you are using latest RAY ? \n\nOn Wed, Jun 9, 2021, 23:45 Dopaminezsy ***@***.***> wrote:\n\n> There seem to be something surpring!\n>\n> My solvement:\n>\n> 1. remove the condition after *and* operator in the below, remaining\n> only the front condition.\n> if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0)\n> and ( \"NODE_RANK\" not in os.environ or os.environ[\"NODE_RANK\"] == 0 )\n> 2. restrict the version of Ray to ray==1.3.0 not ray>=1.3.0 (it will\n> install ray==1.4.0 on the cluster).\n>\n> Then it works well as belows:\n>\n> Validating: 100%|ββββββββββ| 2962/2964 [22:52<00:00, 2.22it/s]οΏ½[A\n>\n> Epoch 0: 0%| | 2964/64358290 [23:10<8388:32:39, 2.13it/s, loss=nan, v_num=6]\n>\n>\n>\n> Validating: 100%|ββββββββββ| 2963/2964 [22:52<00:00, 2.22it/s]οΏ½[A\n>\n> Epoch 0: 0%| | 2965/64358290 [23:11<8388:26:04, 2.13it/s, loss=nan, v_num=6]\n>\n>\n> I'm going to check it for times, avoiding mistake.\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-857625392>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGRLAAQ5BSVF4DUFQY3TR5H4HANCNFSM46HEVE4A>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"[updated]\r\n\r\n@shamanez \r\n\r\nSorry to share my reproduced results later.\r\n\r\nI got my result EM=40.31 in end2end way, just following the same setting of [rag-end2end-retriever](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever).\r\n",
"That's nice to hear. Thanks for letting me know.\n\nOn Thu, Jul 22, 2021, 22:28 Dopaminezsy ***@***.***> wrote:\n\n> [updated]\n>\n> @shamanez <https://github.com/shamanez>\n>\n> Sorry to share my reproduced results later.\n>\n> I got my result EM=40.31 in end2end way, just following the same setting\n> of rag-end2end-retriever\n> <https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever>\n> .\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12050#issuecomment-884809427>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGSXHGF4MBT3VBWVICDTY7XFPANCNFSM46HEVE4A>\n> .\n>\n"
] | 1,623 | 1,626 | 1,626 | NONE | null | Hi folks,
@shamanez, thanks for your awesome work on end2end RAG. But when I try to reproduce the results of https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever, I run into the following problem.
```
Traceback (most recent call last):
File "finetune_rag.py", line 789, in <module>
main(args)
File "finetune_rag.py", line 726, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "finetune_rag.py", line 123, in __init__
hparams.model_name_or_path, hparams.actor_handles, config=config
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 165, in from_pretrained
index=index,
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in __init__
for worker in self.retrieval_workers
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in <listcomp>
for worker in self.retrieval_workers
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 112, in remote
return self._remote(args, kwargs)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 153, in _remote
return invocation(args, kwargs)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 147, in invocation
num_returns=num_returns)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 865, in _actor_method_call
list_args, name, num_returns, self._ray_actor_method_cpus)
File "python/ray/_raylet.pyx", line 1359, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 1364, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 304, in ray._raylet.prepare_args
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer'
```
We think it's mainly related to the versions and interdependencies of ray, pyarrow, and datasets.
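For what it's worth, `pickle.PickleBuffer` was only added in Python 3.8 (it backs pickle protocol 5, which the ray serializer in the traceback above uses), so a quick sanity check of the environment could be:

```python
import pickle
import sys

# pickle.PickleBuffer (pickle protocol 5) exists only on Python >= 3.8;
# on 3.6/3.7 the attribute lookup fails exactly like in the traceback above.
print(sys.version_info)
print(hasattr(pickle, "PickleBuffer"))
```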
My main pip list:
```
faiss-cpu == 1.7.0
datasets == 1.6.2
psutil == 5.7.0
torch == 1.6.0
pytorch-lightning == 1.3.1
nvidia-ml-py3 == 7.352.0
ray == 1.3.0
pyarrow == 3.0.0
```
I think anyone can investigate it from this reproducible example:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
cd ./examples/research_projects/rag-end2end-retriever
pip install -r requirements.txt
bash ./test_run/test_finetune.sh
```
And someone was likely to face the same questions https://discuss.ray.io/t/cant-pickle-pyarrow-dataset-expression/1685/8
So @shamanez, could you please show the entire pip list of the env to run END2END RAG? Or point out how to fix it?
Let me know if more information is needed and thanks for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12050/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12049/comments | https://api.github.com/repos/huggingface/transformers/issues/12049/events | https://github.com/huggingface/transformers/pull/12049 | 913,144,513 | MDExOlB1bGxSZXF1ZXN0NjYzMjE1Mzc0 | 12,049 | fix past_key_values docs | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,623 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12032
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12049/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12049",
"html_url": "https://github.com/huggingface/transformers/pull/12049",
"diff_url": "https://github.com/huggingface/transformers/pull/12049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12049.patch",
"merged_at": 1623059643000
} |
https://api.github.com/repos/huggingface/transformers/issues/12048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12048/comments | https://api.github.com/repos/huggingface/transformers/issues/12048/events | https://github.com/huggingface/transformers/issues/12048 | 913,143,457 | MDU6SXNzdWU5MTMxNDM0NTc= | 12,048 | OpenAI GPT language modeling shape mismatch: 512 position embeddings, 1024 input emebddings | {
"login": "avi-jit",
"id": 11348738,
"node_id": "MDQ6VXNlcjExMzQ4NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/11348738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avi-jit",
"html_url": "https://github.com/avi-jit",
"followers_url": "https://api.github.com/users/avi-jit/followers",
"following_url": "https://api.github.com/users/avi-jit/following{/other_user}",
"gists_url": "https://api.github.com/users/avi-jit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avi-jit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avi-jit/subscriptions",
"organizations_url": "https://api.github.com/users/avi-jit/orgs",
"repos_url": "https://api.github.com/users/avi-jit/repos",
"events_url": "https://api.github.com/users/avi-jit/events{/privacy}",
"received_events_url": "https://api.github.com/users/avi-jit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Note that `openai-gpt` has a max_length of 512. See under `n_positions` in the config here: https://huggingface.co/openai-gpt/blob/main/config.json. \r\n\r\nThe `run_clm.py` script however sets `max_length` to 1024 by default. To fix your bug you should run:\r\n\r\n```bash\r\npython transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --per_device_train_batch_size 2 --gradient_accumulation_steps 4 --block_size 512\r\n```",
"Actually, it's weird that you get this error since:\r\n\r\n```python\r\nfrom transformers import OpenAIGPTTokenizer\r\ntokenizer = OpenAIGPTTokenizer.from_pretrained(\"openai-gpt\")\r\ntokenizer.model_max_length # prints 512\r\n```\r\n\r\n=> so the block size should have automatically been correctly set ",
"There is a small bug with a line not properly indented, fixing."
] | 1,623 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: below log is for cpu; also fails with gpu but cpu gives better error
- Using distributed or parallel set-up in script?: NA for cpu
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- openai-gpt: @sgugger
## Information
Model I am using (Bert, XLNet ...): openai-gpt
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) causal language modelling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
1. new environment, editable installation from source
2. CUDA_VISIBLE_DEVICES=, nice python transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --per_device_train_batch_size 2 --gradient_accumulation_steps 4
```Shell
06/07/2021 05:58:13 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
06/07/2021 05:58:13 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=500,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=4,
greater_is_better=None,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_on_each_node=True,
logging_dir=runs/Jun07_05-58-13_fermi-debug,
logging_first_step=False,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
output_dir=/tmp/test-clm,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=2,
prediction_loss_only=False,
push_to_hub=False,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=/tmp/test-clm,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
06/07/2021 05:58:14 - WARNING - datasets.builder - Reusing dataset wikitext (/home/avit/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)
[INFO|configuration_utils.py:517] 2021-06-07 05:58:14,482 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /home/avit/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
[INFO|configuration_utils.py:553] 2021-06-07 05:58:14,483 >> Model config OpenAIGPTConfig {
"afn": "gelu",
"architectures": [
"OpenAIGPTLMHeadModel"
],
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "openai-gpt",
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 0,
"predict_special_tokens": true,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.7.0.dev0",
"vocab_size": 40478
}
[INFO|configuration_utils.py:517] 2021-06-07 05:58:14,766 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /home/avit/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
[INFO|configuration_utils.py:553] 2021-06-07 05:58:14,767 >> Model config OpenAIGPTConfig {
"afn": "gelu",
"architectures": [
"OpenAIGPTLMHeadModel"
],
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "openai-gpt",
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 0,
"predict_special_tokens": true,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.7.0.dev0",
"vocab_size": 40478
}
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/vocab.json from cache at /home/avit/.cache/huggingface/transformers/918c57540c636a2a662770d208fcf20aa8c3faea78201fc612e5c84f052f1119.ac55819e76b0f8b0c32cbb407436947d090d98f8952f38376ee249ed382927ab
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/merges.txt from cache at /home/avit/.cache/huggingface/transformers/a682e219a788dde0e4f77bc5a470d85a4d7e493420506ce7e3266f7be122cf9e.2150b9689fda7ca7c6224ff32672c004259f974e96934e8eb69d8dd546d682db
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer.json from cache at /home/avit/.cache/huggingface/transformers/325373fcbb0daa99905371727842a87ae9ca0f02f71db071720bb4d5a59076cf.b1810f3c6ed9fc0632664008484a9b569103559c04ac90321723cd808a3a96f9
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer_config.json from cache at None
[INFO|modeling_utils.py:1155] 2021-06-07 05:58:16,805 >> loading weights file https://huggingface.co/openai-gpt/resolve/main/pytorch_model.bin from cache at /home/avit/.cache/huggingface/transformers/3e867ce638da986403594a5acbb39846ecb9c3b360a3b526348dd54b06938e55.93527980a112896731f93175b7c1cbc6b0fd771fad85fcc777ff5d49d249782e
[INFO|modeling_utils.py:1339] 2021-06-07 05:58:18,886 >> All model checkpoint weights were used when initializing OpenAIGPTLMHeadModel.
[WARNING|modeling_utils.py:1341] 2021-06-07 05:58:18,886 >> Some weights of OpenAIGPTLMHeadModel were not initialized from the model checkpoint at openai-gpt and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/5 [00:00<?, ?ba/s]
40%|ββββ | 2/5 [00:00<00:00, 18.62ba/s][WARNING|tokenization_utils_base.py:3170] 2021-06-07 05:58:19,096 >> Token indices sequence length is longer than the specified maximum sequence length for this model (535 > 512). Running this sequence through the model will result in indexing errors
[WARNING|run_clm.py:347] 2021-06-07 05:58:19,097 >> ^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model.
100%|ββββββββββ| 5/5 [00:00<00:00, 24.33ba/s]
0%| | 0/37 [00:00<?, ?ba/s]
8%|β | 3/37 [00:00<00:01, 22.90ba/s]
16%|ββ | 6/37 [00:00<00:01, 23.70ba/s]
22%|βββ | 8/37 [00:00<00:01, 20.28ba/s]
30%|βββ | 11/37 [00:00<00:01, 21.11ba/s]
38%|ββββ | 14/37 [00:00<00:01, 21.90ba/s]
46%|βββββ | 17/37 [00:00<00:00, 22.32ba/s]
54%|ββββββ | 20/37 [00:00<00:00, 23.04ba/s]
62%|βββββββ | 23/37 [00:01<00:00, 23.13ba/s]
70%|βββββββ | 26/37 [00:01<00:00, 21.79ba/s]
78%|ββββββββ | 29/37 [00:01<00:00, 22.03ba/s]
86%|βββββββββ | 32/37 [00:01<00:00, 22.01ba/s]
95%|ββββββββββ| 35/37 [00:01<00:00, 22.39ba/s]
100%|ββββββββββ| 37/37 [00:01<00:00, 22.54ba/s]
0%| | 0/4 [00:00<?, ?ba/s]
75%|ββββββββ | 3/4 [00:00<00:00, 22.82ba/s]
100%|ββββββββββ| 4/4 [00:00<00:00, 24.22ba/s]
0%| | 0/5 [00:00<?, ?ba/s]
20%|ββ | 1/5 [00:00<00:01, 2.53ba/s]
40%|ββββ | 2/5 [00:00<00:01, 2.66ba/s]
60%|ββββββ | 3/5 [00:01<00:00, 2.74ba/s]
80%|ββββββββ | 4/5 [00:01<00:00, 2.91ba/s]
100%|ββββββββββ| 5/5 [00:01<00:00, 3.54ba/s]
0%| | 0/37 [00:00<?, ?ba/s]
3%|β | 1/37 [00:00<00:10, 3.30ba/s]
5%|β | 2/37 [00:00<00:11, 3.11ba/s]
8%|β | 3/37 [00:01<00:11, 3.05ba/s]
11%|β | 4/37 [00:01<00:10, 3.04ba/s]
14%|ββ | 5/37 [00:01<00:09, 3.22ba/s]
16%|ββ | 6/37 [00:01<00:09, 3.28ba/s]
19%|ββ | 7/37 [00:02<00:09, 3.02ba/s]
22%|βββ | 8/37 [00:02<00:09, 3.06ba/s]
24%|βββ | 9/37 [00:02<00:09, 3.03ba/s]
27%|βββ | 10/37 [00:03<00:08, 3.05ba/s]
30%|βββ | 11/37 [00:03<00:08, 3.01ba/s]
32%|ββββ | 12/37 [00:03<00:08, 2.97ba/s]
35%|ββββ | 13/37 [00:04<00:08, 2.91ba/s]
38%|ββββ | 14/37 [00:04<00:07, 3.04ba/s]
41%|ββββ | 15/37 [00:04<00:07, 3.05ba/s]
43%|βββββ | 16/37 [00:05<00:07, 2.97ba/s]
46%|βββββ | 17/37 [00:05<00:06, 2.95ba/s]
49%|βββββ | 18/37 [00:05<00:06, 3.00ba/s]
51%|ββββββ | 19/37 [00:06<00:05, 3.01ba/s]
54%|ββββββ | 20/37 [00:06<00:05, 3.09ba/s]
57%|ββββββ | 21/37 [00:06<00:05, 2.98ba/s]
59%|ββββββ | 22/37 [00:07<00:05, 2.89ba/s]
62%|βββββββ | 23/37 [00:07<00:04, 2.97ba/s]
65%|βββββββ | 24/37 [00:07<00:04, 3.11ba/s]
68%|βββββββ | 25/37 [00:08<00:03, 3.23ba/s]
70%|βββββββ | 26/37 [00:08<00:03, 3.21ba/s]
73%|ββββββββ | 27/37 [00:08<00:03, 3.04ba/s]
76%|ββββββββ | 28/37 [00:09<00:03, 2.91ba/s]
78%|ββββββββ | 29/37 [00:09<00:02, 3.10ba/s]
81%|ββββββββ | 30/37 [00:09<00:02, 3.07ba/s]
84%|βββββββββ | 31/37 [00:10<00:02, 2.93ba/s]
86%|βββββββββ | 32/37 [00:10<00:01, 2.96ba/s]
89%|βββββββββ | 33/37 [00:10<00:01, 2.93ba/s]
92%|ββββββββββ| 34/37 [00:11<00:01, 2.90ba/s]
95%|ββββββββββ| 35/37 [00:11<00:00, 2.98ba/s]
97%|ββββββββββ| 36/37 [00:11<00:00, 2.92ba/s]
100%|ββββββββββ| 37/37 [00:12<00:00, 3.44ba/s]
100%|ββββββββββ| 37/37 [00:12<00:00, 3.05ba/s]
0%| | 0/4 [00:00<?, ?ba/s]
25%|βββ | 1/4 [00:00<00:00, 3.37ba/s]
50%|βββββ | 2/4 [00:00<00:00, 3.17ba/s]
75%|ββββββββ | 3/4 [00:01<00:00, 3.06ba/s]
100%|ββββββββββ| 4/4 [00:01<00:00, 3.47ba/s]
100%|ββββββββββ| 4/4 [00:01<00:00, 3.31ba/s]
[INFO|trainer.py:1147] 2021-06-07 05:58:35,755 >> ***** Running training *****
[INFO|trainer.py:1148] 2021-06-07 05:58:35,755 >> Num examples = 2282
[INFO|trainer.py:1149] 2021-06-07 05:58:35,755 >> Num Epochs = 3
[INFO|trainer.py:1150] 2021-06-07 05:58:35,755 >> Instantaneous batch size per device = 2
[INFO|trainer.py:1151] 2021-06-07 05:58:35,755 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:1152] 2021-06-07 05:58:35,755 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1153] 2021-06-07 05:58:35,756 >> Total optimization steps = 855
0%| | 0/855 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/pytorch/language-modeling/run_clm.py", line 488, in <module>
main()
File "transformers/examples/pytorch/language-modeling/run_clm.py", line 438, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1263, in train
tr_loss += self.training_step(model, inputs)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1741, in training_step
loss = self.compute_loss(model, inputs)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1773, in compute_loss
outputs = model(**inputs)
File "/home/avit/miniconda3/envs/try2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/avit/trial2/transformers/src/transformers/models/openai/modeling_openai.py", line 581, in forward
transformer_outputs = self.transformer(
File "/home/avit/miniconda3/envs/try2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/avit/trial2/transformers/src/transformers/models/openai/modeling_openai.py", line 501, in forward
hidden_states = inputs_embeds + position_embeds + token_type_embeds
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1
0%| | 0/855 [00:00<?, ?it/s]
```
## Expected behaviour
Should not have a mismatch in tensor shapes. Apparently, the maximum sequence lengths do not match: the position embeddings cover 512 positions, but the inputs are 1024 tokens long. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12048/timeline | completed | null | null |
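A hedged sketch prompted by the record above (not part of the original issue): `openai-gpt` uses a 512-entry position-embedding table, so 1024-token grouped examples overflow it at `inputs_embeds + position_embeds`. The snippet below only inspects the relevant limits; the `--block_size 512` workaround it mentions is an assumption, not the resolution from the thread.

```python
from transformers import AutoConfig, AutoTokenizer

# Sketch only: inspect the limits that have to agree for openai-gpt.
config = AutoConfig.from_pretrained("openai-gpt")
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")

print(config.n_positions)          # size of the learned position-embedding table (512)
print(tokenizer.model_max_length)  # the tokenizer's declared sequence limit

# The traceback shows 1024-token inputs being added to 512 position embeddings,
# so one plausible (hypothetical) workaround is running run_clm.py with
# `--block_size 512` to keep grouped examples within config.n_positions.
```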
https://api.github.com/repos/huggingface/transformers/issues/12047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12047/comments | https://api.github.com/repos/huggingface/transformers/issues/12047/events | https://github.com/huggingface/transformers/issues/12047 | 913,048,657 | MDU6SXNzdWU5MTMwNDg2NTc= | 12,047 | Question: Masked Loss for LukeForEntitySpanClassification | {
"login": "zhenbangt",
"id": 60147744,
"node_id": "MDQ6VXNlcjYwMTQ3NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/60147744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhenbangt",
"html_url": "https://github.com/zhenbangt",
"followers_url": "https://api.github.com/users/zhenbangt/followers",
"following_url": "https://api.github.com/users/zhenbangt/following{/other_user}",
"gists_url": "https://api.github.com/users/zhenbangt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhenbangt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhenbangt/subscriptions",
"organizations_url": "https://api.github.com/users/zhenbangt/orgs",
"repos_url": "https://api.github.com/users/zhenbangt/repos",
"events_url": "https://api.github.com/users/zhenbangt/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhenbangt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Normally, in PyTorch, you have to set labels for padding tokens equal to -100, as -100 is the default ignore index that loss functions use.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | In transformers.LukeForEntitySpanClassification, the loss is calculated from labels of shape `(batch_size, entity_length)` with values in the range `[0, ..., config.num_labels - 1]`. I did not see in the source code that you mask out the loss for padded tokens, nor that you use a special label for padding. So how do you handle the loss for padded sequences?
Many thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12047/timeline | completed | null | null |
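A hedged illustration of the `-100` convention mentioned in the comments of the record above (not part of the original thread): PyTorch's `CrossEntropyLoss` ignores target positions equal to its `ignore_index`, which defaults to `-100`, so padded entity spans labeled `-100` contribute nothing to the loss. The shapes below are toy values, not taken from LUKE itself.

```python
import torch
import torch.nn as nn

# Toy shapes: logits (batch_size, entity_length, num_labels), labels (batch_size, entity_length).
logits = torch.randn(2, 4, 5)
labels = torch.tensor([
    [1, 3, -100, -100],  # -100 marks padded entity spans
    [0, 2, 4, -100],
])

loss_fct = nn.CrossEntropyLoss()  # ignore_index defaults to -100
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
print(loss)  # padded positions are excluded from the average
```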
https://api.github.com/repos/huggingface/transformers/issues/12046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12046/comments | https://api.github.com/repos/huggingface/transformers/issues/12046/events | https://github.com/huggingface/transformers/issues/12046 | 913,026,284 | MDU6SXNzdWU5MTMwMjYyODQ= | 12,046 | ImportError: cannot import name 'AutoTokenizer' from 'transformers' | {
"login": "Fushier",
"id": 48719664,
"node_id": "MDQ6VXNlcjQ4NzE5NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/48719664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fushier",
"html_url": "https://github.com/Fushier",
"followers_url": "https://api.github.com/users/Fushier/followers",
"following_url": "https://api.github.com/users/Fushier/following{/other_user}",
"gists_url": "https://api.github.com/users/Fushier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fushier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fushier/subscriptions",
"organizations_url": "https://api.github.com/users/Fushier/orgs",
"repos_url": "https://api.github.com/users/Fushier/repos",
"events_url": "https://api.github.com/users/Fushier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fushier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,626 | 1,626 | NONE | null | transformers: 4.6.1
tokenizers: 0.10.3
I installed transformers with
`conda install -c huggingface transformers`
but when I run `from transformers import AutoTokenizer` I get:
```
Traceback (most recent call last):
  File "D:/IIE/WorkSpace/Pycharm WorkSpace/HuggingfaceNER/tokenizers.py", line 1, in <module>
    from transformers import AutoTokenizer
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\__init__.py", line 48, in <module>
    from .data import (
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\__init__.py", line 6, in <module>
    from .processors import (
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\processors\__init__.py", line 5, in <module>
    from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\processors\glue.py", line 25, in <module>
    from ...tokenization_utils import PreTrainedTokenizer
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\tokenization_utils.py", line 26, in <module>
    from .tokenization_utils_base import (
  File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\tokenization_utils_base.py", line 31, in <module>
    from tokenizers import AddedToken
  File "D:\IIE\WorkSpace\Pycharm WorkSpace\HuggingfaceNER\tokenizers.py", line 1, in <module>
    from transformers import AutoTokenizer
ImportError: cannot import name 'AutoTokenizer' from 'transformers' (D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\__init__.py)
```
It even worked well yesterday, and I didn't upgrade anything...
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12046/timeline | completed | null | null |
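A hedged observation on the traceback in the record above (not stated in the original report): the import chain passes through a local file named `tokenizers.py` in the user's project, which shadows the installed `tokenizers` package and produces the circular import. A quick check one could run:

```python
import tokenizers

# If this prints a path inside your project instead of site-packages,
# a local tokenizers.py is shadowing the real package; renaming that file
# (and removing its stale .pyc) should restore `from transformers import AutoTokenizer`.
print(tokenizers.__file__)
```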
https://api.github.com/repos/huggingface/transformers/issues/12045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12045/comments | https://api.github.com/repos/huggingface/transformers/issues/12045/events | https://github.com/huggingface/transformers/pull/12045 | 912,977,819 | MDExOlB1bGxSZXF1ZXN0NjYzMDczNzk1 | 12,045 | [wip] [deps] data_collator fails with older numpy, update numpy>=1.20.0 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"OK, so as I was concerned some dependency fixes numpy at 1.19.5 so things fail:\r\n```\r\nERROR: Could not find a version that satisfies the requirement numpy>=1.20.0 (from transformers[all,quality]) (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5)\r\nERROR: No matching distribution found for numpy>=1.20.0\r\n\r\nExited with code exit status 1\r\n``` \r\n\r\nThis is so not user-friendly as it could have said which dependency causes this conflict.\r\n\r\n`pip check` comes to help:\r\n\r\n```\r\n$ pip check\r\nWARNING: Ignoring invalid distribution -orch (/mnt/nvme1/anaconda3/envs/py38-pt18/lib/python3.8/site-packages)\r\ntensorflow 2.5.0 has requirement numpy~=1.19.2, but you have numpy 1.20.0.\r\n```\r\n\r\nso `tensorflow 2.5.0` is the limiting culprit.",
"OK, that was a faulty pytorch build. Seems to work fine with the latest nightly or 1.9.0-rc."
] | 1,623 | 1,623 | 1,623 | CONTRIBUTOR | null | Please ignore for now, as this looks like a pytorch 1.9.0-rc problem; I filed an issue:
https://github.com/pytorch/pytorch/issues/59533
--------------------
With pytorch-1.9.0/nightly 1.9.0a0+git2a178d3,
the data collator used via `Trainer` fails with numpy==1.19.5 with:
```
RuntimeError: Could not infer dtype of numpy.float32
```
Additionally getting a warning:
```
../../../../../home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/package/_mock_zipreader.py:17
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/package/_mock_zipreader.py:17: UserWarning: Failed to initialize NumPy: module compiled against API version 0xe but this version of numpy is 0xd (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:67.)
_dtype_to_storage = {data_type(0).dtype: data_type for data_type in _storages}
```
Full trace:
```
$ pip install numpy==1.19.5
$ pytest tests/test_trainer.py::TrainerIntegrationTest::test_fp16_full_eval
====================================================================== test session starts ======================================================================
platform linux -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface, configfile: pytest.ini
plugins: monitor-1.6.0, flakefinder-1.0.0, forked-1.3.0, instafail-0.4.2, xdist-2.2.1
collected 1 item
tests/test_trainer.py F [100%]
=========================================================================== FAILURES ============================================================================
__________________________________________________________ TrainerIntegrationTest.test_fp16_full_eval ___________________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_fp16_full_eval>
def setUp(self):
super().setUp()
args = TrainingArguments(".")
self.n_epochs = args.num_train_epochs
self.batch_size = args.train_batch_size
trainer = get_regression_trainer(learning_rate=0.1)
> trainer.train()
tests/test_trainer.py:333:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/trainer.py:1237: in train
for step, inputs in enumerate(epoch_iterator):
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/dataloader.py:521: in __next__
data = self._next_data()
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/dataloader.py:561: in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:47: in fetch
return self.collate_fn(data)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
features = [{'input_x': -0.54438275, 'labels': 1.8582585}, {'input_x': 0.64768857, 'labels': 4.288176}, {'input_x': 1.5792128, 'l...abels': 0.12356561}, {'input_x': -0.46947438, 'labels': 2.0574687}, {'input_x': -0.46572974, 'labels': 2.1507308}, ...]
def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:
"""
Very simple data collator that simply collates batches of dict-like objects and performs special handling for
potential keys named:
- ``label``: handles a single value (int or float) per object
- ``label_ids``: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs
to the model. See glue and ner for example of how it's useful.
"""
# In this function we'll make the assumption that all `features` in the batch
# have the same attributes.
# So we will look at the first element as a proxy for what attributes exist
# on the whole batch.
if not isinstance(features[0], (dict, BatchEncoding)):
features = [vars(f) for f in features]
first = features[0]
batch = {}
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
if "label" in first and first["label"] is not None:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
dtype = torch.long if isinstance(label, int) else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
dtype = torch.long if type(first["label_ids"][0]) is int else torch.float
batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
# Handling of all other possible keys.
# Again, we will use the first element to figure out which key/values are not None for this model.
for k, v in first.items():
if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
if isinstance(v, torch.Tensor):
batch[k] = torch.stack([f[k] for f in features])
else:
> batch[k] = torch.tensor([f[k] for f in features])
E RuntimeError: Could not infer dtype of numpy.float32
src/transformers/data/data_collator.py:80: RuntimeError
```
The error goes away after installing the next release `numpy==1.20.0`.
Perhaps it can be fixed in the collator to support older numpy.
This PR is one way to approach it. I'm not sure whether we have other dependencies that require numpy<=1.20.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12045/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12045",
"html_url": "https://github.com/huggingface/transformers/pull/12045",
"diff_url": "https://github.com/huggingface/transformers/pull/12045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12045.patch",
"merged_at": null
} |
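A hedged sketch of the collator-side accommodation the PR body alludes to ("perhaps it can be fixed in the collator to support older numpy"), not the actual patch: with the affected numpy/torch builds, `torch.tensor` on a plain Python list of numpy scalars fails dtype inference, whereas converting the values to a single ndarray first may avoid it. The feature values below are made up for illustration.

```python
import numpy as np
import torch

features = [
    {"input_x": np.float32(-0.54), "labels": np.float32(1.86)},
    {"input_x": np.float32(0.65), "labels": np.float32(4.29)},
]

# Instead of torch.tensor([f["input_x"] for f in features]) -- which triggered
# "Could not infer dtype of numpy.float32" on the affected builds -- stack the
# numpy scalars into one ndarray and convert that.
batch = torch.tensor(np.stack([f["input_x"] for f in features]))
print(batch, batch.dtype)
```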