Dataset schema (column, dtype, length/class statistics):

| column | dtype | statistics |
| --- | --- | --- |
| url | string | lengths 62–66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0–234k |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
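The table above is the dataset schema: GitHub issue and pull-request records scraped from the `huggingface/transformers` repository, followed below by example rows. As a minimal sketch of how a dataset with this schema could be loaded and inspected with the `datasets` library (the hub identifier is a hypothetical placeholder, not a confirmed dataset name):

```python
from datasets import load_dataset

# Hypothetical identifier; substitute the actual hub name of this issues dump.
ds = load_dataset("some-user/transformers-github-issues", split="train")

# The features should mirror the schema table above.
print(ds.features["state"])  # string column with 2 distinct values
print(ds.features["id"])     # int64 GitHub issue/PR identifier

# Rows with a non-null `pull_request` dict are PRs; the rest are plain issues.
pulls = ds.filter(lambda row: row["pull_request"] is not None)
print(f"{len(pulls)} of {len(ds)} records are pull requests")
```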
https://api.github.com/repos/huggingface/transformers/issues/12745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12745/comments
https://api.github.com/repos/huggingface/transformers/issues/12745/events
https://github.com/huggingface/transformers/pull/12745
945,567,072
MDExOlB1bGxSZXF1ZXN0NjkwODc4NjAx
12,745
Replace specific tokenizer in log message by AutoTokenizer
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for accepting my proposal!" ]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? As mentioned by @sgugger in PR #12619, I propose with this PR to harmonize the messages in the logs to encourage users to use `AutoTokenizer`. I've checked that all these tokenizers appear in `TOKENIZER_MAPPING` in the `tokenization_auto.py` file. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger since this PR is based on one of your comments, I think you would be interested. @europeanplaice, I'm just tagging you for your information (I didn't include the change you proposed in your PR).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12745", "html_url": "https://github.com/huggingface/transformers/pull/12745", "diff_url": "https://github.com/huggingface/transformers/pull/12745.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12745.patch", "merged_at": 1626368388000 }
https://api.github.com/repos/huggingface/transformers/issues/12744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12744/comments
https://api.github.com/repos/huggingface/transformers/issues/12744/events
https://github.com/huggingface/transformers/issues/12744
945,435,028
MDU6SXNzdWU5NDU0MzUwMjg=
12,744
Blenderbot output logits dimensions mismatch
{ "login": "naderabdalghani", "id": 33901325, "node_id": "MDQ6VXNlcjMzOTAxMzI1", "avatar_url": "https://avatars.githubusercontent.com/u/33901325?v=4", "gravatar_id": "", "url": "https://api.github.com/users/naderabdalghani", "html_url": "https://github.com/naderabdalghani", "followers_url": "https://api.github.com/users/naderabdalghani/followers", "following_url": "https://api.github.com/users/naderabdalghani/following{/other_user}", "gists_url": "https://api.github.com/users/naderabdalghani/gists{/gist_id}", "starred_url": "https://api.github.com/users/naderabdalghani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/naderabdalghani/subscriptions", "organizations_url": "https://api.github.com/users/naderabdalghani/orgs", "repos_url": "https://api.github.com/users/naderabdalghani/repos", "events_url": "https://api.github.com/users/naderabdalghani/events{/privacy}", "received_events_url": "https://api.github.com/users/naderabdalghani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Seems like a documentation thing. After a little bit of digging, I found out that the output logits have the dimensions of `(batch_size, decoder_sequence_length, config.vocab_size)` not `(batch_size, sequence_length, config.vocab_size)`.", "@naderabdalghani thanks for investigating! Also feel free to open a PR to clarify the docs if you'd like :-)" ]
1,626
1,627
1,626
NONE
null
- `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Models: Blenderbot — @patrickvonplaten, @patil-suraj Documentation: @sgugger ## Information The model I am using is `facebook/blenderbot-400M-distill`. The problem arises when using: * [x] my own modified scripts ## To reproduce Steps to reproduce the behaviour: Run the following code: ```python from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration import torch DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = BlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-400M-distill').to(DEVICE) tokenizer = BlenderbotTokenizer.from_pretrained('facebook/blenderbot-400M-distill') input_ids = tokenizer.encode("Hello there! My name is Nader", return_tensors="pt").to(DEVICE) decoder_input_ids = tokenizer.encode("<s>", return_tensors="pt").to(DEVICE) outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) logits = outputs.logits print(input_ids.shape) print(logits.shape) ``` ## Current output ``` torch.Size([1, 9]) torch.Size([1, 3, 8008]) ``` ## Expected behaviour ``` torch.Size([1, 9]) torch.Size([1, 9, 8008]) ``` According to [the documentation of the `forward()` method](https://huggingface.co/transformers/model_doc/blenderbot.html?highlight=forward#transformers.BlenderbotForConditionalGeneration.forward), it should return a `Seq2SeqLMOutput` object with a member `logits` having the following properties: > logits (`torch.FloatTensor` of shape (`batch_size`, `sequence_length`, `config.vocab_size`)) Considering that the `input_ids` in this case has dimensions of `(batch_size, sequence_length)`, which maps to `(1, 9)` in the code shown above, why doesn't the model output logits with dimensions of `(1, 9, 8008)`? Please tell me what this `3` signifies if I'm missing something here. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12744/timeline
completed
null
null
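The comment thread above resolves the issue: the middle logits dimension tracks the decoder input length, i.e. the shape is `(batch_size, decoder_sequence_length, config.vocab_size)`. A minimal sketch of that relationship, reusing the checkpoint from the issue (the exact decoder length depends on how the tokenizer expands `"<s>"` into special tokens):

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

input_ids = tokenizer.encode("Hello there! My name is Nader", return_tensors="pt")
decoder_input_ids = tokenizer.encode("<s>", return_tensors="pt")

logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits

# The middle logits dimension follows the decoder input, not the encoder input.
assert logits.shape[1] == decoder_input_ids.shape[1]
print(decoder_input_ids.shape, logits.shape)
```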
https://api.github.com/repos/huggingface/transformers/issues/12743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12743/comments
https://api.github.com/repos/huggingface/transformers/issues/12743/events
https://github.com/huggingface/transformers/pull/12743
945,379,483
MDExOlB1bGxSZXF1ZXN0NjkwNzE3MDcw
12,743
[Debug] wav2vec2 pretraining
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,631
1,631
MEMBER
null
# What does this PR do? Check https://huggingface.co/patrickvonplaten/debug_repo ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12743/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12743", "html_url": "https://github.com/huggingface/transformers/pull/12743", "diff_url": "https://github.com/huggingface/transformers/pull/12743.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12743.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12742/comments
https://api.github.com/repos/huggingface/transformers/issues/12742/events
https://github.com/huggingface/transformers/pull/12742
945,371,750
MDExOlB1bGxSZXF1ZXN0NjkwNzEwNDU1
12,742
Patch T5 device test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot!" ]
1,626
1,626
1,626
MEMBER
null
The input IDs were not cast to the correct device.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12742/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12742", "html_url": "https://github.com/huggingface/transformers/pull/12742", "diff_url": "https://github.com/huggingface/transformers/pull/12742.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12742.patch", "merged_at": 1626363617000 }
https://api.github.com/repos/huggingface/transformers/issues/12741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12741/comments
https://api.github.com/repos/huggingface/transformers/issues/12741/events
https://github.com/huggingface/transformers/issues/12741
945,368,269
MDU6SXNzdWU5NDUzNjgyNjk=
12,741
Change create_model_card to use best eval_results when args.load_best_model_at_end==True
{ "login": "nadahlberg", "id": 58701810, "node_id": "MDQ6VXNlcjU4NzAxODEw", "avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nadahlberg", "html_url": "https://github.com/nadahlberg", "followers_url": "https://api.github.com/users/nadahlberg/followers", "following_url": "https://api.github.com/users/nadahlberg/following{/other_user}", "gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions", "organizations_url": "https://api.github.com/users/nadahlberg/orgs", "repos_url": "https://api.github.com/users/nadahlberg/repos", "events_url": "https://api.github.com/users/nadahlberg/events{/privacy}", "received_events_url": "https://api.github.com/users/nadahlberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "I don't think there is an easy way to get that loss and metrics. However, note that if you run a last `trainer.evaluate()` (as is done in all the example scripts) the loss and metrics reported will be the ones from the final model, so I would suggest doing this.", "@sgugger that makes sense. Thanks!" ]
1,626
1,626
1,626
CONTRIBUTOR
null
# 🚀 Feature request When I use Trainer.push_to_hub, the model card that gets generated uses my most recent checkpoint's eval_results. However, if I include load_best_model_at_end=True in my TrainingArguments, then the model that is being pushed can often be from an earlier checkpoint (right?). The suggestion is to change create_model_card (and parse_log_history) to use the best loss when appropriate. ## Motivation So that the reported evaluation results on the model card match the correct checkpoint of the model pushed to the hub. ## Contribution Happy to submit a PR, with some guidance on where to make the changes. Would it make sense to pass the trainer as an optional argument to parse_log_history and then use the training args load_best_model_at_end, metric_for_best_model, and greater_is_better to return the best loss as the eval_results if appropriate?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12741/timeline
completed
null
null
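Following the maintainer's suggestion in the comments above, running a final evaluation after training makes the reported metrics describe the model that is actually pushed. A minimal sketch of that pattern, assuming a `Trainer` built with `load_best_model_at_end=True` (the `trainer` variable is illustrative):

```python
# `trainer` was constructed with TrainingArguments(load_best_model_at_end=True, ...)
trainer.train()

# With load_best_model_at_end=True, the best checkpoint is reloaded after train(),
# so this evaluation reflects the model that push_to_hub() will upload.
metrics = trainer.evaluate()
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)

trainer.push_to_hub()
```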
https://api.github.com/repos/huggingface/transformers/issues/12740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12740/comments
https://api.github.com/repos/huggingface/transformers/issues/12740/events
https://github.com/huggingface/transformers/pull/12740
945,367,485
MDExOlB1bGxSZXF1ZXN0NjkwNzA2NzQw
12,740
Skip test while the model is not available
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Skip test while the model is not available on the huggingface hub.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12740/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12740", "html_url": "https://github.com/huggingface/transformers/pull/12740", "diff_url": "https://github.com/huggingface/transformers/pull/12740.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12740.patch", "merged_at": 1626354852000 }
https://api.github.com/repos/huggingface/transformers/issues/12739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12739/comments
https://api.github.com/repos/huggingface/transformers/issues/12739/events
https://github.com/huggingface/transformers/pull/12739
945,360,567
MDExOlB1bGxSZXF1ZXN0NjkwNzAwNzA5
12,739
Skip test while the model is not available
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Skip the test while the model cannot be accessed through the hub.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12739/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12739", "html_url": "https://github.com/huggingface/transformers/pull/12739", "diff_url": "https://github.com/huggingface/transformers/pull/12739.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12739.patch", "merged_at": 1626354408000 }
https://api.github.com/repos/huggingface/transformers/issues/12738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12738/comments
https://api.github.com/repos/huggingface/transformers/issues/12738/events
https://github.com/huggingface/transformers/issues/12738
945,353,143
MDU6SXNzdWU5NDUzNTMxNDM=
12,738
Doctest Integration
{ "login": "will-rice", "id": 25072137, "node_id": "MDQ6VXNlcjI1MDcyMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/will-rice", "html_url": "https://github.com/will-rice", "followers_url": "https://api.github.com/users/will-rice/followers", "following_url": "https://api.github.com/users/will-rice/following{/other_user}", "gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}", "starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/will-rice/subscriptions", "organizations_url": "https://api.github.com/users/will-rice/orgs", "repos_url": "https://api.github.com/users/will-rice/repos", "events_url": "https://api.github.com/users/will-rice/events{/privacy}", "received_events_url": "https://api.github.com/users/will-rice/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @will-rice! Most of our tests are setup to work with doctest, and we had doctest coverage a few months back. Unfortunately, as the number of model grows, the issue is less of a documentation issue and more of an infrastructure issue :) We're working on setting them back up as we speak.", "ok awesome! I'll go ahead and close this then." ]
1,626
1,626
1,626
CONTRIBUTOR
null
# 🚀 Feature request It would be nice to add doctest to ensure new additions have working examples in the docstrings and that existing examples do not become outdated with API changes. The feature is already in [pytest](https://docs.pytest.org/en/6.2.x/doctest.html) so adding it should be straightforward. ## Motivation This will reduce the number of PRs for typos/errors in docstring examples which should allow reviewers to focus on other PRs. ## Your contribution I wouldn't mind working on this if everyone thinks it is a good idea.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12738/timeline
completed
null
null
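For reference, the pytest doctest integration mentioned in the issue above executes examples embedded in docstrings. A minimal, self-contained sketch (the module and function are illustrative):

```python
# doc_example.py -- run with: pytest --doctest-modules doc_example.py
def add(a: int, b: int) -> int:
    """Add two integers.

    >>> add(2, 3)
    5
    """
    return a + b
```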
https://api.github.com/repos/huggingface/transformers/issues/12737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12737/comments
https://api.github.com/repos/huggingface/transformers/issues/12737/events
https://github.com/huggingface/transformers/pull/12737
945,347,635
MDExOlB1bGxSZXF1ZXN0NjkwNjg5NTI2
12,737
Fix MBart failing test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for fixing this test - this was actually failing since a long time and no-one fixed it" ]
1,626
1,626
1,626
MEMBER
null
Fixes the failing test by adjusting the expected sentence. The sentence changes from ``` [...] will only worsen the violence and misery of millions. ``` to ``` [...] only make violence and misery worse for millions of people. ``` which still seems grammatically correct. @patrickvonplaten please let me know if you think this is a real issue or not.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12737", "html_url": "https://github.com/huggingface/transformers/pull/12737", "diff_url": "https://github.com/huggingface/transformers/pull/12737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12737.patch", "merged_at": 1626363575000 }
https://api.github.com/repos/huggingface/transformers/issues/12736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12736/comments
https://api.github.com/repos/huggingface/transformers/issues/12736/events
https://github.com/huggingface/transformers/pull/12736
945,329,412
MDExOlB1bGxSZXF1ZXN0NjkwNjczNzg5
12,736
LXMERT integration test typo
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Patches a typo in the LXMERT integration test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12736/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12736", "html_url": "https://github.com/huggingface/transformers/pull/12736", "diff_url": "https://github.com/huggingface/transformers/pull/12736.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12736.patch", "merged_at": 1626352190000 }
https://api.github.com/repos/huggingface/transformers/issues/12735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12735/comments
https://api.github.com/repos/huggingface/transformers/issues/12735/events
https://github.com/huggingface/transformers/pull/12735
945,326,581
MDExOlB1bGxSZXF1ZXN0NjkwNjcxMzQx
12,735
Fix led torchscript
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
LED can't run on `torchscript`: ``` Failure Traceback (most recent call last): File "/home/xxx/transformers/tests/test_modeling_common.py", line 538, in _create_and_check_torchscript traced_model = torch.jit.trace( File "/home/xxx/transformers/.env/lib/python3.8/site-packages/torch/jit/_trace.py", line 735, in trace return trace_module( File "/home/xxx/transformers/.env/lib/python3.8/site-packages/torch/jit/_trace.py", line 952, in trace_module module._c._create_method_from_trace( RuntimeError: 0INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool, During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxx/transformers/tests/test_modeling_led.py", line 284, in test_torchscript self._create_and_check_torchscript(config, inputs_dict) File "/home/xxx/transformers/tests/test_modeling_common.py", line 545, in _create_and_check_torchscript self.fail("Couldn't trace module.") AssertionError: Couldn't trace module. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12735/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12735", "html_url": "https://github.com/huggingface/transformers/pull/12735", "diff_url": "https://github.com/huggingface/transformers/pull/12735.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12735.patch", "merged_at": 1626364130000 }
https://api.github.com/repos/huggingface/transformers/issues/12734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12734/comments
https://api.github.com/repos/huggingface/transformers/issues/12734/events
https://github.com/huggingface/transformers/pull/12734
945,324,110
MDExOlB1bGxSZXF1ZXN0NjkwNjY5MTg2
12,734
Fix DETR integration test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM!" ]
1,626
1,626
1,626
MEMBER
null
The integration test must be adjusted to reflect DETR's true margin of error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12734", "html_url": "https://github.com/huggingface/transformers/pull/12734", "diff_url": "https://github.com/huggingface/transformers/pull/12734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12734.patch", "merged_at": 1626364117000 }
https://api.github.com/repos/huggingface/transformers/issues/12733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12733/comments
https://api.github.com/repos/huggingface/transformers/issues/12733/events
https://github.com/huggingface/transformers/pull/12733
945,323,717
MDExOlB1bGxSZXF1ZXN0NjkwNjY4ODM2
12,733
Fix AutoModel tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No worries!" ]
1,626
1,626
1,626
MEMBER
null
Auto model tests were not kept up to date. This patches the following two tests: ``` FAILED tests/test_modeling_auto.py::AutoModelTest::test_model_from_pretrained FAILED tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12733", "html_url": "https://github.com/huggingface/transformers/pull/12733", "diff_url": "https://github.com/huggingface/transformers/pull/12733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12733.patch", "merged_at": 1626354372000 }
https://api.github.com/repos/huggingface/transformers/issues/12732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12732/comments
https://api.github.com/repos/huggingface/transformers/issues/12732/events
https://github.com/huggingface/transformers/issues/12732
945,277,141
MDU6SXNzdWU5NDUyNzcxNDE=
12,732
Not able to load the custom model after training in Hugging Face
{ "login": "vivekvkashyap", "id": 58116635, "node_id": "MDQ6VXNlcjU4MTE2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/58116635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vivekvkashyap", "html_url": "https://github.com/vivekvkashyap", "followers_url": "https://api.github.com/users/vivekvkashyap/followers", "following_url": "https://api.github.com/users/vivekvkashyap/following{/other_user}", "gists_url": "https://api.github.com/users/vivekvkashyap/gists{/gist_id}", "starred_url": "https://api.github.com/users/vivekvkashyap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vivekvkashyap/subscriptions", "organizations_url": "https://api.github.com/users/vivekvkashyap/orgs", "repos_url": "https://api.github.com/users/vivekvkashyap/repos", "events_url": "https://api.github.com/users/vivekvkashyap/events{/privacy}", "received_events_url": "https://api.github.com/users/vivekvkashyap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
NONE
null
## Environment info - `transformers` version: - Platform: colab ### Who can help @patil-suraj @patrickvonplaten Models: Used FlaxGPT2Module as the base class and built FlaxGPT2ForMultipleChoice. After training is done, loading the saved weights from the Hugging Face hub raises an error. Model hub: - Hugging Face Hub: https://huggingface.co/Vivek/gpt2-common-sense-reasoning Error message: unpack(b) received extra data. ## Information Model I am using (Bert, XLNet ...): FlaxGPT2ForMultipleChoice (custom model) The problem arises when: 1. I try to load the saved weights from the Hugging Face hub, which raises an error. The task I am working on is: * Dataset: COSMOS QA Expected behaviour: To be able to load the weights and the configuration without any error. Colab Notebook: https://colab.research.google.com/drive/1C-M1GLMk7jiomXIpngbLZvCNvEJMfJ1K?usp=sharing ## To reproduce ```python new_model = FlaxGPT2ForMultipleChoice.from_pretrained('/content/gpt2-common-sense-reasoning', input_shape=(1, 4, 1), config=config) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12731/comments
https://api.github.com/repos/huggingface/transformers/issues/12731/events
https://github.com/huggingface/transformers/pull/12731
945,174,998
MDExOlB1bGxSZXF1ZXN0NjkwNTQxOTc1
12,731
Remove framework mention
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Remove mention of framework for `transformers.onnx`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12731", "html_url": "https://github.com/huggingface/transformers/pull/12731", "diff_url": "https://github.com/huggingface/transformers/pull/12731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12731.patch", "merged_at": 1626364142000 }
https://api.github.com/repos/huggingface/transformers/issues/12730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12730/comments
https://api.github.com/repos/huggingface/transformers/issues/12730/events
https://github.com/huggingface/transformers/issues/12730
945,086,910
MDU6SXNzdWU5NDUwODY5MTA=
12,730
Adding a Wav2Vec2ForSpeechClassification class
{ "login": "ehcalabres", "id": 10390523, "node_id": "MDQ6VXNlcjEwMzkwNTIz", "avatar_url": "https://avatars.githubusercontent.com/u/10390523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ehcalabres", "html_url": "https://github.com/ehcalabres", "followers_url": "https://api.github.com/users/ehcalabres/followers", "following_url": "https://api.github.com/users/ehcalabres/following{/other_user}", "gists_url": "https://api.github.com/users/ehcalabres/gists{/gist_id}", "starred_url": "https://api.github.com/users/ehcalabres/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ehcalabres/subscriptions", "organizations_url": "https://api.github.com/users/ehcalabres/orgs", "repos_url": "https://api.github.com/users/ehcalabres/repos", "events_url": "https://api.github.com/users/ehcalabres/events{/privacy}", "received_events_url": "https://api.github.com/users/ehcalabres/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @ehcalabres,\r\n\r\nI'm only seeing your issue now sadly :-/ Super sorry to not have answered sooner. @anton-l is working on an official `Wav2Vec2-` and `HubertForSequenceClassification` at the moment, here: https://github.com/huggingface/transformers/pull/13153 which should serve your needs then :-) \r\n\r\nIt would be great if you could take a look at https://github.com/huggingface/transformers/pull/13153 to see whether this design/architecture fits your needs", "Hey @patrickvonplaten, @anton-l,\r\n\r\nThanks a lot for your answer! As I'm seeing on the issue #13153 , it seems like it's pretty much the same as I was proposing here, so I think it'll do the job for this kind of audio classification tasks. I'll try it when it comes out but it seems to be fine by the moment. Great!\r\n\r\nOnly one thing, I've work mostly in PyTorch but as I was checking the code I've seen that there's no TensorFlow version of these models (neither for Hubert or Wav2Vec2), do you think it's relevant to implement them? If so maybe I can help with that, but I don't know if it's something critical.\r\n\r\nAnyway, is there anything else I can do to help you with this? Just let me know.\r\n\r\nThanks again!" ]
1,626
1,630
1,630
NONE
null
# Adding a Wav2Vec2ForSpeechClassification class 🚀 Right now, using any of the Wav2Vec 2.0 models available on the 🤗hub for a fine-tuning process that solves a __speech classification task__ implies creating a new class that inherits its behaviour from the Wav2Vec2PreTrainedModel class. Although creating these types of models can be done with a bit of research, I find it too complicated to just use a fine-tuned model once it is shared on the 🤗hub, because you need access to the code of the model class in order to instantiate it and retrieve the model with the `from_pretrained()` method (and that code may or may not be available at the time). I think that adding a class like `Wav2Vec2ForSpeechClassification` to the 🤗transformers library (i.e. the same way it works for the `BertForSequenceClassification` models and similar ones) would be a very nice feature: it would not only make it possible to fine-tune Wav2Vec 2.0 for classification tasks but also simplify and accelerate the way one can use a shared model. ## Motivation Speech has always been an awesome field of research, both in the way a user interacts with a physical system and vice versa. Taking this into account, and with the great news of the new Wav2Vec 2.0 model being integrated into the 🤗transformers library 🎉, I started a research project on Speech Emotion Recognition (SER) with the idea of fine-tuning a Wav2Vec 2.0 model on this type of emotional dataset. The results I've obtained are very promising and the model seems to work extremely well, so I decided to put the fine-tuned model on the [🤗hub](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition) (wip). Additionally, I saw on the 🤗 discussion forums a [topic](https://discuss.huggingface.co/t/using-wav2vec-in-speech-classification-regression-problems/6361) about this same SER task, with its corresponding model on the [🤗hub](https://huggingface.co/m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition), which has the same issue when importing it. With all this, I think the number of use cases of the Wav2Vec2 model for speech classification tasks is huge, and having a feature like this implemented would greatly simplify the way other developers and researchers can work with this type of pretrained model. ## Your contribution I can start working on a new PR to overcome this situation by implementing the `Wav2Vec2ForSpeechClassification` class mentioned above. I already have the code working, and in fact it's pretty similar to the other NLP models that include the SequenceClassification feature. The idea behind this is to have a much more simplified and generalized way to use and train these models, with the end result being this snippet for straightforward use: ```python from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSpeechClassification processor = Wav2Vec2FeatureExtractor.from_pretrained("ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition") model = Wav2Vec2ForSpeechClassification.from_pretrained("ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition") ``` Let me know if this feature fits the needs of the library in terms of simplicity and integration, and I will start a new PR with these changes. Also let me know whether it is useful and covers an adequate number of use cases, making it worth implementing. Thank you all for your amazing work 🥇
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12730/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12730/timeline
completed
null
null
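As noted in the comments above, the requested head landed via PR #13153; `transformers` exposes it as `Wav2Vec2ForSequenceClassification` rather than the proposed `Wav2Vec2ForSpeechClassification`. A minimal usage sketch, where the checkpoint name is the one from the issue and is assumed, not confirmed, to ship a compatible classification head:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

name = "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"  # checkpoint from the issue
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2ForSequenceClassification.from_pretrained(name)

# One second of silence at 16 kHz as stand-in audio.
audio = torch.zeros(16000).numpy()
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```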
https://api.github.com/repos/huggingface/transformers/issues/12729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12729/comments
https://api.github.com/repos/huggingface/transformers/issues/12729/events
https://github.com/huggingface/transformers/issues/12729
945,075,578
MDU6SXNzdWU5NDUwNzU1Nzg=
12,729
Checkpoints are saved multiple times during hyperparameter tuning / How to get the best model?
{ "login": "sven-h", "id": 8777506, "node_id": "MDQ6VXNlcjg3Nzc1MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8777506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sven-h", "html_url": "https://github.com/sven-h", "followers_url": "https://api.github.com/users/sven-h/followers", "following_url": "https://api.github.com/users/sven-h/following{/other_user}", "gists_url": "https://api.github.com/users/sven-h/gists{/gist_id}", "starred_url": "https://api.github.com/users/sven-h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sven-h/subscriptions", "organizations_url": "https://api.github.com/users/sven-h/orgs", "repos_url": "https://api.github.com/users/sven-h/repos", "events_url": "https://api.github.com/users/sven-h/events{/privacy}", "received_events_url": "https://api.github.com/users/sven-h/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Linux-4.19.0-16-amd64-x86_64-with-glibc2.10 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Probably @amogkam because issue is related to ray tune. ## Information Model I am using (Bert, XLNet ...): distilbert-base-uncased The problem arises when using: * my own modified scripts: The tasks I am working on is: * an official GLUE/SQUaD task: MRPC My ultimate goal is to tune the hyperparameters of a model and save the best model in a folder. Therefore I created the script below. It runs a simple hyperparameter search. After the run, the folder `trainer_output_dir` is empty and the `ray_local_dir` folder has following structure: ```` ray_local_dir/ ├── tune_transformer_pbt/ │ ├── _objective_081f1_00000_0_learning_rate=2.4982e-05_2021-07-13_14-44-15 │ │ ├── checkpoint_001377 │ │ │ ├── checkpoint-1377 │ │ ├── trainer_output_dir │ │ │ ├── run-081f1_00000 │ │ │ │ ├── checkpoint-459 │ │ │ │ ├── checkpoint-918 │ │ │ │ ├── checkpoint-1377 ```` When the attribute `save_strategy` (of the `TrainingArguments`) is set to `epoch` the folder structure like above is generated. When the attribute `save_strategy` is set to `no`, then no checkpoints are written at all. In folder `checkpoint_001377` the checkpoints are removed by ray tune but the checkpoints from the trainer_output_dir are not removed. The main reason is that the checkpoints are generated in two functions: 1. in function [`_tune_save_checkpoint`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L892) which is called only when `self.control.should_save` is true 2. in function [`_save_checkpoint`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1436) which is also called only when `self.control.should_save` is true I care about these checkpoint folders, because this is the only way to load the best model. [The example from ray tune](https://docs.ray.io/en/master/tune/examples/pbt_transformers.html) sets `load_best_model_at_end=True` but this has no effect and the trainer has no model which could be saved. Thus I decided to load the model from the checkpoint folder. The info in the BestRun object returned by `hyperparameter_search` is: ``` BestRun(run_id='081f1_00000', objective=0.5750986933708191, hyperparameters={'learning_rate': 2.49816047538945e-05}) ``` It contains only the `run_id` and I search for the folder `_objective_081f1_00000.*` (glob pattern) in `tune_transformer_pbt` to get the right trials folder and then search for one folder which starts with `checkpoint`. Is this the best way to load the model? At the end of the notebook [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb), it is stated: `To reproduce the best training, just set the hyperparameters in your TrainingArgument before creating a Trainer:` but with PBT the hyperparameters might change during one trial. Furthermore, the model is already trained and it should not be necessary to train it once again. 
Another question about the same topic: Ray tune allows setting the scope for the function [`get_best_trial`](https://docs.ray.io/en/master/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) of the `ExperimentAnalysis` object, which defines whether the last result of a trial should be used to return the best one or whether all intermediate evaluations should also be taken into account. From what I can see in the [trainer class](https://github.com/huggingface/transformers/blob/master/src/transformers/integrations.py#L255), this parameter cannot be modified. Since only one checkpoint per trial is saved (`keep_checkpoints_num=1`) and the parameter `checkpoint_score_attr` lets us define the measure to compare, it should be possible to store only the best checkpoint (according to the specified measure). But the default value of `scope` is set to compare only the measure of the last result, and it could happen that a trial's last measure is better even though another trial had a much better result at some point during training (and exactly this checkpoint is kept on disk by `checkpoint_score_attr`). How do I need to configure the hyperparameter search and training arguments such that I get the best model from all (intermediate) evaluations? ## To reproduce Steps to reproduce the behavior: Execute the following script: ``` from datasets import load_dataset, load_metric from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from ray import tune model_name = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) dataset = load_dataset('glue', 'mrpc') def encode(examples): outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) def model_init(): return AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=True) def compute_metrics(eval_pred): metric = load_metric('glue', 'mrpc') predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) training_args = TrainingArguments( output_dir='./trainer_output_dir', skip_memory_metrics=True, # see https://github.com/huggingface/transformers/issues/11249 disable_tqdm=True, do_eval=True, evaluation_strategy='epoch', save_strategy='epoch', # 'no', # TODO: change here to see the different behaviour logging_dir='./logs' ) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) trainer.hyperparameter_search( direction="minimize", compute_objective=lambda x: x['eval_loss'], backend="ray", n_trials=1, hp_space = lambda _: { #toy example "learning_rate": tune.uniform(1e-5, 5e-5), }, scheduler=tune.schedulers.PopulationBasedTraining( time_attr="training_iteration", perturbation_interval=1, metric="objective", mode='min', hyperparam_mutations={ "learning_rate": tune.uniform(1e-5, 5e-5), } ), keep_checkpoints_num=1, checkpoint_score_attr="training_iteration", resources_per_trial={"cpu": 1, "gpu": 1}, local_dir="./ray_local_dir/", name="tune_transformer_pbt", ) ``` ## Expected behavior The checkpoints in the `trainer_output_dir` should not be written to disk.
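A minimal sketch of the checkpoint lookup described above (the paths, glob patterns, and `best_run_id` value mirror the issue body and are illustrative assumptions, not an official API):

```python
import glob
import os

from transformers import AutoModelForSequenceClassification

best_run_id = "081f1_00000"  # BestRun.run_id returned by hyperparameter_search

# Find the trial folder whose name embeds the best run id...
trial_dirs = glob.glob(
    os.path.join("ray_local_dir", "tune_transformer_pbt", f"_objective_{best_run_id}*")
)
assert len(trial_dirs) == 1, "expected exactly one matching trial folder"

# ...then the single checkpoint ray tune kept (keep_checkpoints_num=1).
checkpoint_dirs = glob.glob(os.path.join(trial_dirs[0], "checkpoint_*", "checkpoint-*"))
model = AutoModelForSequenceClassification.from_pretrained(sorted(checkpoint_dirs)[-1])
```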
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12729/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12728/comments
https://api.github.com/repos/huggingface/transformers/issues/12728/events
https://github.com/huggingface/transformers/issues/12728
944,978,882
MDU6SXNzdWU5NDQ5Nzg4ODI=
12,728
How can I generate sentencepiece file or vocabulary from tokenizers?
{ "login": "darwinharianto", "id": 44696192, "node_id": "MDQ6VXNlcjQ0Njk2MTky", "avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darwinharianto", "html_url": "https://github.com/darwinharianto", "followers_url": "https://api.github.com/users/darwinharianto/followers", "following_url": "https://api.github.com/users/darwinharianto/following{/other_user}", "gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}", "starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions", "organizations_url": "https://api.github.com/users/darwinharianto/orgs", "repos_url": "https://api.github.com/users/darwinharianto/repos", "events_url": "https://api.github.com/users/darwinharianto/events{/privacy}", "received_events_url": "https://api.github.com/users/darwinharianto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you try `save_pretrained` instead?", "It throws\r\n```\r\nTraceback (most recent call last):\r\n File \"~/pretrain.py\", line 26, in <module>\r\n tokenizer.save_pretrained('./test_Spm')\r\n File ~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1958, in save_pretrained\r\n save_files = self._save_pretrained(\r\n File \"~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 555, in _save_pretrained\r\n vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)\r\n File \"~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/models/xlnet/tokenization_xlnet_fast.py\", line 232, in save_vocabulary\r\n if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\r\n File \"~/anaconda3/envs/deltalake/lib/python3.9/posixpath.py\", line 374, in abspath\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n```\r\n\r\nIt looked for vocab file (spm), but since I initiated it with tokenizer_object, there is no vocab_file", "Could you share your `unigram.json` file or mention how you obtained it so that I can reproduce the issue? Thank you!", "Here it is\r\n\r\n[unigram.json.zip](https://github.com/huggingface/transformers/files/6839324/unigram.json.zip)\r\n\r\n\r\nEdit:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, XLNetTokenizerFast\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('xlnet-base-cased')\r\nprint(tokenizer.save_pretrained(\"./dump\"))\r\ntokenizer = XLNetTokenizerFast(tokenizer_file='./dump/tokenizer.json')\r\nprint(tokenizer.save_pretrained(\"./dump\"))\r\n```\r\nAbove Throws an error\r\n\r\n```\r\n\r\nfrom transformers import AutoTokenizer, XLNetTokenizerFast, BertTokenizerFast\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\r\nprint(tokenizer.save_pretrained(\"./dump\"))\r\ntokenizer = BertTokenizerFast(tokenizer_file='./dump/tokenizer.json')\r\nprint(tokenizer.save_pretrained(\"./dump\"))\r\n\r\n```\r\nAbove does not throw an error", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Sorry, is there any progress on this?", "@SaulLu can you give this one a look?", "Thank you very much for reporting this problem @darwinharianto . \r\n\r\nI have the impression that it linked to the problem reported in [this issue](https://github.com/huggingface/transformers/issues/12762) and that I had started to work on in this [PR](https://github.com/huggingface/transformers/pull/12806). As the cleanest looking fix requires quite a bit of work, I had put it on hold. I'll try to work on it again at the beginning of the week.", "@darwinharianto, to answer the question in the issue title, at the moment it is not possible to transform a fast tokenizer initialized only with a `tokenizer_object` into an spm file compatible with the `SentencePiece` library. The conversion is only supported in the opposite direction.\r\n\r\nWhy did you want the vocabulary? 
Because the vocabulary can be found in the `tokenizer.json` file or by doing: \r\n```\r\ntok.get_vocab()\r\n```\r\n", "Since I tried to make my own XLNet tokenizer, I wanted to check if the saved format for the vocabulary is the same as in the published models.\r\nI thought the fastest way would be comparing the saved vocab file from the huggingface model hub with mine.\r\nJust for a sanity check.", "Duly noted! If it's to do a sanity check, would it be ok to compare the files of the fast version of the tokenizers (in particular the `tokenizer.json` file)? \r\n\r\nTo retrieve these files for the new tokenizer `tok` you made, the following command should work: \r\n```\r\ntok.save_pretrained(\"./dump\", legacy_format=False)\r\n```\r\n\r\nFor information, the vocabulary will be visible in the `tokenizer.json` file.", "Thanks! It works.\r\n\r\nOne more question, I can see this\r\n```\r\n \"type\": \"Precompiled\",\r\n \"precompiled_charsmap\": \r\n```\r\nunder normalizers, when I tried to save a pretrained tokenizer.\r\n\r\nMy custom tokenizer doesn't have this attribute. Is this normal?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I'm sorry, I realize that I never answered your last question. \r\n\r\nThis type of `Precompiled` normalizer is only used to recover the normalization operation which would be contained in a file generated by the sentencepiece library. If you created your tokenizer with the tokenizers library, it is perfectly normal that you do not have this type of normalization. Nevertheless, if you want to have an equivalent normalization for your tokenizer, it is generally possible to build it with the tokenizers library, but it requires knowing exactly which normalizations you want to apply. :slightly_smiling_face: ", "> @darwinharianto, to answer the question in the issue title, at the moment it is not possible to transform a fast tokenizer initialized only with a `tokenizer_object` into an spm file compatible with the `SentencePiece` library. The conversion is only supported in the opposite direction.\r\n> \r\n> Why did you want the vocabulary? Because the vocabulary can be found in the `tokenizer.json` file or by doing:\r\n> \r\n> ```\r\n> tok.get_vocab()\r\n> ```\r\n\r\nHi @SaulLu,\r\n\r\nSorry to open this old thread; I noticed that you mentioned transferring from an spm tokenizer to a huggingface one is easy, but I could not find any function which does that for me. I would be grateful if you could share any piece of code to help me with that.\r\n(Just to give you a quick background, I trained an SPM tokenizer and would like to use it in huggingface, but I have the .vocab and .model for it, and huggingface expects a .json file) \r\n\r\nThank you,\r\nSoheila", "Hello @SoheilaSamiee, I'm encountering a similar problem. If you've found a solution, could you please share it with me? Thank you!", "Hi, @nandinimundra, questions like these are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\ncc @ArthurZucker ", "We indeed don't seem to have a good piece of documentation on how to properly convert a `sentencepiece` based model to `transformers` / `tokenizer` format. 
I'll try to create one in the coming month! 🤗 ", "Thank you @ArthurZucker. The problem was resolved by using https://github.com/huggingface/tokenizers/blob/main/bindings/python/scripts/sentencepiece_extractor.py I managed to get the vocab.json and merges.txt files. Then, by using https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE, I was able to produce the tokenizer.json file.\r\n\r\n@amyeroberts sure, noted.\r\nThanks", "@nandinimundra your solution does not work. I tried it, but the vocab.json is not correct.\r\n\r\n> Thank you @ArthurZucker. The problem was resolved by using https://github.com/huggingface/tokenizers/blob/main/bindings/python/scripts/sentencepiece_extractor.py I managed to get the vocab.json and merges.txt files. Then, by using https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE, I was able to produce the tokenizer.json file.\r\n> \r\n> @amyeroberts sure, noted. Thanks\r\n\r\n" ]
1,626
1,708
1,632
NONE
null
After making a custom tokenizer with the Tokenizers library, I could load it into XLNetTokenizerFast using ``` tokenizer = Tokenizer.from_file("unigram.json") tok = XLNetTokenizerFast(tokenizer_object=tokenizer) ``` After I called ``` tok.save_vocabulary("ss") ``` it throws an error since I didn't load XLNetTokenizerFast from an spm file. I believe save_vocabulary is looking for the vocab_file parameter. Is there any way to save the vocabulary after loading it into XLNetTokenizerFast this way?
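A minimal sketch of the workaround that emerges in the comments above: instead of `save_vocabulary` (which needs an spm `vocab_file`), save the fast tokenizer in its own format. The `unigram.json` filename follows the issue; treat the paths as illustrative:

```python
from tokenizers import Tokenizer
from transformers import XLNetTokenizerFast

# Wrap the tokenizers-library object in a fast tokenizer...
tokenizer = Tokenizer.from_file("unigram.json")
tok = XLNetTokenizerFast(tokenizer_object=tokenizer)

# ...and save it in the non-legacy format, which writes tokenizer.json
# (vocabulary included) instead of looking for an spm vocab file.
tok.save_pretrained("./dump", legacy_format=False)

# The vocabulary itself is also available directly:
vocab = tok.get_vocab()
```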
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12728/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12727/comments
https://api.github.com/repos/huggingface/transformers/issues/12727/events
https://github.com/huggingface/transformers/issues/12727
944,976,736
MDU6SXNzdWU5NDQ5NzY3MzY=
12,727
Getting incompatible shapes when using global_attention_mask in TFLongformerModel
{ "login": "sli0111", "id": 62090767, "node_id": "MDQ6VXNlcjYyMDkwNzY3", "avatar_url": "https://avatars.githubusercontent.com/u/62090767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sli0111", "html_url": "https://github.com/sli0111", "followers_url": "https://api.github.com/users/sli0111/followers", "following_url": "https://api.github.com/users/sli0111/following{/other_user}", "gists_url": "https://api.github.com/users/sli0111/gists{/gist_id}", "starred_url": "https://api.github.com/users/sli0111/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sli0111/subscriptions", "organizations_url": "https://api.github.com/users/sli0111/orgs", "repos_url": "https://api.github.com/users/sli0111/repos", "events_url": "https://api.github.com/users/sli0111/events{/privacy}", "received_events_url": "https://api.github.com/users/sli0111/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It appears that when the number of examples is a multiple of the batch_size, the training starts (i.e. batch_size=2 and num_examples=6). This doesn't work for a batch_size=1. \n\nUnfortunately for my full dataset, I won't be able to run a batch_size=2 without OOM.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hey @sli0111,\r\n\r\nI'm very sorry, but I won't find time in the near future to debug this :-/ Gently pinging @Rocketknight1 in case you have some bandwidth to check out longformer training in TF", "Hi @sli0111, sorry for the delay but I'm taking a look at this now! Are you still encountering the issue, and have you discovered anything else about it?", "Hi @Rocketknight1 I was able to workaround the issue by reducing the number of examples to an even number and running a batch_size=2. I was not able to avoid the error when I had a batch_size=1.", "That's interesting - I'll mark this one down for testing and let you know what I find.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,634
1,634
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten ## Information I am using TFLongformerModel. The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce The task is to classify the relationship between two phrases given the document the phrases are in. There are a total of 8 classes. Separators are added between the phrases and the document. I would like to add global attention to phrase 1 and phrase 2. For example: ``` <s> phrase 1 </s> phrase 2 </s> entire document </s> ``` I checked the shapes of the inputs to ensure they were the same. ``` input ids tf.Tensor([ 0 35151 19026 ... 1 1 1], shape=(1024,), dtype=int32) attention mask tf.Tensor([1 1 1 ... 0 0 0], shape=(1024,), dtype=int32) global attention mask tf.Tensor([1 1 1 ... 0 0 0], shape=(1024,), dtype=int32) ``` Example code for creating the model: ``` def create_model(train_layers=False, lr=5e-5): # Input Layers input_ids = Input(shape=(max_length,), name='input_ids', dtype='int32') attention_mask = Input(shape=(max_length,), name='attention_mask', dtype='int32') global_attention_mask = Input(shape=(max_length,), name='global_attention_mask', dtype='int32') # Transformer Layer X = transformer_model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)[0] # Gets the embeddings before the [CLS] pooling # Deep Neural Net X = GlobalAveragePooling1D()(X) X = Dense(512, activation='relu')(X) X = Dense(512, activation='relu')(X) X = Dense(512, activation='relu')(X) X = Dense(256, activation='relu')(X) X = Dense(128, activation='relu')(X) X = Dense(64, activation='relu')(X) X = Dense(8, activation='softmax')(X) model = Model(inputs=[input_ids, attention_mask, global_attention_mask], outputs = X) if train_layers == False: for layer in model.layers[:3]: layer.trainable = False elif train_layers == True: for layer in model.layers[:3]: layer.trainable = True opt = tf.keras.optimizers.Adam(learning_rate=lr) # Compile the model model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics = ['sparse_categorical_accuracy']) # Print model summary model.summary() return model ``` Example code for training: ``` model = create_model(train_layers=False, lr=5e-05) output = model.fit(x=[X.input_ids, X.attention_mask, X.global_attention_mask], y=y_resample, batch_size = 1, epochs = 5) ``` Error: ``` InvalidArgumentError: Incompatible shapes: [2,1024,12,513] vs. [2,1024,12,522] [[node model_8/tf_longformer_model/longformer/encoder/layer_._0/attention/self/dropout_1/dropout/Mul_1 (defined at /usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_tf_longformer.py:823) ]] [Op:__inference_train_function_618763] Function call stack: train_function ``` It seems the last dimension in the error message is related to the number of non-zero values in the global_attention_mask. 
In the example above, there are 9 tokens that have a value of 1 in the global_attention_mask and the rest are zero (522 - 513 = 9). ## Expected behavior The model should start training.
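A minimal sketch of one way to build the global attention mask described in the body, marking everything up to and including the second separator as global. The separator id (2, as in the RoBERTa vocabulary Longformer uses) is an assumption, and this is not a confirmed fix for the shape error above:

```python
import numpy as np

def build_global_attention_mask(input_ids: np.ndarray, sep_token_id: int = 2) -> np.ndarray:
    """Set global attention on phrase 1 and phrase 2, i.e. up to the 2nd </s>."""
    mask = np.zeros_like(input_ids)
    for row, ids in enumerate(input_ids):
        sep_positions = np.flatnonzero(ids == sep_token_id)
        if len(sep_positions) >= 2:
            mask[row, : sep_positions[1] + 1] = 1
    return mask
```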
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12727/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12727/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12726/comments
https://api.github.com/repos/huggingface/transformers/issues/12726/events
https://github.com/huggingface/transformers/issues/12726
944,952,362
MDU6SXNzdWU5NDQ5NTIzNjI=
12,726
Unrecognized configuration class GPT2Config for AutoModelForSeq2SeqLM | Microsoft DialoGPT no longer working
{ "login": "RuolinZheng08", "id": 35674052, "node_id": "MDQ6VXNlcjM1Njc0MDUy", "avatar_url": "https://avatars.githubusercontent.com/u/35674052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RuolinZheng08", "html_url": "https://github.com/RuolinZheng08", "followers_url": "https://api.github.com/users/RuolinZheng08/followers", "following_url": "https://api.github.com/users/RuolinZheng08/following{/other_user}", "gists_url": "https://api.github.com/users/RuolinZheng08/gists{/gist_id}", "starred_url": "https://api.github.com/users/RuolinZheng08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RuolinZheng08/subscriptions", "organizations_url": "https://api.github.com/users/RuolinZheng08/orgs", "repos_url": "https://api.github.com/users/RuolinZheng08/repos", "events_url": "https://api.github.com/users/RuolinZheng08/events{/privacy}", "received_events_url": "https://api.github.com/users/RuolinZheng08/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false } ]
[ "Getting this error also. all fine-tuned models using Dialo giving this exact error.", "Hi everyone.\r\n\r\nNot really sure what happened here (the error is pretty confusing). It is fixed now anyway. ", "If someone pinned a model while it was having issues, please let me know, we might have to update them to fix them too !", "Hi @Narsil \r\n\r\nThese 2 models need update:\r\n- dbmdz/german-gpt2\r\n- benjamin/gerpt2", "dbmdz/german-gpt2 seems to be working https://huggingface.co/dbmdz/german-gpt2?text=Heute+ist+sehr+sch%C3%B6nes+Wetter+in\r\n\r\nIt doesn't seem to be defined as `conversation`, is that what you're referring to ?\r\nI am not sure how this model was defined and so if it actually works with conversation, but it doesn't seem to be the case.\r\n\r\nThe API works with text-generation for this model and it works fine.\r\n\r\n`benjamin/gerpt2` seems to be exactly the same.\r\n\r\nIf you want to mark them as conversational you need to update the `pipeline_tag` https://huggingface.co/docs/hub/models-widgets#enabling-a-widget\r\n\r\nOtherwise do you mind creating a new issue with the new error you're receiving to be able to reproduce (you can ping me) ?\r\n\r\nHope this helps." ]
1,626
1,659
1,626
NONE
null
## Information Model I am using: Microsoft's DialoGPT The problem arises when using: * [x] the official example scripts: Since the morning of July 14th, the inference API has been outputting errors on [Microsoft's DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium). It was working fine before July 14th. Error ``` {'error': "Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForSeq2SeqLM.\nModel type should be one of BigBirdPegasusConfig, M2M100Config, LEDConfig, BlenderbotSmallConfig, MT5Config, T5Config, PegasusConfig, MarianConfig, MBartConfig, BlenderbotConfig, BartConfig, FSMTConfig, EncoderDecoderConfig, XLMProphetNetConfig, ProphetNetConfig."} ``` Query script as given on Hugging Face's site: ```python import requests API_URL = "https://api-inference.huggingface.co/models/microsoft/DialoGPT-medium" headers = {"Authorization": "Bearer API_TOKEN"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": { "past_user_inputs": ["Which movie is the best ?"], "generated_responses": ["It's Die Hard for sure."], "text": "Can you explain why ?", }, }) ``` @patrickvonplaten, @LysandreJik I'm mentioning these two people as the guide says they are working on gpt2. Sorry if I pinged the wrong people!
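While the hosted API was misrouting the model, running the conversational pipeline locally was a possible stopgap. A hedged sketch, assuming the `conversational` pipeline of transformers 4.x (which loads DialoGPT as a causal LM rather than through `AutoModelForSeq2SeqLM`):

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="microsoft/DialoGPT-medium")

conversation = Conversation(
    "Can you explain why ?",
    past_user_inputs=["Which movie is the best ?"],
    generated_responses=["It's Die Hard for sure."],
)
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```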
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12726/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12725/comments
https://api.github.com/repos/huggingface/transformers/issues/12725/events
https://github.com/huggingface/transformers/pull/12725
944,892,141
MDExOlB1bGxSZXF1ZXN0NjkwMzA2MDc3
12,725
[doc] performance: batch sizes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
This PR adds a brief discussion of batch sizes for performance. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12725", "html_url": "https://github.com/huggingface/transformers/pull/12725", "diff_url": "https://github.com/huggingface/transformers/pull/12725.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12725.patch", "merged_at": 1626367174000 }
https://api.github.com/repos/huggingface/transformers/issues/12724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12724/comments
https://api.github.com/repos/huggingface/transformers/issues/12724/events
https://github.com/huggingface/transformers/pull/12724
944,875,525
MDExOlB1bGxSZXF1ZXN0NjkwMjkyMTM4
12,724
[doc] testing: how to trigger a self-push workflow
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
It took me a while to figure out how to trigger the self-push GitHub Actions test, so I'm documenting how to do it right the first time. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12724", "html_url": "https://github.com/huggingface/transformers/pull/12724", "diff_url": "https://github.com/huggingface/transformers/pull/12724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12724.patch", "merged_at": 1626391137000 }
https://api.github.com/repos/huggingface/transformers/issues/12723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12723/comments
https://api.github.com/repos/huggingface/transformers/issues/12723/events
https://github.com/huggingface/transformers/pull/12723
944,868,517
MDExOlB1bGxSZXF1ZXN0NjkwMjg2MzUy
12,723
[deepspeed] nvme test hanging experiment: take4
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for fixing this! Should we merge the PR or leave it like this to remember how to manage a hang with deepspeed and nvme?\r\n\r\nDo you know if rebuilding the extensions takes a look time?", "Never tried to measure the rebuilding, and it'd depend on the hardware, but probably in a ballpark of 20-30secs.\r\n\r\n--------------\r\n\r\nMy main concern with the proposed solution in this PR is a race condition where one job rebuilds and another slightly slower one wipes the binaries out and leading to test failure. Since this is shared fs.\r\n\r\nThis is the problem with pytorch CUDA extensions. Instead of installing the binaries into the python tree which could be many on the same system (virtual env) it installs them into `~/.cache/torch_extensions` which is shared between all virtual envs - really bad idea.\r\n\r\nSo the clean solution is to not to install pure python package and have it build JIT at run time, but instead to do a pre-build which then installs the binary cuda extensions into the right python env, then there is never a collision.\r\n\r\nSo it'd be:\r\n```\r\nDS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install deepspeed --global-option=\"build_ext\" --global-option=\"-j8\" --no-cache -v --disable-pip-version-check\r\n```\r\n\r\nOf course, this too takes time.\r\n\r\nIn theory we only need to do this once per deepspeed release, so we could also pre-build binary wheels and simply install those.\r\n\r\nDo we have any other need for binary wheels besides `torch_scatter`?", "And this would only be necessary if there's an issue with the deepspeed release, right? As there was no issue for the other machine, nor for your local machines. I wonder if we really need to implement a workaround for this or if we can't have this as a potential solution for future deepspeed issues that arise soon after a release.\r\n\r\nThere are no other binary wheels needs besides `torch_scatter`, but I'd rather keep those to a minimum as it doesn't help maintainability.", "> And this would only be necessary if there's an issue with the deepspeed release, right?\r\n\r\nWe would need to do this for every release in case some Cpp code was changed.\r\n\r\n------\r\n\r\nAgreed, let's not do anything then and revisit this if it becomes a problem.\r\n\r\nI wonder if we could create an index pointing to troubleshooting PRs/Issues, so e.g. this could be a start:\r\n\r\n## Troubleshooting Github Actions CI (self-hosted box)\r\n\r\n* Deepspeed\r\n - if jit build hangs, clear out `rm -rf ~/.cache/torch_extensions/` reference: https://github.com/huggingface/transformers/pull/12723\r\n \r\nand put it close to home, under `.github-actions/README.md`? or `.github-actions/TROUBLESHOOTING.md`", "Oh that's a brilliant idea indeed!", "closing this as it's now indexed by https://github.com/huggingface/transformers/blob/master/.github/workflows/TROUBLESHOOT.md" ]
1,626
1,651
1,626
CONTRIBUTOR
null
As reported in https://github.com/huggingface/transformers/issues/12715, the nvme CUDA extension of deepspeed fails to build, which leads to a hanging test. As suggested by @tjruwase, we should clear out the CUDA binary extensions dir, since a new release might be incompatible with the old binaries and things break. ``` rm -rf ~/.cache/torch_extensions/ ``` And all appears to be resolved. Replicated in the scheduled job too. Fixes: https://github.com/huggingface/transformers/issues/12715 @sgugger, @LysandreJik
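A minimal sketch of the same cleanup as a Python step (illustrative, e.g. for a CI bootstrap script); the path is the default JIT-build cache used by pytorch CUDA extensions:

```python
import shutil
from pathlib import Path

# Stale JIT-built deepspeed extensions can be incompatible with a new release,
# so remove the shared cache and let the next run rebuild them.
cache_dir = Path.home() / ".cache" / "torch_extensions"
if cache_dir.exists():
    shutil.rmtree(cache_dir)
```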
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12723/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12723", "html_url": "https://github.com/huggingface/transformers/pull/12723", "diff_url": "https://github.com/huggingface/transformers/pull/12723.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12723.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12722/comments
https://api.github.com/repos/huggingface/transformers/issues/12722/events
https://github.com/huggingface/transformers/pull/12722
944,867,083
MDExOlB1bGxSZXF1ZXN0NjkwMjg1MTI1
12,722
[deepspeed] nvme test hanging experiment: take3
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "grr, has to be on upstream continued in https://github.com/huggingface/transformers/pull/12723" ]
1,626
1,626
1,626
CONTRIBUTOR
null
Trying to fix the hanging test https://github.com/huggingface/transformers/issues/12715 WIP
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12722", "html_url": "https://github.com/huggingface/transformers/pull/12722", "diff_url": "https://github.com/huggingface/transformers/pull/12722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12722.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12721/comments
https://api.github.com/repos/huggingface/transformers/issues/12721/events
https://github.com/huggingface/transformers/pull/12721
944,863,067
MDExOlB1bGxSZXF1ZXN0NjkwMjgxNjkz
12,721
[WIP] [deepspeed] nvme test hanging experiment: take2
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "continued in https://github.com/huggingface/transformers/pull/12722 - needed to have the branch name start with `ci_`" ]
1,626
1,626
1,626
CONTRIBUTOR
null
Trying to fix the hanging test https://github.com/huggingface/transformers/issues/12715
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12721", "html_url": "https://github.com/huggingface/transformers/pull/12721", "diff_url": "https://github.com/huggingface/transformers/pull/12721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12721.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12720/comments
https://api.github.com/repos/huggingface/transformers/issues/12720/events
https://github.com/huggingface/transformers/pull/12720
944,834,469
MDExOlB1bGxSZXF1ZXN0NjkwMjU3MTk5
12,720
[Flax] Correct shift labels for seq2seq models in Flax
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12719 This PR makes sure that the `shift_tokens_right` is always written in numpy as it will always be called in the data-collator ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12720/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12720", "html_url": "https://github.com/huggingface/transformers/pull/12720", "diff_url": "https://github.com/huggingface/transformers/pull/12720.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12720.patch", "merged_at": 1626331357000 }
https://api.github.com/repos/huggingface/transformers/issues/12719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12719/comments
https://api.github.com/repos/huggingface/transformers/issues/12719/events
https://github.com/huggingface/transformers/issues/12719
944,828,503
MDU6SXNzdWU5NDQ4Mjg1MDM=
12,719
[Flax] Change all `shift_tokens_right` to numpy code
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[]
1,626
1,626
1,626
MEMBER
null
`shift_tokens_right` is usually called in the data collator and therefore should not be written in jax, but in numpy, so as not to block the TPU. We should make sure that all Encoder-Decoder models have their `shift_tokens_right` implemented in numpy as it's faster.
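A minimal numpy sketch of the pattern (mirroring the shape of the Flax seq2seq collators; the `-100` label-masking convention is an assumption carried over from the common data-collator setup):

```python
import numpy as np

def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
    """Shift input ids one token to the right, in numpy so the collator never touches the TPU."""
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # Replace any label-masking value (-100) with the pad token id.
    return np.where(shifted == -100, pad_token_id, shifted)
```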
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12719/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12718/comments
https://api.github.com/repos/huggingface/transformers/issues/12718/events
https://github.com/huggingface/transformers/pull/12718
944,809,229
MDExOlB1bGxSZXF1ZXN0NjkwMjM1Njc0
12,718
[trainer] release tmp memory in checkpoint load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
As discovered in https://github.com/huggingface/transformers/issues/12680#issuecomment-880194562, we had a model-sized memory leak when loading a checkpoint. @sgugger found a fix, which is what this PR applies. @sgugger
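A minimal sketch of the general pattern behind such a fix (illustrative, not the exact diff of this PR): drop the temporary `state_dict` once the weights are copied into the model, so checkpoint loading doesn't keep an extra model-sized buffer alive:

```python
import torch
from torch import nn

model = nn.Linear(4, 4)  # stand-in for the real model
torch.save(model.state_dict(), "checkpoint.bin")

state_dict = torch.load("checkpoint.bin", map_location="cpu")
model.load_state_dict(state_dict)
# Release the temporary copy; otherwise peak memory stays one model size higher.
del state_dict
```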
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12718", "html_url": "https://github.com/huggingface/transformers/pull/12718", "diff_url": "https://github.com/huggingface/transformers/pull/12718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12718.patch", "merged_at": 1626301082000 }
https://api.github.com/repos/huggingface/transformers/issues/12717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12717/comments
https://api.github.com/repos/huggingface/transformers/issues/12717/events
https://github.com/huggingface/transformers/pull/12717
944,803,567
MDExOlB1bGxSZXF1ZXN0NjkwMjMwNzg1
12,717
[wip] [deepspeed] nvme test hanging experiment
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Continued in https://github.com/huggingface/transformers/pull/12721" ]
1,626
1,651
1,626
CONTRIBUTOR
null
Debugging https://github.com/huggingface/transformers/issues/12715 This PR is trying to revert to the last known-working version, 0.4.2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12717", "html_url": "https://github.com/huggingface/transformers/pull/12717", "diff_url": "https://github.com/huggingface/transformers/pull/12717.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12717.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12716/comments
https://api.github.com/repos/huggingface/transformers/issues/12716/events
https://github.com/huggingface/transformers/pull/12716
944,800,843
MDExOlB1bGxSZXF1ZXN0NjkwMjI4NDQ2
12,716
Fix typo in Speech2TextForConditionalGeneration example
{ "login": "will-rice", "id": 25072137, "node_id": "MDQ6VXNlcjI1MDcyMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/will-rice", "html_url": "https://github.com/will-rice", "followers_url": "https://api.github.com/users/will-rice/followers", "following_url": "https://api.github.com/users/will-rice/following{/other_user}", "gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}", "starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/will-rice/subscriptions", "organizations_url": "https://api.github.com/users/will-rice/orgs", "repos_url": "https://api.github.com/users/will-rice/repos", "events_url": "https://api.github.com/users/will-rice/events{/privacy}", "received_events_url": "https://api.github.com/users/will-rice/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? This PR fixes a small typo in the example docstring. Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12716/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12716", "html_url": "https://github.com/huggingface/transformers/pull/12716", "diff_url": "https://github.com/huggingface/transformers/pull/12716.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12716.patch", "merged_at": 1626331443000 }
https://api.github.com/repos/huggingface/transformers/issues/12715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12715/comments
https://api.github.com/repos/huggingface/transformers/issues/12715/events
https://github.com/huggingface/transformers/issues/12715
944,800,708
MDU6SXNzdWU5NDQ4MDA3MDg=
12,715
[testing] failing tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "I'm running an experiment with `deepspeed==0.4.2` https://github.com/huggingface/transformers/pull/12717\r\n", "> If I run it on my own setup by first removing `rm -rf ~/.cache/torch_extensions/` it works just fine. So it happens only on that\r\n\r\nI have seen these kinds of DeepSpeed hangs building different extensions at different points in time, and in all cases deleting the `.cache/torch_extensions` seems to always do the trick. I have always felt that this was caused by a timing issue in the build process. What happens if you manually deleted the cache folder in the nvme unit test?", "Nothing immediately comes to mind for me either. It seems like it's stuck waiting for a lock file to go away?\r\n>\r\n> while os.path.exists(self.lock_file_path):\r\n\r\nMaybe the build of the extension before aio didn't delete that file during its cleanup?\r\n\r\nWould that file get left behind if there was a problem building cpu_adam?", "Thank you for the tip, @tjruwase!\r\n\r\nI've added this clean up to the CI job, I think it should be there all the time, since deepspeed won't rebuild a new extension after it built an old one I think.\r\n\r\nHopefully that did the trick. I will have to weight for a while till that job gets run.\r\n\r\n---\r\n\r\n@adammoody, let's see if Tunji's trick works. Most likely the problem is unrelated to your PR.\r\n", "I think it did the trick, thank you @tjruwase!\r\nhttps://github.com/huggingface/transformers/pull/12723\r\n", "That's very cool !!! I have been stuck here for a long time, and finally I found this solution!\r\n\r\nThe system just waiting after the log: \r\n>Using ~/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...\r\n\r\nThe debug process located at\r\n>def wait(self):\r\n '''\r\n Periodically sleeps for a certain amount until the baton is released.\r\n The amount of time slept depends on the ``wait_seconds`` parameter\r\n passed to the constructor.\r\n '''\r\n while os.path.exists(self.lock_file_path):\r\n time.sleep(self.wait_seconds)\r\n\r\nAfter I remove the folder, the process became normal. Cool!\r\n\r\n" ]
1,626
1,659
1,628
CONTRIBUTOR
null
So a few days ago `tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload` started hanging and getting killed by pytest-timeout. It gets stuck in `_jit_compile` which never completes. This is nvme-specific, as all other deepspeed tests that use jit work just fine. If I run it on my own setup by first removing `rm -rf ~/.cache/torch_extensions/` it works just fine. So it happens only on that github-actions runner. I went back to the logs from a few days back when it wasn't failing and checked that it's the same libaio packages installed in both cases: ``` Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libaio1 amd64 0.3.112-5 [7184 B] Get:2 http://archive.ubuntu.com/ubuntu focal/main amd64 libaio-dev amd64 0.3.112-5 [13.7 kB] ``` @tjruwase, any insights as to why it might start hanging on building the nvme cuda extension? The main difference is that the successful run was using deepspeed-0.4.2 and it started failing with the deepspeed-0.4.3 release. I looked through the changes since 0.4.2 and I don't see anything remotely related to the op_builder other than https://github.com/microsoft/DeepSpeed/pull/1213 - could that be related? The full log is: ``` self = <test_deepspeed.TrainerIntegrationDeepSpeed testMethod=test_stage3_nvme_offload> @require_deepspeed_aio def test_stage3_nvme_offload(self): with mockenv_context(**self.dist_env_1_gpu): # this actually doesn't have to be on NVMe, any storage will do since this test only # runs a simple check that we can use some directory as if it were NVMe nvme_path = self.get_auto_remove_tmp_dir() nvme_config = dict(device="nvme", nvme_path=nvme_path) ds_config_zero3_dict = self.get_config_dict(ZERO3) ds_config_zero3_dict["zero_optimization"]["offload_optimizer"] = nvme_config ds_config_zero3_dict["zero_optimization"]["offload_param"] = nvme_config trainer = get_regression_trainer(local_rank=0, fp16=True, deepspeed=ds_config_zero3_dict) with CaptureLogger(deepspeed_logger) as cl: > trainer.train() tests/deepspeed/test_deepspeed.py:321: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/trainer.py:1124: in train deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( src/transformers/deepspeed.py:370: in deepspeed_init model, optimizer, _, lr_scheduler = deepspeed.initialize( /opt/conda/lib/python3.8/site-packages/deepspeed/__init__.py:126: in initialize engine = DeepSpeedEngine(args=args, /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:194: in __init__ self._configure_optimizer(optimizer, model_parameters) /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:726: in _configure_optimizer self.optimizer = self._configure_zero_optimizer(basic_optimizer) /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:940: in _configure_zero_optimizer optimizer = FP16_DeepSpeedZeroOptimizer_Stage3( /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py:809: in __init__ self._configure_tensor_swapping(offload_optimizer_config, aio_config) /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py:938: in _configure_tensor_swapping self.optimizer_swapper = swapper_type( /opt/conda/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_optimizer_swapper.py:47: in __init__ aio_op = AsyncIOBuilder().load() /opt/conda/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py:239: in load return self.jit_load(verbose) 
/opt/conda/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py:267: in jit_load op_module = load( /opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:1074: in load return _jit_compile( /opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:1301: in _jit_compile baton.wait() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <torch.utils.file_baton.FileBaton object at 0x7f7418fe1fa0> def wait(self): ''' Periodically sleeps for a certain amount until the baton is released. The amount of time slept depends on the ``wait_seconds`` parameter passed to the constructor. ''' while os.path.exists(self.lock_file_path): > time.sleep(self.wait_seconds) E Failed: Timeout >60.0s /opt/conda/lib/python3.8/site-packages/torch/utils/file_baton.py:42: Failed ----------------------------- Captured stdout call ----------------------------- [2021-07-14 20:39:36,891] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.3, git-hash=unknown, git-branch=unknown [2021-07-14 20:39:36,892] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 [2021-07-14 20:39:36,914] [INFO] [engine.py:179:__init__] DeepSpeed Flops Profiler Enabled: False Using /github/home/.cache/torch_extensions as PyTorch extensions root... No modifications detected for re-loaded extension module cpu_adam, skipping build step... Loading extension module cpu_adam... Time to load cpu_adam op: 0.25669288635253906 seconds Adam Optimizer #19 is created with AVX2 arithmetic capability. Config: alpha=0.000050, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1 [2021-07-14 20:39:37,652] [INFO] [engine.py:708:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer [2021-07-14 20:39:37,653] [INFO] [engine.py:713:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam [2021-07-14 20:39:37,653] [INFO] [utils.py:43:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'> [2021-07-14 20:39:37,653] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer [2021-07-14 20:39:37,653] [INFO] [engine.py:938:_configure_zero_optimizer] Initializing ZeRO Stage 3 [2021-07-14 20:39:37,653] [INFO] [stage3.py:633:__init__] Reduce bucket size 1 [2021-07-14 20:39:37,653] [INFO] [stage3.py:634:__init__] Allgather bucket size 0.9 Using /github/home/.cache/torch_extensions as PyTorch extensions root... No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.0005452632904052734 seconds [2021-07-14 20:39:37,656] [INFO] [stage3.py:933:_configure_tensor_swapping] Tensor Swapping: Adding optimizer tensors [2021-07-14 20:39:37,657] [INFO] [utils.py:30:print_object] SwapBufferManager: [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] count ........................ 4 [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32 [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] free_buffer_index ............ [0, 1, 2, 3] [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] gigabytes .................... 3.814697265625e-06 [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] num_elems .................... 256 [2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] used_buffer_index ............ 
{} Using /github/home/.cache/torch_extensions as PyTorch extensions root... ----------------------------- Captured stderr call ----------------------------- PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). Using amp fp16 backend +++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++ ~~~~~~~~~~~~~~~~~~~~~ Stack of Thread-1 (140136515512064) ~~~~~~~~~~~~~~~~~~~~~~ File "/opt/conda/lib/python3.8/threading.py", line 890, in _bootstrap self._bootstrap_inner() File "/opt/conda/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/opt/conda/lib/python3.8/site-packages/tqdm/_monitor.py", line 59, in run self.was_killed.wait(self.sleep_interval) File "/opt/conda/lib/python3.8/threading.py", line 558, in wait signaled = self._cond.wait(timeout) File "/opt/conda/lib/python3.8/threading.py", line 306, in wait gotit = waiter.acquire(True, timeout) ~~~~~~~~~~~~~~~~~~~~~ Stack of <unknown> (140136768341760) ~~~~~~~~~~~~~~~~~~~~~ File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 285, in _perform_spawn reply.run() File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 220, in run self._result = func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 967, in _thread_receiver msg = Message.from_io(io) File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 432, in from_io header = io.read(9) # type 1, channel 4, payload 4 File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 400, in read data = self._read(numbytes - len(buf)) +++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++ ```
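Per the comments above, the hang traces to a stale lock file under the torch extensions cache, and the fix was to delete that directory before the run so DeepSpeed rebuilds its ops cleanly. A minimal sketch of that cleanup step (assuming the default cache root `~/.cache/torch_extensions` seen in the logs; the exact CI command may differ):

```python
# Remove the torch extensions cache so a leftover lock file cannot stall
# DeepSpeed's _jit_compile; the extensions are rebuilt on the next run.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache" / "torch_extensions"
if cache_dir.exists():
    shutil.rmtree(cache_dir)
```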
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12715/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12714/comments
https://api.github.com/repos/huggingface/transformers/issues/12714/events
https://github.com/huggingface/transformers/issues/12714
944,797,326
MDU6SXNzdWU5NDQ3OTczMjY=
12,714
layoutlm TokenClassificationPipeline
{ "login": "uwong", "id": 7048857, "node_id": "MDQ6VXNlcjcwNDg4NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7048857?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uwong", "html_url": "https://github.com/uwong", "followers_url": "https://api.github.com/users/uwong/followers", "following_url": "https://api.github.com/users/uwong/following{/other_user}", "gists_url": "https://api.github.com/users/uwong/gists{/gist_id}", "starred_url": "https://api.github.com/users/uwong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uwong/subscriptions", "organizations_url": "https://api.github.com/users/uwong/orgs", "repos_url": "https://api.github.com/users/uwong/repos", "events_url": "https://api.github.com/users/uwong/events{/privacy}", "received_events_url": "https://api.github.com/users/uwong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @NielsRogge ", "I'm afraid the `TokenClassificationPipeline` will not work with LayoutLM, the reason being that, as you mention, the model expects an additional input besides text, namely bounding boxes.\r\n\r\nWe are currently discussing the design of pipelines for models like LayoutLM. \r\n\r\n", "Ok thanks for the information! I'll just work around it for now.", "@NielsRogge are you aware of any developments in terms of pipeline integration for LayoutLM-like models? Thanks :) ", "@mishig25 worked on supporting LayoutLM for the object detection pipeline, but that wasn't added in the end. Not sure if we can add it to the existing pipeline, cause the model requires a few additional inputs (`bbox`, and `pixel_values`), cc @Narsil ", "Hi, it probably won't be implemented directly in `transformers` because arguments are different and so on.\r\n\r\nHowever you should be able to override the pipeline yourself doing something like\r\n\r\n```python\r\n\r\npipe = pipeline(model=\"...\" , pipeline_class=MyPipeline)\r\n\r\nclass MyPipeline(Pipeline):\r\n def preprocess(self, inputs):\r\n # Just return the inputs that will be sent to the model\r\n return model_inputs\r\n \r\n def _forward(self, model_inputs):\r\n model_outputs = self.model(**model_inputs)\r\n return model_outputs\r\n \r\n def postprocess(self, model_outputs):\r\n # Finalize the objects\r\n return final_object\r\n```\r\n\r\nIf you inherit `TokenClassificationPipeline you could definitely reuse stuff being done with aggregation_strategies\r\n " ]
1,626
1,656
1,626
NONE
null
Hi, I was looking at using transformers.pipeline for TokenClassification with an instance of microsoft/layoutlm-base-uncased that I have fine-tuned. I would like to use pipeline to take advantage of the entity aggregation_strategy feature for extracted entities. However, it is unclear to me how/whether TokenClassificationPipeline works with layoutlm for inference, because layoutlm expects both input text and input bounding boxes, unlike other text-only models. Do you know if TokenClassificationPipeline is supposed to work with layoutlm, and are there any examples?
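Until pipeline support exists, a workaround is to call the model directly and handle entity aggregation yourself. A minimal sketch of token classification with LayoutLM outside the pipeline API (the words, boxes, and base checkpoint below are placeholders; in practice you would load your fine-tuned checkpoint, and boxes use LayoutLM's normalized 0-1000 coordinates):

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "total:", "42.00"]  # placeholder OCR words
word_boxes = [[48, 84, 156, 108], [160, 84, 220, 108], [224, 84, 280, 108]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Repeat each word's box for every sub-token it was split into; special
# tokens ([CLS]/[SEP]) get the dummy box [0, 0, 0, 0].
bbox = [word_boxes[i] if i is not None else [0, 0, 0, 0] for i in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits  # (1, seq_len, num_labels)
predicted_label_ids = logits.argmax(-1)
```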
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12714/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12713/comments
https://api.github.com/repos/huggingface/transformers/issues/12713/events
https://github.com/huggingface/transformers/pull/12713
944,796,804
MDExOlB1bGxSZXF1ZXN0NjkwMjI0OTU4
12,713
Add versioning system to fast tokenizer files
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "might be cleaner if this worked in the other direction, i.e.\r\n\r\n> multiple tokenizer files: the `tokenizer.json` is the default one, used in the most recent version of Transformers. If one or more `tokenizer-x.y.z.json` exist, those files are used for the version x.y.z (of Transformers) and below.\r\n\r\nMakes more sense on the Hub side as well. What do you think?", "@julien-c this would break repositories that rely on `transformers` versions that are earlier than the first one that will have this feature.\r\n\r\nWere we to update the `tokenizer.json` file to the new, \"fixed\" one, and add a new `tokenizer-x.x.x.json` file to be used by earlier versions of `transformers`, then we would have no way of telling all versions < `4.10.0` to use that version rather than the standard `tokenizer.json` file.", "I think your assertion depends on what kind of changes are made to the JSON files. If it's only new attributes for example I wouldn't expect older versions to break, but from what I understand you're actually talking about modifying the actual attributes?", "Yes, the attributes actually need to be modified. For example, see this issue: https://github.com/huggingface/transformers/issues/9633\r\n\r\nThere was an offset mappings bug, which needed to be patched. However, the issue lived in the `tokenizer.json` file itself - so the recommended way to patch this was for users to recompile that file, by passing the \"slow\" tokenizer files, and using the newer `tokenizers` version to generate the updated file.\r\n\r\nI believe there are other issues, and there will be other issues as the libraries continue to evolve. Implementing this here allows us to ensure that the previous versions remain completely unaffected - while offering a way to patch models for future use.", "> Yes, the attributes actually need to be modified. For example, see this issue: #9633\r\n> \r\n> There was an offset mappings bug, which needed to be patched. However, the issue lived in the `tokenizer.json` file itself - so the recommended way to patch this was for users to recompile that file, by passing the \"slow\" tokenizer files, and using the newer `tokenizers` version to generate the updated file.\r\n> \r\n> I believe there are other issues, and there will be other issues as the libraries continue to evolve. Implementing this here allows us to ensure that the previous versions remain completely unaffected - while offering a way to patch models for future use.\r\n\r\ngoing on a whim here, but what about using git branches to do this?", "The problem with a new branch is that we then can't have a new version of the model in a new git branch that has to be used with one tokenizer file if versions of Transformers are old, and another one if they are more recent. And it wouldn't be compatible with the sure selecting their own branch as well (though in that case they should make sure to have the right version with tokenizers file).\r\n\r\nThe key here (for more context) is that we have tokenizers that have a \"wrong\" tokenizer file for more recent versions of Tokenizers (controlled by the version of Transformers) because there was a bug in the conversion from slow to fast tokenizer script. We can't touch the main branch and the tokenizer.json file otherwise every code in production using those models will suddenly break (the changes are significant sadly)." ]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do? Some changes cannot be made to the fast tokenizer files without breaking backward compatibility. This PR introduces a versioning system by allowing a model repo to contain multiple tokenizer files: the `tokenizer.json` is the default one, and if one (or several) `tokenizer.x.y.z.json` files exist, those are used for version x.y.z of Transformers and above. cc @n1t0 as it should be helpful to solve that longstanding bug.
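To make the rule concrete: among the versioned files whose version is at or below the running Transformers version, the most recent wins; otherwise the plain `tokenizer.json` is used. A toy reimplementation of that selection logic (the helper name is invented for illustration and the PR's actual code may differ):

```python
import re
from packaging import version

def pick_tokenizer_file(filenames, transformers_version):
    """Toy selector: highest tokenizer.x.y.z.json with x.y.z <= the current
    version, falling back to the default tokenizer.json."""
    best_name, best_version = "tokenizer.json", None
    current = version.parse(transformers_version)
    for name in filenames:
        match = re.fullmatch(r"tokenizer\.(\d+\.\d+\.\d+)\.json", name)
        if match is None:
            continue
        v = version.parse(match.group(1))
        if v <= current and (best_version is None or v > best_version):
            best_name, best_version = name, v
    return best_name

print(pick_tokenizer_file(["tokenizer.json", "tokenizer.4.0.0.json"], "4.10.0"))
# -> tokenizer.4.0.0.json
```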
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12713", "html_url": "https://github.com/huggingface/transformers/pull/12713", "diff_url": "https://github.com/huggingface/transformers/pull/12713.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12713.patch", "merged_at": 1626870277000 }
https://api.github.com/repos/huggingface/transformers/issues/12712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12712/comments
https://api.github.com/repos/huggingface/transformers/issues/12712/events
https://github.com/huggingface/transformers/pull/12712
944,775,224
MDExOlB1bGxSZXF1ZXN0NjkwMjA2MzUx
12,712
[doc] parallelism: Which Strategy To Use When
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
As requested in https://github.com/huggingface/transformers/issues/12688, this adds a new section: "Which Strategy To Use When". Fixes: https://github.com/huggingface/transformers/issues/12688 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12712/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12712/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12712", "html_url": "https://github.com/huggingface/transformers/pull/12712", "diff_url": "https://github.com/huggingface/transformers/pull/12712.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12712.patch", "merged_at": 1626367131000 }
https://api.github.com/repos/huggingface/transformers/issues/12711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12711/comments
https://api.github.com/repos/huggingface/transformers/issues/12711/events
https://github.com/huggingface/transformers/issues/12711
944,755,450
MDU6SXNzdWU5NDQ3NTU0NTA=
12,711
Error while performing eval on clm using gpt2 in flax
{ "login": "AnantShankhdhar", "id": 56432951, "node_id": "MDQ6VXNlcjU2NDMyOTUx", "avatar_url": "https://avatars.githubusercontent.com/u/56432951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnantShankhdhar", "html_url": "https://github.com/AnantShankhdhar", "followers_url": "https://api.github.com/users/AnantShankhdhar/followers", "following_url": "https://api.github.com/users/AnantShankhdhar/following{/other_user}", "gists_url": "https://api.github.com/users/AnantShankhdhar/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnantShankhdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnantShankhdhar/subscriptions", "organizations_url": "https://api.github.com/users/AnantShankhdhar/orgs", "repos_url": "https://api.github.com/users/AnantShankhdhar/repos", "events_url": "https://api.github.com/users/AnantShankhdhar/events{/privacy}", "received_events_url": "https://api.github.com/users/AnantShankhdhar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @AnantShankhdhar - could you please provide the `run.sh` file?", "The error was because the eval batch size was very high " ]
1,626
1,626
1,626
NONE
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> @patrickvonplaten @patil-suraj ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [x] the official example scripts: (give details below) using the examples/flax/language-modeling/run_clm_flax.py * [x] my own modified scripts: (give details below): a run.sh file of the format - run.sh --param1 --param2 The task I am working on is: * [x] my own task or dataset: (give details below): * txt file containing rap lyrics starting with <BOS> and ending with <EOS> ## To reproduce Steps to reproduce the behavior: 1. Make a new directory `test` and change to this directory 2. Add tokenizer.json and config.json from the gpt2 repo (https://huggingface.co/gpt2/tree/main) to this directory 3. Make a run.sh file of the type run.sh --param1 --param2 and add evaluation parameters such as --do_eval and --eval_steps 4. Run the file ./run.sh ## Expected behavior When evaluation occurs, you will get the following error: File "run_clm_flax.py", line 640, in <module> main() File "run_clm_flax.py", line 609, in main eval_metrics = get_metrics(eval_metrics) File "/home/anantshankhdhar/RapAiAnant/lib/python3.8/site-packages/flax/training/common_utils.py", line 53, in get_metrics return stack_forest(metrics_np) File "/home/anantshankhdhar/RapAiAnant/lib/python3.8/site-packages/flax/training/common_utils.py", line 45, in stack_forest return jax.tree_multimap(stack_args, *forest) TypeError: tree_map() missing 1 required positional argument: 'tree'
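As the resolution in the comments above notes, the root cause was an eval batch size larger than the eval dataset, so zero eval batches ran and `get_metrics` was called on an empty list. A minimal reproduction of that exact traceback under the Flax/JAX versions listed in the environment info (newer releases may fail differently):

```python
from flax.training.common_utils import get_metrics

eval_metrics = []          # what accumulates when no eval step ever runs
get_metrics(eval_metrics)  # TypeError: tree_map() missing 1 required positional argument: 'tree'
```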
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12711/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12710/comments
https://api.github.com/repos/huggingface/transformers/issues/12710/events
https://github.com/huggingface/transformers/pull/12710
944,707,689
MDExOlB1bGxSZXF1ZXN0NjkwMTQ3ODQz
12,710
[test] split test into 4 sub-tests to avoid timeout
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
This PR splits the long test into 4 sub-tests to avoid the timeout, as each sub-test is relatively slow. This supersedes https://github.com/huggingface/transformers/pull/12699 @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12710", "html_url": "https://github.com/huggingface/transformers/pull/12710", "diff_url": "https://github.com/huggingface/transformers/pull/12710.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12710.patch", "merged_at": 1626293098000 }
https://api.github.com/repos/huggingface/transformers/issues/12709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12709/comments
https://api.github.com/repos/huggingface/transformers/issues/12709/events
https://github.com/huggingface/transformers/pull/12709
944,669,391
MDExOlB1bGxSZXF1ZXN0NjkwMTE0Nzc2
12,709
Init adds its own files as impacted
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do? As pointed out by @patrickvonplaten, the script that fetches the right tests does not consider that the init of a submodule impacts its files. This PR addresses that.
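A toy illustration of the rule being added (the real dependency tracing in `utils/tests_fetcher.py` is more involved; the helper below is invented for illustration):

```python
from pathlib import Path

def impacted_files(changed_file):
    """If a package __init__.py changes, treat every module in that
    package as impacted; otherwise only the changed file itself is."""
    changed = Path(changed_file)
    if changed.name == "__init__.py":
        return sorted(str(p) for p in changed.parent.glob("*.py"))
    return [changed_file]
```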
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12709", "html_url": "https://github.com/huggingface/transformers/pull/12709", "diff_url": "https://github.com/huggingface/transformers/pull/12709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12709.patch", "merged_at": 1626337067000 }
https://api.github.com/repos/huggingface/transformers/issues/12708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12708/comments
https://api.github.com/repos/huggingface/transformers/issues/12708/events
https://github.com/huggingface/transformers/issues/12708
944,651,651
MDU6SXNzdWU5NDQ2NTE2NTE=
12,708
[Bug?] question answering - end position of each input is weird
{ "login": "woong97", "id": 60849888, "node_id": "MDQ6VXNlcjYwODQ5ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/60849888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woong97", "html_url": "https://github.com/woong97", "followers_url": "https://api.github.com/users/woong97/followers", "following_url": "https://api.github.com/users/woong97/following{/other_user}", "gists_url": "https://api.github.com/users/woong97/gists{/gist_id}", "starred_url": "https://api.github.com/users/woong97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woong97/subscriptions", "organizations_url": "https://api.github.com/users/woong97/orgs", "repos_url": "https://api.github.com/users/woong97/repos", "events_url": "https://api.github.com/users/woong97/events{/privacy}", "received_events_url": "https://api.github.com/users/woong97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I believe @sgugger worked on that script", "Hi there, noticed you closed this so may have come to the same conclusion, but the \"end_positions\" will give you the position of the last token in the answer. So you should add a +1 in your slice to include that token at \"end_positions\".", "Thank you for your reply.\nI recognized my mistakes, thus I closed the issue myself.\nThank you for checking one more time.\nNext time, I'll post the issue deliberately!\n\n2021년 7월 16일 (금) 오전 2:13, Sylvain Gugger ***@***.***>님이 작성:\n\n> Hi there, noticed you closed this so may have come to the same conclusion,\n> but the \"end_positions\" will give you the position of the last token in the\n> answer. So you should add a +1 in your slice to include that token at\n> \"end_positions\".\n>\n> —\n> You are receiving this because you modified the open/close state.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12708#issuecomment-880872726>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AOQH5YFSBA4AWDZPF77K3TLTX4JMJANCNFSM5AL5BOQQ>\n> .\n>\n" ]
1,626
1,626
1,626
NONE
null
I run "python run_qa.py" in transformers/examples/pytorch/question-answering. In prepare_train_features function, I think "end position" is lower than expected postiion. I tested first example in examples(squad) in "prepare_train_features" function For example, answer text = 'Saint Bernadette Soubirous' print(tokenizer(answer_text)) => return: {'input_ids': [101, 3002, 16595, 9648, 4674, 2061, 12083, 9711, 2271, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} print(input_ids[tokenized_examples['start_positions'][0]:tokenized_examples['end_positions'][0]]) => return: [3002, 16595, 9648, 4674, 2061, 12083, 9711] => Thus I think last token 2271 is dropped. For other input sentences, I think last token is dropped Isn't it bug??
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12708/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12707/comments
https://api.github.com/repos/huggingface/transformers/issues/12707/events
https://github.com/huggingface/transformers/issues/12707
944,509,452
MDU6SXNzdWU5NDQ1MDk0NTI=
12,707
Convert model from flax to TF
{ "login": "arunraja-hub", "id": 43485111, "node_id": "MDQ6VXNlcjQzNDg1MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/43485111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunraja-hub", "html_url": "https://github.com/arunraja-hub", "followers_url": "https://api.github.com/users/arunraja-hub/followers", "following_url": "https://api.github.com/users/arunraja-hub/following{/other_user}", "gists_url": "https://api.github.com/users/arunraja-hub/gists{/gist_id}", "starred_url": "https://api.github.com/users/arunraja-hub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arunraja-hub/subscriptions", "organizations_url": "https://api.github.com/users/arunraja-hub/orgs", "repos_url": "https://api.github.com/users/arunraja-hub/repos", "events_url": "https://api.github.com/users/arunraja-hub/events{/privacy}", "received_events_url": "https://api.github.com/users/arunraja-hub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "At the moment we only have Flax <=> PT and TF <=> PT conversion. So you should do the following:\r\n\r\n```python\r\nfrom transformers import T5ForConditionalGeneration, MT5TokenizerFast, TFT5ForConditionalGeneration, MT5Config, FlaxT5ForConditionalGeneration\r\n\r\nimport numpy as np\r\nimport jax\r\nimport jax.numpy as jnp\r\n\r\npretrained = \"../dumped/code-mt5-large-batch-mix/\" # earlier missed the fact that there is no ckpt in this dir\r\ntmp_path = \"../dumped/code-mt5-large-batch-mix-tensorflow\"\r\n\r\nconfig = MT5Config.from_pretrained(pretrained, from_flax=True)\r\nmodel = T5ForConditionalGeneration.from_pretrained(pretrained, config=config)\r\ntokenizer = MT5TokenizerFast(pretrained, use_fast=True, extra_ids=160)\r\n\r\ndef to_f32(t):\r\n return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)\r\n\r\nmodel.params = to_f32(model.params)\r\nmodel.save_pretrained(tmp_path)\r\n\r\nmodel_pt = T5ForConditionalGeneration.from_pretrained(tmp_path, from_flax=True)\r\nmodel_pt.save_pretrained(tmp_path)\r\n\r\nmodel_tf = TFT5ForConditionalGeneration.from_pretrained(tmp_path, from_pt=True)\r\nmodel_tf.save_pretrained(tmp_path)\r\n```", "Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
I am trying to convert my flax MT5 model to TensorFlow. I devised the following script using https://github.com/huggingface/transformers/issues/12545 ``` from transformers import MT5Model, MT5TokenizerFast, TFMT5Model, MT5Config, FlaxT5ForConditionalGeneration import numpy as np import jax import jax.numpy as jnp pretrained = "../dumped/code-mt5-large-batch-mix/" # earlier missed the fact that there is no ckpt in this dir tmp_path = "../dumped/code-mt5-large-batch-mix-tensorflow" config = MT5Config.from_pretrained(pretrained, from_flax=True) model = FlaxT5ForConditionalGeneration.from_pretrained(pretrained, config=config) tokenizer = MT5TokenizerFast(pretrained, use_fast=True, extra_ids=160) def to_f32(t): return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t) model.params = to_f32(model.params) model.save_pretrained(tmp_path) model_tf = TFMT5Model.from_pretrained(tmp_path) model_tf.save_pretrained(tmp_path) ``` However, the conversion gets aborted with this output: https://paste.ubuntu.com/p/Ynw9Tn8NC9/ According to the output, the conversion seems to require a specific file instead of the entire model directory `../dumped/code-mt5-large-batch-mix/` (` what(): basic_filebuf::underflow error reading the file: Is a directory `). We are not sure if this is the case and, if so, which specific file is required. The contents of `../dumped/code-mt5-large-batch-mix/` are: ![image](https://user-images.githubusercontent.com/43485111/125642556-933161e1-a8e3-4611-8097-9ba9f8c0e022.png) Some help with this model conversion is much appreciated. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12706/comments
https://api.github.com/repos/huggingface/transformers/issues/12706/events
https://github.com/huggingface/transformers/pull/12706
944,477,125
MDExOlB1bGxSZXF1ZXN0Njg5OTU1NDI1
12,706
Deprecate TFTrainer
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
This PR adds a deprecation warning to `TFTrainer`, and offers advice and a link to the new Keras examples.
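For context, such a deprecation typically amounts to emitting a warning when the class is instantiated; a sketch of the pattern (illustrative wording, not the PR's exact message or code):

```python
import warnings

class TFTrainer:
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "TFTrainer is deprecated and will be removed in a future release. "
            "We recommend training TensorFlow models with Keras fit(); see the "
            "updated examples under examples/tensorflow.",
            FutureWarning,
        )
```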
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12706/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12706", "html_url": "https://github.com/huggingface/transformers/pull/12706", "diff_url": "https://github.com/huggingface/transformers/pull/12706.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12706.patch", "merged_at": 1626274754000 }
https://api.github.com/repos/huggingface/transformers/issues/12705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12705/comments
https://api.github.com/repos/huggingface/transformers/issues/12705/events
https://github.com/huggingface/transformers/pull/12705
944,448,505
MDExOlB1bGxSZXF1ZXN0Njg5OTMxMTM0
12,705
Fix uninitialized variables when `config.mask_feature_prob > 0`
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot!" ]
1,626
1,626
1,626
MEMBER
null
When `config.mask_feature_prob > 0` AND `mask_time_indices is not None`, `batch_size` and `sequence_length` are not defined for masking over the features axis. This PR solves this.
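A toy reconstruction of the failure mode (variable and helper names are invented, not the actual Wav2Vec2 code): the shape variables were only assigned inside the time-masking branch, so feature masking crashed whenever the caller supplied `mask_time_indices`. Computing the shape unconditionally fixes it:

```python
import numpy as np

def _random_mask(shape, prob):  # hypothetical stand-in for the real mask helper
    return np.random.rand(*shape) < prob

def apply_masks(hidden_states, mask_time_prob, mask_feature_prob, mask_time_indices=None):
    # Fix: derive the shape before either branch, not inside the first one.
    batch_size, sequence_length, hidden_size = hidden_states.shape

    if mask_time_prob > 0 and mask_time_indices is None:
        mask_time_indices = _random_mask((batch_size, sequence_length), mask_time_prob)
    if mask_time_indices is not None:
        hidden_states[mask_time_indices] = 0.0  # zero out masked time steps

    if mask_feature_prob > 0:
        feat = _random_mask((batch_size, hidden_size), mask_feature_prob)
        # broadcast the (batch, hidden) feature mask across the time axis
        hidden_states[np.repeat(feat[:, None, :], sequence_length, axis=1)] = 0.0
    return hidden_states
```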
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12705/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12705", "html_url": "https://github.com/huggingface/transformers/pull/12705", "diff_url": "https://github.com/huggingface/transformers/pull/12705.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12705.patch", "merged_at": 1626273019000 }
https://api.github.com/repos/huggingface/transformers/issues/12704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12704/comments
https://api.github.com/repos/huggingface/transformers/issues/12704/events
https://github.com/huggingface/transformers/issues/12704
944,418,445
MDU6SXNzdWU5NDQ0MTg0NDU=
12,704
Where is the causal mask when using BertLMHeadModel with config.is_decoder = True?
{ "login": "Doragd", "id": 26213546, "node_id": "MDQ6VXNlcjI2MjEzNTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Doragd", "html_url": "https://github.com/Doragd", "followers_url": "https://api.github.com/users/Doragd/followers", "following_url": "https://api.github.com/users/Doragd/following{/other_user}", "gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Doragd/subscriptions", "organizations_url": "https://api.github.com/users/Doragd/orgs", "repos_url": "https://api.github.com/users/Doragd/repos", "events_url": "https://api.github.com/users/Doragd/events{/privacy}", "received_events_url": "https://api.github.com/users/Doragd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Doragd, BERT is an encoder model, and is therefore ill-suited to the causal language modeling task. Is there a reason you would like to use that model specifically for causal language modeling?", "Hi, @LysandreJik I just apply causal language modeling as an auxiliary task to lead stable training of our model. I should have implemented this process myself, but I found this class `BertLMHeadModel`. However, I did not find any code snippet to implement causal mask. I would like to know that if is_decoder=True is set in BERT, can causal language modeling be achieved correctly?", "cc @patrickvonplaten ", "Setting `is_decoder=True` automatically creates a causal mask in those lines of code: https://github.com/huggingface/transformers/blob/7fae5350528474c29b664ebb4df5bbc8104b48ec/src/transformers/modeling_utils.py#L266" ]
1,626
1,627
1,627
NONE
null
I hope to use BERT for the task of causal language modeling. `BertLMHeadModel` seems to meet my needs, but I did not find any code snippets about the causal mask, even when I set `config.is_decoder=True`. I only found the following related code, in https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L968. However, I do not have any values to pass to the `encoder_hidden_states` argument when doing causal language modeling. So maybe the causal mask does not work? ``` if self.config.is_decoder and encoder_hidden_states is not None: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None ```
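To verify the answer in the comments empirically: with `is_decoder=True`, perturbing a later token must not change the logits at earlier positions. A quick sketch with a small randomly initialized model (the config sizes here are arbitrary, chosen only to keep the check fast):

```python
import torch
from transformers import BertConfig, BertLMHeadModel

config = BertConfig(is_decoder=True, hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128)
model = BertLMHeadModel(config).eval()

ids = torch.tensor([[101, 7592, 2088, 102]])
perturbed = ids.clone()
perturbed[0, -1] = 999  # change only the last token

with torch.no_grad():
    a = model(input_ids=ids).logits
    b = model(input_ids=perturbed).logits

# Causal masking => logits before the perturbed position are unchanged.
print(torch.allclose(a[0, :-1], b[0, :-1]))  # True
```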
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12703/comments
https://api.github.com/repos/huggingface/transformers/issues/12703/events
https://github.com/huggingface/transformers/pull/12703
944,381,181
MDExOlB1bGxSZXF1ZXN0Njg5ODczNzI2
12,703
Update TF examples README
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Update the general README for all TF examples now that the Keras push is finished, as well as adding in the missing README for the token classification example.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12703/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12703", "html_url": "https://github.com/huggingface/transformers/pull/12703", "diff_url": "https://github.com/huggingface/transformers/pull/12703.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12703.patch", "merged_at": 1626272125000 }
https://api.github.com/repos/huggingface/transformers/issues/12702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12702/comments
https://api.github.com/repos/huggingface/transformers/issues/12702/events
https://github.com/huggingface/transformers/issues/12702
944,340,206
MDU6SXNzdWU5NDQzNDAyMDY=
12,702
Examples/flax/run_clm_flax.py raises a file-extension error for the train_file argument even though the file has the correct extension
{ "login": "AnantShankhdhar", "id": 56432951, "node_id": "MDQ6VXNlcjU2NDMyOTUx", "avatar_url": "https://avatars.githubusercontent.com/u/56432951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnantShankhdhar", "html_url": "https://github.com/AnantShankhdhar", "followers_url": "https://api.github.com/users/AnantShankhdhar/followers", "following_url": "https://api.github.com/users/AnantShankhdhar/following{/other_user}", "gists_url": "https://api.github.com/users/AnantShankhdhar/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnantShankhdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnantShankhdhar/subscriptions", "organizations_url": "https://api.github.com/users/AnantShankhdhar/orgs", "repos_url": "https://api.github.com/users/AnantShankhdhar/repos", "events_url": "https://api.github.com/users/AnantShankhdhar/events{/privacy}", "received_events_url": "https://api.github.com/users/AnantShankhdhar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @AnantShankhdhar, \r\n\r\nI cannot copy-paste the script to run the code since it's a screenshot (please never post screenshots of code in an issue; always copy-paste & format them with\r\n\r\n```\r\nrun.sh --param1 --param2\r\n```)", "To solve your error: your bash script currently has this format\r\n\r\n```bash\r\n./run_clm_flax.py \\\r\n ...\r\n --train_file = \"file.txt\" \\\r\n```\r\n\r\nbut it should instead have the following format\r\n\r\n```bash\r\n./run_clm_flax.py \\\r\n ...\r\n --train_file=\"file.txt\" \\\r\n```\r\n\r\n(Note how there are no whitespaces around the `=` in the bash script)", "Thanks, yes, it worked" ]
1,626
1,626
1,626
NONE
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help @patrickvonplaten @patil-suraj ## Information Model I am using (Bert, XLNet ...): GPT-2 The problem arises when using: * [ 1] the official example scripts: (give details below) Used examples/flax/run_clm_flax.py for GPT-2 text generation * [ 2] my own modified scripts: (give details below) For the run command, I modified the one given for causal language modeling in examples/flax/language-modeling/README.md by removing the dataset name parameter and instead passing the train_file argument as --train_file = "/home/anantshankhdhar/gpt2-rap-lyric-generator/Lilgpt.txt"\ from my system The task I am working on is: * [ 1] my own task or dataset: (give details below) I made a dataset called Lilgpt.txt, which is a txt file consisting of rap lyrics. Each song starts with a <BOS> token and ends with an <EOS> token ## To reproduce Steps to reproduce the behavior: 1. Make a new directory test and change to this directory 2. Add tokenizer.json and config.json from the gpt2 repo (https://huggingface.co/gpt2/tree/main) to this repository 3. Make a run.sh file like this <img width="1440" alt="Screenshot 2021-07-14 at 2 34 59 AM" src="https://user-images.githubusercontent.com/56432951/125616296-4fbc7b18-1432-4d35-a4e6-7563ed8edd9e.png"> 4. Add a txt file as the train_file attribute in run.sh and add a txt file dataset to the directory 5. Type ./run.sh in the terminal ## Expected behavior 1. You will get the following error: File "./run_clm_flax.py", line 640, in <module> main() File "./run_clm_flax.py", line 241, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/home/anantshankhdhar/transformers/src/transformers/hf_argparser.py", line 191, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 13, in __init__ File "./run_clm_flax.py", line 164, in __post_init__ assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." AssertionError: `train_file` should be a csv, a json or a txt file. 2. However, our train_file is a txt file, so we should not have gotten this error 3. The expected behavior is that training begins smoothly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12702/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12701/comments
https://api.github.com/repos/huggingface/transformers/issues/12701/events
https://github.com/huggingface/transformers/pull/12701
944,328,934
MDExOlB1bGxSZXF1ZXN0Njg5ODI4MDc3
12,701
Translate README.md to Traditional Chinese
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'll ask my friends, who are also native speakers of Traditional Chinese, to help double-check the terms in the files, so we can ensure the accuracy of the translation.", "@JetRunner We have had the files checked. It is ready to merge if there are no other mistakes.", "Cool! I'll give it a look and then we are ready to merge." ]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? 1. Add README_zh-hant.md and links to direct users to each README. 2. Some of the terms in the file can be found at [National Academy for Educational Research](https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @JetRunner
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12701/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12701/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12701", "html_url": "https://github.com/huggingface/transformers/pull/12701", "diff_url": "https://github.com/huggingface/transformers/pull/12701.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12701.patch", "merged_at": 1626363339000 }
https://api.github.com/repos/huggingface/transformers/issues/12700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12700/comments
https://api.github.com/repos/huggingface/transformers/issues/12700/events
https://github.com/huggingface/transformers/issues/12700
944,324,826
MDU6SXNzdWU5NDQzMjQ4MjY=
12,700
Doc - expecting `push_to_hub` method for Tokenizers to be also in the Tokenizer class doc pages
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "cc @sgugger " ]
1,626
1,626
1,626
MEMBER
null
# 🚀 Doc request The gist is in the title. I was expecting the doc/docstring for the `push_to_hub` method for Tokenizers to also appear in the Tokenizer class doc pages, e.g. on the main `Tokenizer` API landing page: https://huggingface.co/transformers/main_classes/tokenizer.html
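For context, a minimal example of the method in question (the repository name is purely illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Uploads the tokenizer files to the Hub; this is the method whose
# docstring should also surface on the main Tokenizer API page.
tokenizer.push_to_hub("my-username/my-tokenizer")
```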
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12700/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12699/comments
https://api.github.com/repos/huggingface/transformers/issues/12699/events
https://github.com/huggingface/transformers/pull/12699
944,220,822
MDExOlB1bGxSZXF1ZXN0Njg5NzM0MDMw
12,699
Add a custom timeout for log replica test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hmm, it's really slow - clocked `1m16.735s` on my machine. \r\n\r\nLet me see first if it can be made faster.", "It's like 4 tests in one - so it adds up - I guess I could just split it into several sub-tests.", "What do you guys prefer here? We can also make it @slow - which will shave off ~80sec - it doesn't need to run all the time at all.", "No strong opinion on my side, do what you think is best!", "oh, but these are multi-gpu tests, so they are @slow already, as they only run on our machine\r\n\r\n@LysandreJik, does this impact the push workflow? or just the scheduled one?\r\n\r\nIf so I'd also `@slow` all the fairscale/apex tests, as these definitely don't need to run often at all.", "Here we go: https://github.com/huggingface/transformers/pull/12710 - reworked 1 test into 4 subtests, shouldn't run longer than the timeout now.", "merged the alternative solution, closing this then", "The multi-GPU tests are run every time there is a commit on `master`, so it's not only slow tests. We have fast and slow GPU & multi-GPU tests.\r\n\r\nThanks a lot for splitting that test, way better solution.", "ok, so should we mark the fairscale and apex tests as @slow then? These are hardly ever used by anyone, so it would be a waste to spend $$ and time on those.", "We can, but it's not a super high priority, they seem to run quickly enough" ]
1,626
1,626
1,626
MEMBER
null
Add a custom timeout for the log replica test. Let's keep these outliers to a minimum.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12699/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12699", "html_url": "https://github.com/huggingface/transformers/pull/12699", "diff_url": "https://github.com/huggingface/transformers/pull/12699.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12699.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12698/comments
https://api.github.com/repos/huggingface/transformers/issues/12698/events
https://github.com/huggingface/transformers/issues/12698
944,187,776
MDU6SXNzdWU5NDQxODc3NzY=
12,698
[Examples] Flax Seq2Seq example fails when doing only eval or predict
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "yes, right now for simplicity the scripts are written such that they always expect train datasets.\r\n\r\nFeel free to open a PR :)", "Sure, I will open PR!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
CONTRIBUTOR
null
## Description In the Flax example, if we only run the predict or eval step, the script is not flexible enough to work. It will fail at this line https://github.com/huggingface/transformers/blob/5dd0c956a8eb492c8597e9673cc1d818f0e6b501/examples/flax/summarization/run_summarization_flax.py#L569 because the current script is written in such a way that it always needs a training dataset. I have modified a version of the same file which works, but in that case I need to remove the line below https://github.com/huggingface/transformers/blob/5dd0c956a8eb492c8597e9673cc1d818f0e6b501/examples/flax/summarization/run_summarization_flax.py#L467 and I always need to pass training data and create train_dataset by preprocessing, even if I am not doing training. A guarded version could look like the sketch below. ### Who can help @patrickvonplaten @patil-suraj @sgugger
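A hypothetical restructuring, sketched with the variable names the script already defines (`training_args`, `dataset`, `preprocess_function`, `column_names`); it is not the exact fix, just the shape of it:

```python
# Sketch: only build train_dataset (and anything derived from it)
# when training is actually requested.
if training_args.do_train:
    if "train" not in dataset:
        raise ValueError("--do_train requires a train dataset")
    train_dataset = dataset["train"].map(
        preprocess_function, batched=True, remove_columns=column_names
    )

if training_args.do_eval:
    if "validation" not in dataset:
        raise ValueError("--do_eval requires a validation dataset")
    eval_dataset = dataset["validation"].map(
        preprocess_function, batched=True, remove_columns=column_names
    )
```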
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12698/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12697/comments
https://api.github.com/repos/huggingface/transformers/issues/12697/events
https://github.com/huggingface/transformers/issues/12697
944,137,742
MDU6SXNzdWU5NDQxMzc3NDI=
12,697
SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error
{ "login": "lancekung", "id": 19167336, "node_id": "MDQ6VXNlcjE5MTY3MzM2", "avatar_url": "https://avatars.githubusercontent.com/u/19167336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lancekung", "html_url": "https://github.com/lancekung", "followers_url": "https://api.github.com/users/lancekung/followers", "following_url": "https://api.github.com/users/lancekung/following{/other_user}", "gists_url": "https://api.github.com/users/lancekung/gists{/gist_id}", "starred_url": "https://api.github.com/users/lancekung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lancekung/subscriptions", "organizations_url": "https://api.github.com/users/lancekung/orgs", "repos_url": "https://api.github.com/users/lancekung/repos", "events_url": "https://api.github.com/users/lancekung/events{/privacy}", "received_events_url": "https://api.github.com/users/lancekung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hmm, somehow this issue has never been addressed. \r\n\r\nIn such cases you will have better luck reporting torch-land issues to https://github.com/pytorch/pytorch/issues, as chances are low that we have the required understanding.\r\n\r\nI tried to google the exception and only found this to be relevant:\r\nhttps://discuss.pytorch.org/t/autograd-vague-error-returned-null-without-setting-an-error/112781/6\r\nAre you by chance also using apex's amp?\r\n\r\nSomeone reported that building their own version of pytorch solved the problem. So perhaps you could try to switch to an older or newer pytorch and see if the problem goes away?\r\n\r\nOn my setup (pt-1.9.0)\r\n```\r\npython -c \"import pickle; from transformers import AutoModelForCausalLM; pickle.dumps(AutoModelForCausalLM)\"\r\n```\r\nit works without a problem (that is, if that's the code that caused the error in the OP).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,631
1,631
NONE
null
> ``` > import pickle > from transformers import AutoModelForCausalLM > > pickle.dumps(AutoModelForCausalLM) > ``` > > I think it comes from the fact that those are autogenerated. Thanks for your help, but when I tested with your modification from #12654, a new problem arose: @stas00 @patrickvonplaten @LysandreJik ``` Traceback (most recent call last): File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process fn(rank, size) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 456, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1275, in train tr_loss += self.training_step(model, inputs) File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1778, in training_step self.scaler.scale(loss).backward() File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward Variable._execution_engine.run_backward( SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error ``` _Originally posted by @lancekung in https://github.com/huggingface/transformers/issues/12621#issuecomment-878738997_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12697/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12696/comments
https://api.github.com/repos/huggingface/transformers/issues/12696/events
https://github.com/huggingface/transformers/pull/12696
944,031,642
MDExOlB1bGxSZXF1ZXN0Njg5NTc0OTQ3
12,696
Refactored code to improve performance.
{ "login": "AllStars101-sudo", "id": 53670363, "node_id": "MDQ6VXNlcjUzNjcwMzYz", "avatar_url": "https://avatars.githubusercontent.com/u/53670363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AllStars101-sudo", "html_url": "https://github.com/AllStars101-sudo", "followers_url": "https://api.github.com/users/AllStars101-sudo/followers", "following_url": "https://api.github.com/users/AllStars101-sudo/following{/other_user}", "gists_url": "https://api.github.com/users/AllStars101-sudo/gists{/gist_id}", "starred_url": "https://api.github.com/users/AllStars101-sudo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AllStars101-sudo/subscriptions", "organizations_url": "https://api.github.com/users/AllStars101-sudo/orgs", "repos_url": "https://api.github.com/users/AllStars101-sudo/repos", "events_url": "https://api.github.com/users/AllStars101-sudo/events{/privacy}", "received_events_url": "https://api.github.com/users/AllStars101-sudo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Sorry for the ping, but I'd like to know your thoughts on this PR and whether I did what you had asked me to do in the [previous one](https://github.com/huggingface/transformers/pull/12639). Thanks!", "Hi, I'd like to put in my two cents on this PR.\r\n\r\nFirst, although most of the time we want to make the code concise, we should also keep the code readable and maintainable, so that the next contributor can easily understand the purpose of a code segment. \r\n\r\nTaking a part of your contribution (src/transformers/commands/user.py) as an example:\r\n```\r\n- lines = []\r\n- lines.append(row_format.format(*headers))\r\n+ lines = [row_format.format(*headers)]\r\n lines.append(row_format.format(*[\"-\" * w for w in col_widths])) \r\n```\r\nHere you used a shortened expression to instantiate the `lines` list with the first line at the same time, which is good when you have just one element to append; however, the `lines` list here is meant to interact with users by displaying a block of text, and we expect the `append` method to be used many more times to add additional information. As a result, this change might break the consistency of the code block, as it mixes instantiation and appending. On the contrary, keeping them separate (instantiation & appending of new values) increases readability for the next contributor.\r\n\r\nHere is another example (src/transformers/commands/serving.py):\r\n ```\r\n nlp = pipeline(\r\n task=args.task,\r\n - model=args.model if args.model else None,\r\n + model=args.model or None,\r\n config=args.config,\r\n tokenizer=args.tokenizer,\r\n device=args.device,\r\n )\r\n```\r\nThough the `or` operator does the job here, this change may surprise or confuse the next contributor, because we usually use `or` when we have to check whether the left or right operand is truthy, and here `None` is always falsy. As a result, a reader may wonder why we check a value that is always falsy.\r\n\r\nThe other thing is that this PR includes too many changes across numerous files (51 files changed), from setup.py to model definitions. This makes it difficult for the maintainers to review. Therefore, I would suggest choosing a subset of files that are related to each other, and making sure the changes stay readable and maintainable. \r\n\r\nGood luck!", "> Hi, I'd like to put in my two cents on this PR.\r\n> \r\n> First, although most of the time we want to make the code concise, we should also keep the code readable and maintainable, so that the next contributor can easily understand the purpose of a code segment.\r\n> \r\n> Taking a part of your contribution (src/transformers/commands/user.py) as an example:\r\n> \r\n> ```\r\n> - lines = []\r\n> - lines.append(row_format.format(*headers))\r\n> + lines = [row_format.format(*headers)]\r\n> lines.append(row_format.format(*[\"-\" * w for w in col_widths])) \r\n> ```\r\n> \r\n> Here you used a shortened expression to instantiate the `lines` list with the first line at the same time, which is good when you have just one element to append; however, the `lines` list here is meant to interact with users by displaying a block of text, and we expect the `append` method to be used many more times to add additional information. As a result, this change might break the consistency of the code block, as it mixes instantiation and appending. On the contrary, keeping them separate (instantiation & appending of new values) increases readability for the next contributor.\r\n> \r\n> Here is another example (src/transformers/commands/serving.py):\r\n> \r\n> ```\r\n> nlp = pipeline(\r\n> task=args.task,\r\n> - model=args.model if args.model else None,\r\n> + model=args.model or None,\r\n> config=args.config,\r\n> tokenizer=args.tokenizer,\r\n> device=args.device,\r\n> )\r\n> ```\r\n> \r\n> Though the `or` operator does the job here, this change may surprise or confuse the next contributor, because we usually use `or` when we have to check whether the left or right operand is truthy, and here `None` is always falsy. As a result, a reader may wonder why we check a value that is always falsy.\r\n> \r\n> The other thing is that this PR includes too many changes across numerous files (51 files changed), from setup.py to model definitions. This makes it difficult for the maintainers to review. Therefore, I would suggest choosing a subset of files that are related to each other, and making sure the changes stay readable and maintainable.\r\n> \r\n> Good luck!\r\n\r\nThanks for replying! I'll keep these points in mind and make changes accordingly. Is it okay if I write a few comments explaining some of these hard-to-read code segments, or should I not change them at all? ", "Writing comments is a good idea when a code segment cannot directly express its purpose by itself. However, in my opinion, the original implementations I mentioned above clearly deliver their purpose and goal to the next contributor without any comment, and I think this is the best coding practice (even if the performance difference is minuscule).\r\n\r\nTherefore, I would suggest making sure a code segment really needs to be refactored (i.e. the change will bring a significant performance improvement, such as improved time complexity, or increase readability, etc.) before refactoring. ", "> Writing comments is a good idea when a code segment cannot directly express its purpose by itself. However, in my opinion, the original implementations I mentioned above clearly deliver their purpose and goal to the next contributor without any comment, and I think this is the best coding practice (even if the performance difference is minuscule).\n> \n> Therefore, I would suggest making sure a code segment really needs to be refactored (i.e. the change will bring a significant performance improvement, such as improved time complexity, or increase readability, etc.) before refactoring. \n\nGotcha, thanks." ]
1,626
1,626
1,626
NONE
null
# What does this PR do? Refactors several segments of code in `scripts`, `src`, `tests`, `utils`, and `setup.py`, slightly improving performance by using more compact constructs and newer practices. No new functions, methods, or models were added; therefore no documentation changes were required. ## Before submitting * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. * [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). * [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12696/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12696", "html_url": "https://github.com/huggingface/transformers/pull/12696", "diff_url": "https://github.com/huggingface/transformers/pull/12696.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12696.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12695/comments
https://api.github.com/repos/huggingface/transformers/issues/12695/events
https://github.com/huggingface/transformers/pull/12695
944,021,788
MDExOlB1bGxSZXF1ZXN0Njg5NTY2MDc5
12,695
[Deepspeed] add many more models to the model zoo test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "nice work @stas00, have you tested Perceiver with DeepSpeed?", "Would be glad to do that, @sameeravithana - all I need is a Trainer-based example script that I can test with.\r\n\r\nAs you can see from this map:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4a419d4995111c22d6842ee1bcd2d3f500150845/tests/deepspeed/test_model_zoo.py#L231-L270\r\n\r\nI have each model tested by one of the HF Trainer examples. Is there one that can be used with Perceiver?\r\n" ]
1,626
1,652
1,652
CONTRIBUTOR
null
This PR continues figuring out how to make various models work with DeepSpeed (a lot of the fixes happen on the DeepSpeed side). Most models just work out of the box, so the main purpose of this PR is to test as many models as possible; there are no fixes to add. - [x] update coverage to albert, bart, bert, bigbird_pegasus, big_bird, blenderbot, deberta, deberta_v2, distilbert, electra, flaubert, fsmt, funnel, gpt2, gptj, gpt_neo, layoutlm, led, longformer, marian, mbart, mobilebert, mpnet, pegasus, prophetnet, roberta, squeezebert, t5, t5_v1, vit, xlm_roberta, xlnet Thanks to @LysandreJik for creating the tiny test models for many of the HF models! Some models I couldn't cover for a variety of reasons unrelated to DeepSpeed (missing tokenizers, missing tiny models, missing example scripts to exercise them), but their status is documented in the script. Over time more will be tested. Blocking events - all resolved: - [x] https://github.com/microsoft/DeepSpeed/pull/1227 (fixes reference counting) - [x] https://github.com/microsoft/DeepSpeed/pull/1380 (fixes zero_to_fp32 recovery of uneven param shapes) - [x] https://github.com/huggingface/transformers/pull/13665 (fixes positional embeddings: m2m_100 and others) - [x] https://github.com/microsoft/DeepSpeed/pull/1916#event-6563217392 (fixes tracing) - [x] 0.6.4 DeepSpeed release that includes all the merged PRs
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12695/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12695", "html_url": "https://github.com/huggingface/transformers/pull/12695", "diff_url": "https://github.com/huggingface/transformers/pull/12695.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12695.patch", "merged_at": 1652196163000 }
https://api.github.com/repos/huggingface/transformers/issues/12694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12694/comments
https://api.github.com/repos/huggingface/transformers/issues/12694/events
https://github.com/huggingface/transformers/pull/12694
943,962,565
MDExOlB1bGxSZXF1ZXN0Njg5NTE2Nzkx
12,694
Refactored code to improve performance
{ "login": "AllStars101-sudo", "id": 53670363, "node_id": "MDQ6VXNlcjUzNjcwMzYz", "avatar_url": "https://avatars.githubusercontent.com/u/53670363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AllStars101-sudo", "html_url": "https://github.com/AllStars101-sudo", "followers_url": "https://api.github.com/users/AllStars101-sudo/followers", "following_url": "https://api.github.com/users/AllStars101-sudo/following{/other_user}", "gists_url": "https://api.github.com/users/AllStars101-sudo/gists{/gist_id}", "starred_url": "https://api.github.com/users/AllStars101-sudo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AllStars101-sudo/subscriptions", "organizations_url": "https://api.github.com/users/AllStars101-sudo/orgs", "repos_url": "https://api.github.com/users/AllStars101-sudo/repos", "events_url": "https://api.github.com/users/AllStars101-sudo/events{/privacy}", "received_events_url": "https://api.github.com/users/AllStars101-sudo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
NONE
null
# What does this PR do? Refactors several segments of code in `scripts`, `src`, `tests`, `utils`, and `setup.py`, slightly improving performance by using more compact constructs and newer practices. No new functions, methods, or models were added; therefore no documentation changes were required. ## Before submitting * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. * [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). * [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12694/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12694", "html_url": "https://github.com/huggingface/transformers/pull/12694", "diff_url": "https://github.com/huggingface/transformers/pull/12694.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12694.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12693/comments
https://api.github.com/repos/huggingface/transformers/issues/12693/events
https://github.com/huggingface/transformers/issues/12693
943,941,613
MDU6SXNzdWU5NDM5NDE2MTM=
12,693
Strange output from summarization models
{ "login": "Zahz1", "id": 59493450, "node_id": "MDQ6VXNlcjU5NDkzNDUw", "avatar_url": "https://avatars.githubusercontent.com/u/59493450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zahz1", "html_url": "https://github.com/Zahz1", "followers_url": "https://api.github.com/users/Zahz1/followers", "following_url": "https://api.github.com/users/Zahz1/following{/other_user}", "gists_url": "https://api.github.com/users/Zahz1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zahz1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zahz1/subscriptions", "organizations_url": "https://api.github.com/users/Zahz1/orgs", "repos_url": "https://api.github.com/users/Zahz1/repos", "events_url": "https://api.github.com/users/Zahz1/events{/privacy}", "received_events_url": "https://api.github.com/users/Zahz1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I see this happening now with pegasus models. Similarly to @Zahz1, I get the following:\r\n\r\n\"In our series of letters from African journalists, filmmaker and columnist Ahmed Rashid looks at some of the issues facing the continent.\"\r\n\r\nThe text being summarized has nothing to do with the above generated summarization." ]
1,626
1,678
1,629
NONE
null
I am trying to get some models working for summarizing news articles, but I keep getting this strange output: "In our series of letters from African journalists, film-maker and columnist Farai Sevenzo looks at ... [subject of input article]" This has happened with multiple models (Pegasus, Bart, and Roberta) and multiple different inputs. The output is either the correct summary of the article or the incorrect output listed above. Does anyone have any idea how to fix this problem? Code:

```python
from transformers import PegasusTokenizer, TFPegasusModel, PegasusModel, TFPegasusForConditionalGeneration
import tensorflow as tf

src_text = """ article text put here """

tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-xsum')
model = TFPegasusForConditionalGeneration.from_pretrained('google/pegasus-xsum')
inputs = tokenizer(src_text, truncation=True, padding='longest', return_tensors="tf")
translated = model.generate(**inputs)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text)
```
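One hedged debugging step (assuming truncation is the culprit, which is a common cause of this generic xsum-style opener): check how many tokens actually reach the model, since `google/pegasus-xsum` reads at most 512 positions. This snippet continues from the code above:

```python
# Sketch: inspect the tokenized length before generating.
n_tokens = int(inputs["input_ids"].shape[-1])
print(f"{n_tokens} tokens fed to the model (pegasus-xsum sees at most 512)")
```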
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12693/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12692/comments
https://api.github.com/repos/huggingface/transformers/issues/12692/events
https://github.com/huggingface/transformers/pull/12692
943,915,236
MDExOlB1bGxSZXF1ZXN0Njg5NDc3MTcw
12,692
Provide mask_time_indices to `_mask_hidden_states` to avoid double masking
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for fixing this! Can you also make the fix to `modeling_wav2vec2.py`? I think the same error is there.", "And for both tf_hubert and tf_wav2vec2, we need to make the change as well, I think.", "Thanks a lot!" ]
1,626
1,626
1,626
MEMBER
null
When training Hubert, the current behavior randomly masks some "spans" of time according to `mask_time_indices`, which may or may not be provided to `forward(..., mask_time_indices=Optional[torch.Tensor])`. When this value was provided (_useful to mask the loss over non-masked spans_), the mask was applied outside of the `_mask_hidden_states(...)` function. Then a new mask was generated inside `_mask_hidden_states(...)`, potentially masking some other tokens again, independently of what was provided through `mask_time_indices`. This PR fixes that by ensuring we only mask spans inside `_mask_hidden_states(...)` and correctly apply the masking operation exactly once.
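A sketch of the flow this PR enforces (names follow `modeling_hubert.py`, but treat the details as illustrative rather than the exact merged diff):

```python
# Illustrative single-masking flow: a caller-provided mask short-circuits
# the random sampling, so masking happens exactly once.
def _mask_hidden_states(self, hidden_states, mask_time_indices=None):
    if not getattr(self.config, "apply_spec_augment", True):
        return hidden_states

    if mask_time_indices is not None:
        # apply the user-supplied span mask as-is, no re-sampling
        hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)
    elif self.config.mask_time_prob > 0 and self.training:
        # otherwise sample a fresh time mask here, and only here
        ...  # _compute_mask_indices(...), then apply masked_spec_embed
    return hidden_states
```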
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12692", "html_url": "https://github.com/huggingface/transformers/pull/12692", "diff_url": "https://github.com/huggingface/transformers/pull/12692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12692.patch", "merged_at": 1626261454000 }
https://api.github.com/repos/huggingface/transformers/issues/12691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12691/comments
https://api.github.com/repos/huggingface/transformers/issues/12691/events
https://github.com/huggingface/transformers/issues/12691
943,896,926
MDU6SXNzdWU5NDM4OTY5MjY=
12,691
OSError: Not found: "/root/.cache/huggingface/transformers/5ec31591d9130cc9be0872e6b3dc0b276e514ab96e68404ac4a876ff03cb413b.dbd4bc2544d5c9f8f0d109844726c1600fa95cf0ba770b54c146f702be6e55dc": No such file or directory Error #2
{ "login": "EricPeter", "id": 37067613, "node_id": "MDQ6VXNlcjM3MDY3NjEz", "avatar_url": "https://avatars.githubusercontent.com/u/37067613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EricPeter", "html_url": "https://github.com/EricPeter", "followers_url": "https://api.github.com/users/EricPeter/followers", "following_url": "https://api.github.com/users/EricPeter/following{/other_user}", "gists_url": "https://api.github.com/users/EricPeter/gists{/gist_id}", "starred_url": "https://api.github.com/users/EricPeter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EricPeter/subscriptions", "organizations_url": "https://api.github.com/users/EricPeter/orgs", "repos_url": "https://api.github.com/users/EricPeter/repos", "events_url": "https://api.github.com/users/EricPeter/events{/privacy}", "received_events_url": "https://api.github.com/users/EricPeter/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! What's the code that triggers this error?", "import os\r\nloaded_model = torch.load(\"mt_luganda.pt\", map_location=torch.device('cpu'))", "Are you sure? That's unrelated to `transformers` or `huggingface`, yet I do see a `transformers` cache error in your issue title.", "I am trying to load that model on another machine.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Same problem.", "Any solution?", "Same issue. Trying to load a pickled tokenizer inside of a docker container:\r\nwith open(f\"t5-base_tokenizer.pkl\", 'rb') as f:\r\n tok = pickle.load(f)" ]
1,626
1,666
1,629
NONE
null
This happens when trying to load the model on another device. ## Environment info - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. ## Expected behavior
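A hedged workaround sketch (the `AutoModelForSeq2SeqLM` class is an assumption, since the issue does not say which architecture `mt_luganda.pt` wraps): serializing with `save_pretrained` instead of pickling whole objects avoids baking absolute cache paths into the checkpoint.

```python
# On the machine where the model was trained:
model.save_pretrained("mt_luganda")       # writes config.json + weights
tokenizer.save_pretrained("mt_luganda")   # writes the tokenizer files

# On the other machine:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("mt_luganda")
tokenizer = AutoTokenizer.from_pretrained("mt_luganda")
```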
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12690/comments
https://api.github.com/repos/huggingface/transformers/issues/12690/events
https://github.com/huggingface/transformers/pull/12690
943,896,458
MDExOlB1bGxSZXF1ZXN0Njg5NDYxMDM1
12,690
[Deepspeed] non-native optimizers are mostly ok with zero-offload
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
As noticed in https://github.com/huggingface/transformers/issues/11044#issuecomment-870742459, most non-DeepSpeed optimizers should work with ZeRO offload as long as they have both a CPU and a GPU implementation (LAMB being the exception). This PR therefore relaxes the earlier, incorrectly imposed restriction. @sgugger
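A minimal sketch of the relaxed check, with illustrative names (the actual guard in the Trainer's DeepSpeed integration may differ):
```python
def check_offload_optimizer(is_offload_enabled: bool, optimizer_name: str) -> None:
    # `is_offload_enabled` and `optimizer_name` are illustrative names, not the
    # real Trainer attributes. LAMB is the known exception: it lacks a matching
    # CPU implementation, so it cannot run with ZeRO offload.
    if is_offload_enabled and optimizer_name.lower() == "lamb":
        raise ValueError(
            "ZeRO offload requires an optimizer with both CPU and GPU "
            "implementations; LAMB is not supported."
        )
```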
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12690/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12690", "html_url": "https://github.com/huggingface/transformers/pull/12690", "diff_url": "https://github.com/huggingface/transformers/pull/12690.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12690.patch", "merged_at": 1626232731000 }
https://api.github.com/repos/huggingface/transformers/issues/12689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12689/comments
https://api.github.com/repos/huggingface/transformers/issues/12689/events
https://github.com/huggingface/transformers/pull/12689
943,841,501
MDExOlB1bGxSZXF1ZXN0Njg5NDEzNjEy
12,689
Flax MLM: Allow validation split when loading dataset from local file
{ "login": "fgaim", "id": 4906991, "node_id": "MDQ6VXNlcjQ5MDY5OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4906991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fgaim", "html_url": "https://github.com/fgaim", "followers_url": "https://api.github.com/users/fgaim/followers", "following_url": "https://api.github.com/users/fgaim/following{/other_user}", "gists_url": "https://api.github.com/users/fgaim/gists{/gist_id}", "starred_url": "https://api.github.com/users/fgaim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fgaim/subscriptions", "organizations_url": "https://api.github.com/users/fgaim/orgs", "repos_url": "https://api.github.com/users/fgaim/repos", "events_url": "https://api.github.com/users/fgaim/events{/privacy}", "received_events_url": "https://api.github.com/users/fgaim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? In the Flax training scripts for MLM, CLM, and T5, this PR enables the option to apply `validation_split_percentage` when loading datasets from local files. This option already worked when loading standard HF datasets but was missing for local files. ## Who can review? @patrickvonplaten @patil-suraj
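A minimal sketch (not the PR's exact diff) of how a validation split can be carved out of a local file with datasets' split-slicing API; the file name and percentage are illustrative:
```python
from datasets import load_dataset

validation_split_percentage = 5  # illustrative value for --validation_split_percentage
data_files = {"train": "corpus.txt"}  # assumed local text file

# take everything after the first N% as the training set
train_dataset = load_dataset(
    "text",
    data_files=data_files,
    split=f"train[{validation_split_percentage}%:]",
)
# take the first N% as the validation set
eval_dataset = load_dataset(
    "text",
    data_files=data_files,
    split=f"train[:{validation_split_percentage}%]",
)
```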
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12689/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12689", "html_url": "https://github.com/huggingface/transformers/pull/12689", "diff_url": "https://github.com/huggingface/transformers/pull/12689.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12689.patch", "merged_at": 1626781106000 }
https://api.github.com/repos/huggingface/transformers/issues/12688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12688/comments
https://api.github.com/repos/huggingface/transformers/issues/12688/events
https://github.com/huggingface/transformers/issues/12688
943,838,477
MDU6SXNzdWU5NDM4Mzg0Nzc=
12,688
[doc] parallelism - when to use which mode
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" }, { "id": 2690307185, "node_id": "MDU6TGFiZWwyNjkwMzA3MTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Performance", "name": "Performance", "color": "207F32", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "@BramVanroy, please have a look if this addresses your question and I will add it to the doc. It of course assumes that https://huggingface.co/transformers/master/parallelism.html has been read (hence the abbreviations).\r\n\r\nIf more information is needed please don't hesitate to say what you feel is missing and how things can be improved. Thank you.\r\n\r\nI just wasn't sure about single node / multi-gpu as I haven't played much with PP/TP on a single node.\r\n\r\n------------\r\n## Which Strategy To Use When\r\n\r\nHere is a very rough outlook at which parallelism strategy to use when. The first on the list is typically faster.\r\n\r\n**⇨ Single GPU**\r\n\r\n* Model fits onto a single GPU:\r\n\r\n 1. Normal use\r\n\r\n* Model doesn't fit onto a single GPU:\r\n\r\n 1. ZeRO + Offload CPU and optionally NVMe\r\n\r\n\r\n**⇨ Single Node / Multi-GPU**\r\n\r\n* Model fits onto a single GPU:\r\n\r\n 1. DDP - Distributed DP\r\n 2. ZeRO - may or may not be faster depending on the situation and configuration used\r\n\r\n* Model doesn't fit onto a single GPU:\r\n\r\n 1. ZeRO\r\n 2. TP\r\n 3. PP\r\n\r\n (not sure which one will be faster here - haven't done enough experiments)\r\n\r\n**⇨ Multi-Node / Multi-GPU**\r\n\r\n* When you have fast inter-node connectivity:\r\n\r\n 1. ZeRO - as it requires close to no modifications to the model\r\n 2. PP+TP+DP - less communications, but requires massive changes to the model\r\n\r\n* when you have slow inter-node connectivity:\r\n\r\n 1. DP+PP+TP+ZeRO\r\n", "This is already very useful for most people I think! I personally haven't even tried anything else but regular training and single node-muliti gpu DDP, but I can see how this small overview helps users. It makes it easier for them to \"choose what to do\".\r\n\r\nThanks!" ]
1,626
1,626
1,626
CONTRIBUTOR
null
# 🚀 Feature request Was asked to expand https://huggingface.co/transformers/master/parallelism.html to include recommendations on which mode to use when.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12688/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12687/comments
https://api.github.com/repos/huggingface/transformers/issues/12687/events
https://github.com/huggingface/transformers/pull/12687
943,819,529
MDExOlB1bGxSZXF1ZXN0Njg5Mzk0NTkx
12,687
Assert evaluation_strategy not no when load_best_model_at_end
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> I see that practically:\r\n>\r\n> load_best_model_at_end cancels out save_strategy=steps.\r\n> load_best_model_at_end has no impact on save_strategy=epoch.\r\n\r\nThat is not completely correct. One should also add that if `evaluation_strategy=steps`, a save is done every `eval_steps` and if `evaluation_strategy=epoch`, a save is done every epoch. Basically the model need to be saved every time there is an evaluation, to keep track of the best checkpoint.\r\n\r\nTo be honest, it makes absolutely no sense to use `--load_best_model_at_end` if the `evaluation_strategy` and `save_strategy` are not the same (and in the case of steps, with the same number of steps) so perhaps this is what the assert should be. The current implementation tries to avoid having the user input the same thing twice, but maybe it is too confusing.", "That works too.\r\n\r\nMy initial suggestion was to only flag to the user the silent override of `save_steps`, but if we can do better, then by all means let's do that!", "Superseded by #12786 " ]
1,626
1,651
1,626
COLLABORATOR
null
# What does this PR do? Since using `--load_best_model_at_end` overrides the `save_strategy` with the `evaluation_strategy`, this PR adds a defensive check to make sure that strategy is not "no" (otherwise nothing is ever saved). Fixes #12685
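A hedged sketch of the kind of check described above; the merged code in `TrainingArguments` may differ in wording and placement:
```python
from transformers.trainer_utils import IntervalStrategy

def validate_args(args):
    # --load_best_model_at_end needs evaluations to happen, otherwise there is
    # no "best" checkpoint to track and nothing is ever saved.
    if args.load_best_model_at_end and args.evaluation_strategy == IntervalStrategy.NO:
        raise ValueError(
            "--load_best_model_at_end requires an evaluation strategy other than 'no'."
        )
```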
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12687", "html_url": "https://github.com/huggingface/transformers/pull/12687", "diff_url": "https://github.com/huggingface/transformers/pull/12687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12687.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12686/comments
https://api.github.com/repos/huggingface/transformers/issues/12686/events
https://github.com/huggingface/transformers/issues/12686
943,804,153
MDU6SXNzdWU5NDM4MDQxNTM=
12,686
No docs for v2.3.0
{ "login": "alexcoca", "id": 30216068, "node_id": "MDQ6VXNlcjMwMjE2MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/30216068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcoca", "html_url": "https://github.com/alexcoca", "followers_url": "https://api.github.com/users/alexcoca/followers", "following_url": "https://api.github.com/users/alexcoca/following{/other_user}", "gists_url": "https://api.github.com/users/alexcoca/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcoca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcoca/subscriptions", "organizations_url": "https://api.github.com/users/alexcoca/orgs", "repos_url": "https://api.github.com/users/alexcoca/repos", "events_url": "https://api.github.com/users/alexcoca/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcoca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger , I think all docs up to 2.9 are gone (checked 2.8 randomly and it was missing) so there might be a broader isssue.\r\n\r\n", "It might be linked to use more recent versions of sphinx when building it, though I'm not sure. This is not a priority for us, especially for such an older version (it would be different if the docs of the last major version were down), so I don't think anyone on our side will investigate this further.", "@sgugger, you are making a valid point. \r\n\r\nHowever, I just wanted to highlight that _a lot_ of research hinges on these older versions as, unfortunately, people do not maintain their research code once it's out there. It would be helpful if the docs did not disappear so we can still work with others' legacy code when we have to. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info - `transformers` version: 2.3.0 ### Who can help Documentation: @sgugger ## To reproduce Steps to reproduce the behavior: Click [here](https://huggingface.co/transformers/v2.3.0/model_doc/gpt2.html#gpt2doubleheadsmodel) and see there are no docs for 2.3.0. ## Expected behavior Documentation should be displayed when clicking [here](https://huggingface.co/transformers/v2.3.0/model_doc/gpt2.html#gpt2doubleheadsmodel)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12686/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12685/comments
https://api.github.com/repos/huggingface/transformers/issues/12685/events
https://github.com/huggingface/transformers/issues/12685
943,727,179
MDU6SXNzdWU5NDM3MjcxNzk=
12,685
[trainer] `--load_best_model_at_end` silently turns off `--save_steps` settings
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, as said in that comment, I think it's reasonable if we raise an error if `--load_best_model_at_end` is set and `--evaluation_strategy` is \"no\" since there is no \"best model\" to pick from in that case. I can do it later today if you want.", "I'm still not 100% clear on how this feature's reliance on eval affects saving checkpoints, but if it solves the problem that's good enough for me.\r\n\r\nAbsolutely no rush on this one.\r\n\r\nThank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hasn't this one been resolved already?", "Yes, this was fixed by #12786 in the end." ]
1,626
1,630
1,630
CONTRIBUTOR
null
Splitting off from https://github.com/huggingface/transformers/pull/12477#discussion_r668326212 Currently `--load_best_model_at_end` silently turns off the `--save_steps` settings when `--do_eval` is off (i.e. when `--evaluation_strategy` is `"no"`; setting it to any other value automatically turns `--do_eval` on). The proposal is to assert if `--load_best_model_at_end` is set and `--evaluation_strategy` is `"no"`. Reproducible test: ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --warmup_steps 50 --max_train_samples 50 --save_steps 1 ``` which saves checkpoints. Then adding `--load_best_model_at_end` stops saving those. @sgugger.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12685/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12684/comments
https://api.github.com/repos/huggingface/transformers/issues/12684/events
https://github.com/huggingface/transformers/pull/12684
943,640,220
MDExOlB1bGxSZXF1ZXN0Njg5MjQyNDI1
12,684
Add timeout to CI.
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Adds a global timeout of 60 seconds for non-slow tests and a global timeout of 5 minutes for slow tests. These can be adjusted later on, but they prevent the two currently hanging suites, and merging this is important to get feedback on the current coverage. I've re-enabled the `-v` option on `pytest`, as this was instrumental in discovering the failing test and would have saved me a lot of time had it been activated by default. I'm also removing the `pytest-sugar` dependency because, even if it is a nice QOL improvement, it was detrimental to the discoverability of the hanging test.
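A hedged sketch of how such global timeouts can be wired up with the pytest-timeout plugin (`pip install pytest-timeout`); the exact values and CI configuration in this PR may differ:
```python
# conftest.py
import os

import pytest

def pytest_collection_modifyitems(config, items):
    # 5 minutes per test for slow runs, 60 seconds otherwise (illustrative values)
    timeout = 300 if os.getenv("RUN_SLOW") else 60
    for item in items:
        item.add_marker(pytest.mark.timeout(timeout))
```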
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12684/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12684", "html_url": "https://github.com/huggingface/transformers/pull/12684", "diff_url": "https://github.com/huggingface/transformers/pull/12684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12684.patch", "merged_at": 1626203598000 }
https://api.github.com/repos/huggingface/transformers/issues/12683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12683/comments
https://api.github.com/repos/huggingface/transformers/issues/12683/events
https://github.com/huggingface/transformers/issues/12683
943,621,871
MDU6SXNzdWU5NDM2MjE4NzE=
12,683
confusing description in prepare_seq2seq_batch of MBart
{ "login": "XuhuiZhou", "id": 20436061, "node_id": "MDQ6VXNlcjIwNDM2MDYx", "avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuhuiZhou", "html_url": "https://github.com/XuhuiZhou", "followers_url": "https://api.github.com/users/XuhuiZhou/followers", "following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}", "gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions", "organizations_url": "https://api.github.com/users/XuhuiZhou/orgs", "repos_url": "https://api.github.com/users/XuhuiZhou/repos", "events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}", "received_events_url": "https://api.github.com/users/XuhuiZhou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @XuhuiZhou, the `prepare_seq2seq_batch` method is now deprecated and the description is a bit outdated.\r\nwe don't recommend using it anymore. You could refer to this section to see how to prepare data for mbart-50 https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart-50", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Information The model that I am using is `MBart-50`. In the description of `prepare_seq2seq_batch`, it says _Prepare model inputs for translation. For best performance, translate one sentence at a time._ Does this mean we should not batch inputs if we want to obtain the best performance? I am curious why this would be the case, since the paper itself does not mention it. ## Expected behavior The performance should be the same whether or not batching is used. @patil-suraj
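For reference, a hedged sketch of the recommended replacement for `prepare_seq2seq_batch`, following the MBart-50 training section linked in the comments (checkpoint and sentences are illustrative):
```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO"
)
src_text = "UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"

# source side is tokenized normally; target side inside the context manager
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    model_inputs["labels"] = tokenizer(tgt_text, return_tensors="pt").input_ids
```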
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12683/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12682/comments
https://api.github.com/repos/huggingface/transformers/issues/12682/events
https://github.com/huggingface/transformers/pull/12682
943,582,808
MDExOlB1bGxSZXF1ZXN0Njg5MTk0Mjk1
12,682
Fix minor docstring typos.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? Fix minor docstring typos in #12664 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12682/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12682", "html_url": "https://github.com/huggingface/transformers/pull/12682", "diff_url": "https://github.com/huggingface/transformers/pull/12682.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12682.patch", "merged_at": 1626192496000 }
https://api.github.com/repos/huggingface/transformers/issues/12681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12681/comments
https://api.github.com/repos/huggingface/transformers/issues/12681/events
https://github.com/huggingface/transformers/issues/12681
943,411,845
MDU6SXNzdWU5NDM0MTE4NDU=
12,681
Flax - Loading pretrained model overwrites weights of different shapes
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This should be fixed by the work in #12664 ", "Closing because it was fixed" ]
1,626
1,626
1,626
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master - Platform: Ubuntu - Python version: 3.9 ### Who can help @patil-suraj @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Custom FlaxBart The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a custom model by subclassing - just change the output shape (lm_head & final_logits_bias) 2. use `CustomModel.from_pretrained('facebook/bart-large-c')` 3. check `model.params['final_logits_bias'].shape`; it will come from the pretrained model <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The shape of the weights should be checked prior to being overwritten. Right now my approach is: * load the pretrained model * init the custom model from config * manually update the weights needed
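A hedged sketch of the manual-update workaround described above, reproducing the shape mismatch by changing the vocabulary size (checkpoint name and config change are illustrative):
```python
from flax.core.frozen_dict import freeze, unfreeze
from flax.traverse_util import flatten_dict, unflatten_dict
from transformers import BartConfig, FlaxBartForConditionalGeneration

pretrained = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")

# fresh model whose lm_head / final_logits_bias have a different shape
config = BartConfig.from_pretrained("facebook/bart-base", vocab_size=60000)
custom = FlaxBartForConditionalGeneration(config)

flat_src = flatten_dict(unfreeze(pretrained.params))
flat_dst = flatten_dict(unfreeze(custom.params))
for key, value in flat_src.items():
    # copy only the weights whose shapes match the new architecture
    if key in flat_dst and flat_dst[key].shape == value.shape:
        flat_dst[key] = value
custom.params = freeze(unflatten_dict(flat_dst))
```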
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12681/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12680/comments
https://api.github.com/repos/huggingface/transformers/issues/12680/events
https://github.com/huggingface/transformers/issues/12680
943,349,548
MDU6SXNzdWU5NDMzNDk1NDg=
12,680
Running out of memory when resume training.
{ "login": "thies1006", "id": 32954413, "node_id": "MDQ6VXNlcjMyOTU0NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/32954413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thies1006", "html_url": "https://github.com/thies1006", "followers_url": "https://api.github.com/users/thies1006/followers", "following_url": "https://api.github.com/users/thies1006/following{/other_user}", "gists_url": "https://api.github.com/users/thies1006/gists{/gist_id}", "starred_url": "https://api.github.com/users/thies1006/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thies1006/subscriptions", "organizations_url": "https://api.github.com/users/thies1006/orgs", "repos_url": "https://api.github.com/users/thies1006/repos", "events_url": "https://api.github.com/users/thies1006/events{/privacy}", "received_events_url": "https://api.github.com/users/thies1006/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Thank you for the detailed report, @thies1006 \r\n\r\nI suspect that at some point we have the model allocated more than once.\r\n\r\nI will profile the memory usage and get back to you with the findings.\r\n\r\nI'm glad to hear that meanwhile you have a workaround.", "So first I see our non-deepspeed checkpoint-loading is inefficient CPU memory-wise\r\n\r\n```\r\n# save\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \" --max_train_samples 50 --save_steps 1 --skip_memory_metrics 0\r\n\r\n# load:\r\nexport BS=16; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \" --max_train_samples 50 --save_steps 1 --skip_memory_metrics 0 --resume_from_checkpoint output_dir/checkpoint-1\r\n```\r\n\r\n```\r\n# save\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = -153MB\r\n init_mem_cpu_peaked_delta = 152MB\r\n init_mem_gpu_alloc_delta = 230MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_loss = 2.9967\r\n train_mem_cpu_alloc_delta = 1324MB\r\n train_mem_cpu_peaked_delta = 125MB\r\n train_mem_gpu_alloc_delta = 933MB\r\n train_mem_gpu_peaked_delta = 355MB\r\n train_runtime = 0:00:03.47\r\n train_samples = 50\r\n train_samples_per_second = 14.386\r\n train_steps_per_second = 0.575\r\n```\r\n\r\n\r\n```\r\n# load\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = -153MB\r\n init_mem_cpu_peaked_delta = 152MB\r\n init_mem_gpu_alloc_delta = 230MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_loss = 1.4817\r\n train_mem_cpu_alloc_delta = 1552MB\r\n train_mem_cpu_peaked_delta = 124MB\r\n train_mem_gpu_alloc_delta = 931MB\r\n train_mem_gpu_peaked_delta = 228MB\r\n train_runtime = 0:00:03.45\r\n train_samples = 50\r\n train_samples_per_second = 14.472\r\n train_steps_per_second = 0.579\r\n```\r\n\r\nAs you can see the checkpoint loading takes ~225MB more:\r\n```\r\n- train_mem_cpu_alloc_delta = 1324MB\r\n+ train_mem_cpu_alloc_delta = 1552MB\r\n```\r\nwhich is exactly the size of the t5-small (230MB) model.\r\n\r\nThat is at some point it keeps 2 full copies of the model in CPU memory.\r\n\r\ncc: @sgugger \r\n\r\nSo the issue might not be in deepspeed, but will check that next.\r\n\r\n", "Oh that is weird. At the top of my mind the first culprit could be the `state_dict` we loaded that is not release by the `Trainer` for some reason. 
If you add a `del state_dict` on [this line](https://github.com/huggingface/transformers/blob/a18a17d2b6357321279190963765085a0ef4d466/src/transformers/trainer.py#L1078) does it release that copy? (Can't fully test right now which is why I'm asking you.)", "Yes, that did the trick! It's the same memory usage now. Applied here: https://github.com/huggingface/transformers/pull/12718", "So back to the deepspeed side of this Issue. I wasn't able to see the problem with `t5-small`, but I can see it clearly with `t5-base`\r\n\r\n\r\n```\r\n# save\r\nBS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir --overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --learning_rate 3e-3 --logging_steps 0 --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \"translate English to Romanian: \" --max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1 --skip_memory_metrics 0\r\n\r\n# load:\r\nBS=16; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir --overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --learning_rate 3e-3 --logging_steps 0 --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \"translate English to Romanian: \" --max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1 --skip_memory_metrics 0 --resume_from_checkpoint output_dir/checkpoint-1\r\n```\r\n\r\n```\r\n# save\r\n***** train metrics *****\r\n train_mem_cpu_alloc_delta = 5542MB\r\n train_mem_cpu_peaked_delta = 424MB\r\n train_mem_gpu_alloc_delta = -394MB\r\n train_mem_gpu_peaked_delta = 1259MB\r\n```\r\n\r\n```\r\n# load\r\n***** train metrics *****\r\n train_mem_cpu_alloc_delta = 5109MB\r\n train_mem_cpu_peaked_delta = 1944MB\r\n train_mem_gpu_alloc_delta = -394MB\r\n train_mem_gpu_peaked_delta = 804MB\r\n```\r\n\r\nSo it's easy to see that at some point there is a temporary jump by 1.1GB as compared to the normal run - t5-base is about 850MB. Which most likely means there are several copies of it loaded into CPU memory at some point.\r\n", "OK, so I did some profiling with an even larger model: t5-large (2.7GB) so it's easier to see what's happening.\r\n\r\n**We need to take into account that Deepspeed needs to load optimizer states, which non-Deepspeed run doesn't do! 
And that makes a huge difference.**\r\n\r\nSo our model has close to 0.75B params:\r\n```\r\n$ python -c 'from transformers import T5ForConditionalGeneration; model = T5ForConditionalGeneration.from_pretrained(\"t5-large\"); print(sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values()))'\r\n737,668,096 # 737M params\r\n```\r\nNow the checkpoint contains 4 bytes for fp32 weights and 8 bytes for optimizer, 12 in total:\r\n```\r\npython -c 'print(f\"{737668096*12 / 2**30 :0.2f}GB\")'\r\n8.24GB\r\n```\r\nIndeed if we check the checkpoint folder:\r\n```\r\ndu -sh output_dir/checkpoint-1/global_step1/\r\n8.3G output_dir/checkpoint-1/global_step1/\r\n```\r\n\r\nAnd this is what accounts for a huge peak CPU RAM that gets temporarily used when the checkpoint is loaded.\r\n\r\nSo as you indeed figured out if you bypass the checkpoint loading and load just the weights you extracted with `zero_to_fp32.py` you have no problem with temporarily needing more CPU memory than required to run the normal run.\r\n\r\nIn general this should be possible to fix, by not allocating the model until the checkpoint loading (see https://github.com/huggingface/transformers/issues/12274 - which was just made available in pytorch) and probably something similar with the optimizer. But I can't promise you if and when this will happen. This is very important I think!\r\n\r\nPerhaps a simpler solution until then would be to allocate some swap memory on an nvme drive?\r\n\r\nPlease let me know if this is helpful.\r\n", "Thank you very much for the insights @stas00 !! I just wanted to bring this up because the order of magnitude was surprising to me. As I understand you, model and optimizer states are allocating memory twice (model init and checkpoint loading).\r\n\r\nMy checkpoint has the size (for Blenderbot-9B):\r\n```\r\ndu -sh /tmp/tst-summarization/checkpoint-10/global_step10/\r\n106G\t/tmp/tst-summarization/checkpoint-10/global_step10/\r\n```\r\n\r\nI also tried with the Blenderbot-3B, there I get 61GB size of the checkpoint folder and cpu ram consumption peaks at about 330GB (short peak, as you said). \r\n\r\nSo, in summary, I'm still wondering about the numbers. But as I understand you, this is normal and already addressed. I'll try with the nvme btw, thanks for the hint! \r\n\r\nI think we can close this for now.", "The main issue is loading optimizer states which are 2x bigger than the fp32 model.\r\n\r\nActually, I thought of a possible solution last night. This is staggered checkpoint loading. \r\n\r\nSo if you have 4 gpus on a node, now you get the whole checkpoint folder loaded into CPU at once. However what if we loaded one gpu at a time! That would require 1/4th extra CPU memory as when one gpu finished loading it will return the CPU memory back to the pool.\r\n\r\nI think this approach should solve your limitation. Let me try to implement this on the deepspeed side.", "After trying to implement staggered load, I discovered that each process loads zero checkpoints for all ranks in deepspeed, \r\nLet's continue this discussion over at Deepspeed as it's not really a transformers' issue\r\nhttps://github.com/microsoft/DeepSpeed/issues/1236\r\n" ]
1,626
1,626
1,626
NONE
null
This might be a similar problem to #11317: the node runs out of CPU memory (512GB). To reproduce: (i) ``` deepspeed --hostfile myhostfile \ ${_PATH}/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path hyunwoongko/blenderbot-9B \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --deepspeed ${_PATH}/tests/deepspeed/ds_config_zero3.json \ --logging_steps 1 \ --fp16 \ --overwrite_output_dir \ --save_steps 10 \ --gradient_accumulation_steps 1 \ --evaluation_strategy="steps" \ --max_train_samples 10024 \ --max_eval_samples 32 \ --max_source_length 128 --max_target_length 128 \ --eval_steps 5 ``` (ii) Afterwards, in order to resume, I use the option `--resume_from_checkpoint /tmp/tst-summarization/checkpoint-10`. A workaround is to export the FP32 weights using the script `zero_to_fp32.py` as described in [https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out](https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out) and restart directly from `pytorch_model.bin`; nevertheless, it would be better to resume directly from the DeepSpeed checkpoint, if possible. torch: 1.8.1+cu111 transformers: 4.9.0.dev0 deepspeed: 0.4.4+d1a7a55 log: [log.txt](https://github.com/huggingface/transformers/files/6808841/log.txt) @stas00
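A hedged sketch of the fp32-extraction workaround mentioned above; `get_fp32_state_dict_from_zero_checkpoint` is the API counterpart of the `zero_to_fp32.py` script shipped in the checkpoint folder (paths come from the run above; availability depends on the DeepSpeed version):
```python
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

checkpoint_dir = "/tmp/tst-summarization/checkpoint-10"
# consolidates the sharded ZeRO weight shards into a single fp32 state dict
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)
torch.save(state_dict, f"{checkpoint_dir}/pytorch_model.bin")
```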
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12680/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12679/comments
https://api.github.com/repos/huggingface/transformers/issues/12679/events
https://github.com/huggingface/transformers/pull/12679
943,311,780
MDExOlB1bGxSZXF1ZXN0Njg4OTUzNDA4
12,679
Fix multiple choice doc examples
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do? The multiple choice example docstring was fixed for PyTorch but not for Flax and TensorFlow. This PR addresses that.
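For context, a hedged illustration of the multiple-choice usage pattern the fixed docstrings demonstrate for the Flax models (checkpoint and sentences are just examples):
```python
from transformers import AutoTokenizer, FlaxRobertaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForMultipleChoice.from_pretrained("roberta-base")

prompt = "In Italy, pizza served in formal settings is presented"
choices = ["whole, unsliced.", "cut into slices."]

# encode (prompt, choice) pairs, then add the batch dimension expected by the model
encoding = tokenizer([prompt, prompt], choices, return_tensors="np", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits  # shape (1, num_choices)
```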
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12679", "html_url": "https://github.com/huggingface/transformers/pull/12679", "diff_url": "https://github.com/huggingface/transformers/pull/12679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12679.patch", "merged_at": 1626248118000 }
https://api.github.com/repos/huggingface/transformers/issues/12678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12678/comments
https://api.github.com/repos/huggingface/transformers/issues/12678/events
https://github.com/huggingface/transformers/issues/12678
943,272,348
MDU6SXNzdWU5NDMyNzIzNDg=
12,678
Mask prediction does not work with whitespace before mask token
{ "login": "temurchichua", "id": 69351709, "node_id": "MDQ6VXNlcjY5MzUxNzA5", "avatar_url": "https://avatars.githubusercontent.com/u/69351709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/temurchichua", "html_url": "https://github.com/temurchichua", "followers_url": "https://api.github.com/users/temurchichua/followers", "following_url": "https://api.github.com/users/temurchichua/following{/other_user}", "gists_url": "https://api.github.com/users/temurchichua/gists{/gist_id}", "starred_url": "https://api.github.com/users/temurchichua/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/temurchichua/subscriptions", "organizations_url": "https://api.github.com/users/temurchichua/orgs", "repos_url": "https://api.github.com/users/temurchichua/repos", "events_url": "https://api.github.com/users/temurchichua/events{/privacy}", "received_events_url": "https://api.github.com/users/temurchichua/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure! It's not very easy to avoid before pretraining (it depends on how you set up the data collator), but if you know how the special tokens work in tokenizers and transformers you can easily fix it next time.\r\n\r\nIf you notice that `\"word<mask>\"` works well, but `\"word <mask>\"` doesn't then this means that during pretraining your model was trained on data that was processed to \"word<mask>\" and ideally you would like all inputs to be processed this way when using your pretrained model.\r\n\r\nTo do so we need to be sure that both `tokenizer(\"word<mask>\")` and `tokenizer(\"word <mask>\")` get processed to the same `input_ids` .\r\nE.g. compare (new):\r\n\r\n```python\r\nfrom transformers import RobertaTokenizerFast\r\n\r\ntok = RobertaTokenizerFast.from_pretrained(\"flax-community/qartvelian-roberta-base-fix\")\r\ntok.decode(tok.encode(\"Hello <mask>\"))\r\n```\r\nto (old)\r\n```python\r\nfrom transformers import RobertaTokenizerFast\r\n\r\ntok = RobertaTokenizerFast.from_pretrained(\"Temur/qartvelian-roberta-base\")\r\ntok.decode(tok.encode(\"Hello <mask>\"))\r\n```\r\n-> the \"new\" tokenizer should strip away the whitespace while the \"old\" one doesn't.\r\n\r\nIf you look into your tokenizer file here: https://huggingface.co/Temur/qartvelian-roberta-base/raw/main/tokenizer.json you can see that lstrip for leftstrip for the mask_token is set to False while in https://huggingface.co/flax-community/qartvelian-roberta-base-fix/raw/main/tokenizer.json the attribute lstrip of the <mask_token> dict is set to True => so the new tokenizier strips away the left space of all <mask> tokens.\r\n\r\nDoing this change is quite easy, all you have to do is:\r\n\r\n```python\r\nfrom transformers import RobertaTokenizerFast, AddedToken\r\n\r\ntok = RobertaTokenizerFast.from_pretrained(\"Temur/qartvelian-roberta-base\")\r\ntok.mask_token = AddedToken(\"<mask>\", lstrip=True)\r\n```" ]
1,626
1,626
1,626
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: No - Using distributed or parallel set-up in script?: <fill in> ### Who can help @patrickvonplaten Model hub: Path of the Repository on the hub: https://huggingface.co/Temur/qartvelian-roberta-base ## Information The model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python from transformers import pipeline, AutoTokenizer, RobertaForMaskedLM tokenizer = AutoTokenizer.from_pretrained("./qartvelian-roberta-base") model = RobertaForMaskedLM.from_pretrained("./qartvelian-roberta-base", from_flax=True) unmask = pipeline("fill-mask", model=model, tokenizer=tokenizer) unmask("ჩემი სამშობლოა<mask>.") ``` I'm getting the right result when I'm passing the string like it's shown in the following snippet. But If I pass a string with whitespace before the `<mask>` I'm getting weird results. ### How I trained the tokenizer: This is the script I have used to train the Tokenizer: ```python # Import libraries from pathlib import Path from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer from datasets import load_dataset # preparing files model_dir = "./qartvelian-roberta-base" # ${MODEL_DIR} train_paths = [str(x) for x in Path("./corpuses/").glob("**/*.txt")] test_path = train_paths.pop(0) print(f"training from: {train_paths}\ntesting from: {test_path}") # load dataset dataset = load_dataset('text', data_files={'train': 'corpuses/pre_processed.txt', 'validation': 'corpuses/validate.txt'}) train_dataset = dataset['train'] # Instantiate tokenizer tokenizer = ByteLevelBPETokenizer() # Batch Generator def batch_iterator(batch_size=1000): for i in range(0, len(train_dataset), batch_size): yield train_dataset[i: i + batch_size]["text"] # Customized training tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) # Save files to disk tokenizer.save(f"./qartvelian-roberta-base/tokenizer.json") ``` Any advice on how to fix the tokenizer? I'd also love to know how to avoid this problem at the pretraining stage.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12677/comments
https://api.github.com/repos/huggingface/transformers/issues/12677/events
https://github.com/huggingface/transformers/issues/12677
943,225,917
MDU6SXNzdWU5NDMyMjU5MTc=
12,677
Processing custom wikipedia data with clm training script throws error when "blockifying" data
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I printed the offending file (examples) which is a dict with the following keys\r\n```\r\ndict_keys(['attention_mask', 'input_ids', 'text'])\r\n```\r\nfor the data. ", "I see! Can you try replacing:\r\n\r\n```python\r\n tokenized_datasets = dataset.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n # remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n )\r\n```\r\n\r\nby\r\n\r\n```python\r\n tokenized_datasets = dataset.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n )\r\n```\r\n\r\n?", "It seems to be loading correctly. I will wait to make sure the training starts but your fix seemed to resolve it. Thank you!", " File \"./run_clm_flax.py\", line 431, in main\r\n raise ValueError(\"--do_train requires a train dataset\")\r\nValueError: --do_train requires a train dataset\r\nNew error when training" ]
1,626
1,626
1,626
NONE
null
I'm loading a Wikipedia dataset from the Hugging Face `datasets` library to run the CLM script. ```python from datasets import load_dataset import pdb def load_and_clean_oscar(): dataset = load_dataset('oscar', 'unshuffled_deduplicated_sv', split="train") dataset = dataset.remove_columns(['id']) print(dataset) pdb.set_trace() filtered_dataset = dataset.map(filter_oscar) filtered_dataset[:3] print(filtered_dataset[:3]) pdb.set_trace() return filtered_dataset def filter_oscar(batch): batch["text"] = " ".join(batch["text"].split("\n")) return batch def load_and_clean_wiki(): dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train") dataset = dataset.remove_columns(['wikidata_id', 'version_id']) filtered_dataset = dataset.map(filter_wikipedia) # filtered_dataset[:3] # print(filtered_dataset[:3]) return filtered_dataset def filter_wikipedia(batch): batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n")) batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n")) batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n")) batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n")) batch["text"] = " ".join(batch["text"].split("_NEWLINE_")) batch["text"] = " ".join(batch["text"].split("\xa0")) return batch ``` CLM script: ```python #!/usr/bin/env python # coding=utf-8 # Copyright 2021 The HuggingFace Team All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Pre-training/Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset. Here is the full list of checkpoints on the hub that can be fine-tuned by this script: https://huggingface.co/models?filter=causal-lm """ # You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments. import logging import math import os import sys import time from dataclasses import dataclass, field from pathlib import Path from typing import Callable, Optional import datasets from datasets import Dataset, load_dataset from tqdm import tqdm import jax import jax.numpy as jnp import optax import transformers from load_from_hf import load_and_clean_wiki from flax import jax_utils, traverse_util from flax.jax_utils import unreplicate from flax.training import train_state from flax.training.common_utils import get_metrics, onehot, shard, shard_prng_key from transformers import ( CONFIG_MAPPING, FLAX_MODEL_FOR_CAUSAL_LM_MAPPING, AutoConfig, AutoTokenizer, FlaxAutoModelForCausalLM, HfArgumentParser, TrainingArguments, is_tensorboard_available, ) from transformers.testing_utils import CaptureLogger logger = logging.getLogger(__name__) MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_CAUSAL_LM_MAPPING.keys()) MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. """ model_name_or_path: Optional[str] = field( default=None, metadata={ "help": "The model checkpoint for weights initialization." "Don't set if you want to train a model from scratch." }, ) model_type: Optional[str] = field( default=None, metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) dtype: Optional[str] = field( default="float32", metadata={ "help": "Floating-point format in which the model weights should be initialized and trained. Choose one of `[float32, float16, bfloat16]`." }, ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) validation_file: Optional[str] = field( default=None, metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) validation_split_percentage: Optional[int] = field( default=5, metadata={ "help": "The percentage of the train set used as validation set in case there's no validation split" }, ) block_size: Optional[int] = field( default=None, metadata={ "help": "Optional input sequence length after tokenization. " "The training dataset will be truncated in block of this size for training. " "Default to the model max input length for single sentence inputs (take into account special tokens)." }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) def __post_init__(self): if self.dataset_name is None and self.train_file is None and self.validation_file is None: raise ValueError("Need either a dataset name or a training/validation file.") else: if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." if self.validation_file is not None: extension = self.validation_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file." class TrainState(train_state.TrainState): dropout_rng: jnp.ndarray def replicate(self): return jax_utils.replicate(self).replace(dropout_rng=shard_prng_key(self.dropout_rng)) def data_loader(rng: jax.random.PRNGKey, dataset: Dataset, batch_size: int, shuffle: bool = False): """ Returns batches of size `batch_size` from truncated `dataset`, sharded over all local devices. Shuffle batches if `shuffle` is `True`. """ steps_per_epoch = len(dataset) // batch_size if shuffle: batch_idx = jax.random.permutation(rng, len(dataset)) else: batch_idx = jnp.arange(len(dataset)) batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch. batch_idx = batch_idx.reshape((steps_per_epoch, batch_size)) for idx in batch_idx: batch = dataset[idx] batch = {k: jnp.array(v) for k, v in batch.items()} batch = shard(batch) yield batch def write_train_metric(summary_writer, train_metrics, train_time, step): summary_writer.scalar("train_time", train_time, step) train_metrics = get_metrics(train_metrics) for key, vals in train_metrics.items(): tag = f"train_{key}" for i, val in enumerate(vals): summary_writer.scalar(tag, val, step - len(vals) + i + 1) def write_eval_metric(summary_writer, eval_metrics, step): for metric_name, value in eval_metrics.items(): summary_writer.scalar(f"eval_{metric_name}", value, step) def create_learning_rate_fn( train_ds_size: int, train_batch_size: int, num_train_epochs: int, num_warmup_steps: int, learning_rate: float ) -> Callable[[int], jnp.array]: """Returns a linear warmup, linear_decay learning rate function.""" steps_per_epoch = train_ds_size // train_batch_size num_train_steps = steps_per_epoch * num_train_epochs warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps) decay_fn = optax.linear_schedule( init_value=learning_rate, end_value=0, transition_steps=num_train_steps - num_warmup_steps ) schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps]) return schedule_fn def main(): # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() if ( os.path.exists(training_args.output_dir) and os.listdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir ): raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty." "Use --overwrite_output_dir to overcome." ) # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) # Setup logging, we only want one process per machine to log things on the screen. logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) if jax.process_index() == 0: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_info() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() # Set the verbosity to info of the Transformers logger (on main process only): logger.info(f"Training/evaluation parameters {training_args}") # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ # (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called # 'text' is found. You can easily tweak this behavior (see below). # # In distributed training, the load_dataset function guarantees that only one local process can concurrently # download the dataset. if data_args.dataset_name is not None: # loading the wiki data from the load and clean file dataset = load_and_clean_wiki() print("the dataset is", dataset) # if "validation" not in dataset.keys(): # dataset["validation"] = load_dataset( # data_args.dataset_name, # data_args.dataset_config_name, # split=f"train[:{data_args.validation_split_percentage}%]", # cache_dir=model_args.cache_dir, # ) # dataset["train"] = load_dataset( # data_args.dataset_name, # data_args.dataset_config_name, # split=f"train[{data_args.validation_split_percentage}%:]", # cache_dir=model_args.cache_dir, # ) else: data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.train_file.split(".")[-1] if extension == "txt": extension = "text" dataset = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir) # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at # https://huggingface.co/docs/datasets/loading_datasets.html. # Load pretrained model and tokenizer # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. if model_args.config_name: config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir) elif model_args.model_name_or_path: config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir) else: config = CONFIG_MAPPING[model_args.model_type]() logger.warning("You are instantiating a new config instance from scratch.") if model_args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer ) elif model_args.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer ) else: raise ValueError( "You are instantiating a new tokenizer from scratch. This is not supported by this script." "You can do it from another script, save it, and load it from here, using --tokenizer_name." ) if model_args.model_name_or_path: model = FlaxAutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, config=config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype) ) else: model = FlaxAutoModelForCausalLM.from_config( config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype) ) # Preprocessing the datasets. # First we tokenize all the texts. # if training_args.do_train: # column_names = dataset["train"].column_names # else: # column_names = dataset["validation"].column_names text_column_name = "text" # since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base") def tokenize_function(examples): with CaptureLogger(tok_logger) as cl: output = tokenizer(examples[text_column_name]) # clm input could be much much longer than block_size if "Token indices sequence length is longer than the" in cl.out: tok_logger.warning( "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model." ) return output tokenized_datasets = dataset.map( tokenize_function, batched=True, num_proc=data_args.preprocessing_num_workers, # remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, ) if data_args.block_size is None: block_size = tokenizer.model_max_length if block_size > config.max_position_embeddings: logger.warning( f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). " "Picking 1024 instead. You can change that default value by passing --block_size xxx." ) block_size = 1024 else: if data_args.block_size > tokenizer.model_max_length: logger.warning( f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model" f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}." ) block_size = min(data_args.block_size, tokenizer.model_max_length) # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size. def group_texts(examples): # Concatenate all texts. # print("the examples are", examples) # import pdb # pdb.set_trace() concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. if total_length >= block_size: total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower # to preprocess. # # To speed up this part, we use multiprocessing. See the documentation of the map method for more information: # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map lm_datasets = tokenized_datasets.map( group_texts, batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, ) if training_args.do_train: if "train" not in tokenized_datasets: raise ValueError("--do_train requires a train dataset") train_dataset = lm_datasets["train"] if data_args.max_train_samples is not None: train_dataset = train_dataset.select(range(data_args.max_train_samples)) if training_args.do_eval: if "validation" not in tokenized_datasets: raise ValueError("--do_eval requires a validation dataset") eval_dataset = lm_datasets["validation"] if data_args.max_eval_samples is not None: eval_dataset = eval_dataset.select(range(data_args.max_eval_samples)) # Enable tensorboard only on the master node has_tensorboard = is_tensorboard_available() if has_tensorboard and jax.process_index() == 0: try: from flax.metrics.tensorboard import SummaryWriter summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir)) except ImportError as ie: has_tensorboard = False logger.warning( f"Unable to display metrics through TensorBoard because some package are not installed: {ie}" ) else: logger.warning( "Unable to display metrics through TensorBoard because the package is not installed: " "Please run pip install tensorboard to enable." ) # Initialize our training rng = jax.random.PRNGKey(training_args.seed) rng, dropout_rng = jax.random.split(rng) # Store some constant num_epochs = int(training_args.num_train_epochs) train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count() eval_batch_size = int(training_args.per_device_eval_batch_size) * jax.device_count() steps_per_epoch = len(train_dataset) // train_batch_size total_train_steps = steps_per_epoch * num_epochs # Create learning rate schedule linear_decay_lr_schedule_fn = create_learning_rate_fn( len(train_dataset), train_batch_size, training_args.num_train_epochs, training_args.warmup_steps, training_args.learning_rate, ) # We use Optax's "masking" functionality to not apply weight decay # to bias and LayerNorm scale parameters. decay_mask_fn returns a # mask boolean with the same structure as the parameters. # The mask is True for parameters that should be decayed. # Note that this mask is specifically adapted for FlaxGPT2. # For other models, one should correct the layer norm parameter naming # accordingly. def decay_mask_fn(params): flat_params = traverse_util.flatten_dict(params) flat_mask = { path: (path[-1] != "bias" and path[-2:] not in [("ln_1", "scale"), ("ln_2", "scale"), ("ln_f", "scale")]) for path in flat_params } return traverse_util.unflatten_dict(flat_mask) # create adam optimizer if training_args.adafactor: # We use the default parameters here to initialize adafactor, # For more details about the parameters please check https://github.com/deepmind/optax/blob/ed02befef9bf81cbbf236be3d2b0e032e9ed4a40/optax/_src/alias.py#L74 optimizer = optax.adafactor( learning_rate=linear_decay_lr_schedule_fn, ) else: optimizer = optax.adamw( learning_rate=linear_decay_lr_schedule_fn, b1=training_args.adam_beta1, b2=training_args.adam_beta2, eps=training_args.adam_epsilon, weight_decay=training_args.weight_decay, mask=decay_mask_fn, ) # Setup train state state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng) def loss_fn(logits, labels): shift_logits = logits[..., :-1, :] shift_labels = labels[..., 1:] loss = optax.softmax_cross_entropy(shift_logits, onehot(shift_labels, shift_logits.shape[-1])) return loss.mean() # Define gradient update step fn def train_step(state, batch): dropout_rng, new_dropout_rng = jax.random.split(state.dropout_rng) def compute_loss(params): labels = batch.pop("labels") logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0] loss = loss_fn(logits, labels) return loss grad_fn = jax.value_and_grad(compute_loss) loss, grad = grad_fn(state.params) grad = jax.lax.pmean(grad, "batch") new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng) metrics = {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)} metrics = jax.lax.pmean(metrics, axis_name="batch") return new_state, metrics # Define eval fn def eval_step(params, batch): labels = batch.pop("labels") logits = model(**batch, params=params, train=False)[0] loss = loss_fn(logits, labels) # summarize metrics metrics = {"loss": loss} metrics = jax.lax.pmean(metrics, axis_name="batch") return metrics # Create parallel version of the train and eval step p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,)) p_eval_step = jax.pmap(eval_step, "batch") # Replicate the train state on each device state = state.replicate() logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {num_epochs}") logger.info(f" Instantaneous batch size per device = {training_args.per_device_train_batch_size}") logger.info(f" Total train batch size (w. parallel & distributed) = {train_batch_size}") logger.info(f" Total optimization steps = {total_train_steps}") train_time = 0 train_metrics = [] epochs = tqdm(range(num_epochs), desc=f"Epoch ... (1/{num_epochs})", position=0) for epoch in epochs: # ======================== Training ================================ train_start = time.time() # Create sampling rng rng, input_rng = jax.random.split(rng) # Generate an epoch by shuffling sampling indices from the train dataset train_loader = data_loader(input_rng, train_dataset, train_batch_size, shuffle=True) steps_per_epoch = len(train_dataset) // train_batch_size # train for step in tqdm(range(steps_per_epoch), desc="Training...", position=1, leave=False): batch = next(train_loader) state, train_metric = p_train_step(state, batch) train_metrics.append(train_metric) cur_step = epoch * (len(train_dataset) // train_batch_size) + step if cur_step % training_args.logging_steps == 0 and cur_step > 0: # Save metrics train_metric = unreplicate(train_metric) train_time += time.time() - train_start if has_tensorboard and jax.process_index() == 0: write_train_metric(summary_writer, train_metrics, train_time, cur_step) epochs.write( f"Step... ({cur_step} | Loss: {train_metric['loss'].mean()}, Learning Rate: {train_metric['learning_rate'].mean()})" ) train_metrics = [] if cur_step % training_args.eval_steps == 0 and cur_step > 0: # ======================== Evaluating ============================== eval_metrics = [] eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size) eval_steps = len(eval_dataset) // eval_batch_size for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False): # Model forward batch = next(eval_loader) metrics = p_eval_step(state.params, batch) eval_metrics.append(metrics) # normalize eval metrics eval_metrics = get_metrics(eval_metrics) eval_metrics = jax.tree_map(jnp.mean, eval_metrics) try: eval_metrics["perplexity"] = math.exp(eval_metrics["loss"]) except OverflowError: eval_metrics["perplexity"] = float("inf") # Print metrics and update progress bar desc = f"Step... ({cur_step} | Eval Loss: {eval_metrics['loss']} | Eval Perplexity: {eval_metrics['perplexity']})" epochs.write(desc) epochs.desc = desc # Save metrics if has_tensorboard and jax.process_index() == 0: write_eval_metric(summary_writer, eval_metrics, cur_step) if cur_step % training_args.save_steps == 0 and cur_step > 0: # save checkpoint after each epoch and push checkpoint to the hub if jax.process_index() == 0: params = jax.device_get(unreplicate(state.params)) model.save_pretrained( training_args.output_dir, params=params, push_to_hub=training_args.push_to_hub, commit_message=f"Saving weights and logs of step {cur_step}", ) if __name__ == "__main__": main() ``` Error logs: ``` Traceback (most recent call last): File "./run_clm_flax.py", line 644, in <module> main() File "./run_clm_flax.py", line 422, in main lm_datasets = tokenized_datasets.map( File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1657, in map return self._map_single( File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2006, in _map_single batch = apply_function_on_filtered_inputs( File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "./run_clm_flax.py", line 401, in group_texts concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} File "./run_clm_flax.py", line 401, in <dictcomp> concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} TypeError: can only concatenate list (not "str") to list ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12677/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12676/comments
https://api.github.com/repos/huggingface/transformers/issues/12676/events
https://github.com/huggingface/transformers/pull/12676
943,156,004
MDExOlB1bGxSZXF1ZXN0Njg4ODEyNTg5
12,676
Wrong model is used in example, should be character instead of subword model
{ "login": "jsteggink", "id": 978411, "node_id": "MDQ6VXNlcjk3ODQxMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/978411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsteggink", "html_url": "https://github.com/jsteggink", "followers_url": "https://api.github.com/users/jsteggink/followers", "following_url": "https://api.github.com/users/jsteggink/following{/other_user}", "gists_url": "https://api.github.com/users/jsteggink/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsteggink/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsteggink/subscriptions", "organizations_url": "https://api.github.com/users/jsteggink/orgs", "repos_url": "https://api.github.com/users/jsteggink/repos", "events_url": "https://api.github.com/users/jsteggink/events{/privacy}", "received_events_url": "https://api.github.com/users/jsteggink/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's not easy to make a file the styler is happy with, haha." ]
1,626
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? A canine.rst fix. In the original Google repo for CANINE there was a mix-up in the model names in the README.md, which was fixed 2 weeks ago. Since the Transformers model was added before that fix, it probably resulted in the wrong model being used in this example in canine.rst (s = subword, c = character). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12676/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12676", "html_url": "https://github.com/huggingface/transformers/pull/12676", "diff_url": "https://github.com/huggingface/transformers/pull/12676.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12676.patch", "merged_at": 1626180027000 }
https://api.github.com/repos/huggingface/transformers/issues/12675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12675/comments
https://api.github.com/repos/huggingface/transformers/issues/12675/events
https://github.com/huggingface/transformers/pull/12675
943,148,664
MDExOlB1bGxSZXF1ZXN0Njg4ODA1OTY5
12,675
[WIP][examples/flax] add gradient accumulation
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Thanks a lot for adding this! That's super useful! It seems to require some bigger changes to core functionality to the script so I think we should be careful here. Also I'm starting to wonder whether the examples become to complicated to read with more and more functionality being added and whether we should maybe instead creating a new training script instead?\r\n\r\nAlso wouldn't it be better to use gradient accumulation functionality from `optax` such as https://optax.readthedocs.io/en/latest/api.html?highlight=ApplyEvery#optax.apply_every ? Think a lot of the code that is written here already exists in optax classes/functions no?\r\n\r\n@sgugger - I'd love to have your feedback on the PR as well", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "Hi, I've just checked the flax examples on master branch and it seems that gradient accumulation is still missing, so I'm coming back to this PR :)\r\n\r\n@patrickvonplaten mentioned to use `optax`, and I've found this (working?) implementation of gradient acc. for T5 MLM pre-training from @gsarti. This may could help here :hugs: \r\n\r\n\r\n\r\n" ]
1,626
1,648
null
MEMBER
null
# What does this PR do? Adds gradient accumulation in flax language modeling scripts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12675/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12675", "html_url": "https://github.com/huggingface/transformers/pull/12675", "diff_url": "https://github.com/huggingface/transformers/pull/12675.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12675.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12674/comments
https://api.github.com/repos/huggingface/transformers/issues/12674/events
https://github.com/huggingface/transformers/issues/12674
942,888,223
MDU6SXNzdWU5NDI4ODgyMjM=
12,674
Nothing
{ "login": "vivekvkashyap", "id": 58116635, "node_id": "MDQ6VXNlcjU4MTE2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/58116635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vivekvkashyap", "html_url": "https://github.com/vivekvkashyap", "followers_url": "https://api.github.com/users/vivekvkashyap/followers", "following_url": "https://api.github.com/users/vivekvkashyap/following{/other_user}", "gists_url": "https://api.github.com/users/vivekvkashyap/gists{/gist_id}", "starred_url": "https://api.github.com/users/vivekvkashyap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vivekvkashyap/subscriptions", "organizations_url": "https://api.github.com/users/vivekvkashyap/orgs", "repos_url": "https://api.github.com/users/vivekvkashyap/repos", "events_url": "https://api.github.com/users/vivekvkashyap/events{/privacy}", "received_events_url": "https://api.github.com/users/vivekvkashyap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
NONE
null
## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. ```python model = FlaxGPT2ForMultipleChoice.from_pretrained('gpt2') ``` Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12674/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12673/comments
https://api.github.com/repos/huggingface/transformers/issues/12673/events
https://github.com/huggingface/transformers/issues/12673
942,761,771
MDU6SXNzdWU5NDI3NjE3NzE=
12,673
Too Many kernels and embeddings were randomly initialized when loading Hugging Face GPT-2 Model
{ "login": "vivekvkashyap", "id": 58116635, "node_id": "MDQ6VXNlcjU4MTE2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/58116635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vivekvkashyap", "html_url": "https://github.com/vivekvkashyap", "followers_url": "https://api.github.com/users/vivekvkashyap/followers", "following_url": "https://api.github.com/users/vivekvkashyap/following{/other_user}", "gists_url": "https://api.github.com/users/vivekvkashyap/gists{/gist_id}", "starred_url": "https://api.github.com/users/vivekvkashyap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vivekvkashyap/subscriptions", "organizations_url": "https://api.github.com/users/vivekvkashyap/orgs", "repos_url": "https://api.github.com/users/vivekvkashyap/repos", "events_url": "https://api.github.com/users/vivekvkashyap/events{/privacy}", "received_events_url": "https://api.github.com/users/vivekvkashyap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, this is because you are using a different base model prefix `self.gpt2` to load the base model. To be able to add any heads and still be able to load the base model weights the pre-trained class expects the base model to have the same prefix.\r\n\r\nFor gpt2 it is `self.transformer`, changing `self.gpt2` to `self.transformer` should fix this.\r\n\r\nAlso, try to avoid posting screen-shots, it's usually better for us, if you post the warning/stack-trace as text. Thanks!" ]
1,626
1,626
1,626
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Colab - Jax version (CPU): 0.2.13 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @patil-suraj Models: Used the Hugging Face GPT-2 model for a multiple-choice task. Examples: ```python self.gpt2 = FlaxGPT2Model(config=self.config, dtype=self.dtype) ``` ## Information Model I am using: GPT2 The problem arises when using: * loading the model The task I am working on is: * multiple choice * dataset: COSMOS ## To reproduce https://colab.research.google.com/drive/1uTwJ1X1WTxOTDSduKqoUTg3oizehPFJB?usp=sharing While executing this command: ```python model = FlaxGPT2ForMultipleChoice.from_pretrained('gpt2') ``` ## Expected behavior To run the preceding code without any warnings or randomly initialized kernels.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12673/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12672/comments
https://api.github.com/repos/huggingface/transformers/issues/12672/events
https://github.com/huggingface/transformers/pull/12672
942,670,757
MDExOlB1bGxSZXF1ZXN0Njg4MzcxNTY5
12,672
[doc] fix distil* example link
{ "login": "songyouwei", "id": 2573291, "node_id": "MDQ6VXNlcjI1NzMyOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2573291?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songyouwei", "html_url": "https://github.com/songyouwei", "followers_url": "https://api.github.com/users/songyouwei/followers", "following_url": "https://api.github.com/users/songyouwei/following{/other_user}", "gists_url": "https://api.github.com/users/songyouwei/gists{/gist_id}", "starred_url": "https://api.github.com/users/songyouwei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songyouwei/subscriptions", "organizations_url": "https://api.github.com/users/songyouwei/orgs", "repos_url": "https://api.github.com/users/songyouwei/repos", "events_url": "https://api.github.com/users/songyouwei/events{/privacy}", "received_events_url": "https://api.github.com/users/songyouwei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great, thanks @songyouwei !\r\n\r\nCould you run `make fixup` at the root of your clone to fix the code quality issue? Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
CONTRIBUTOR
null
fix broken links
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12672/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12672", "html_url": "https://github.com/huggingface/transformers/pull/12672", "diff_url": "https://github.com/huggingface/transformers/pull/12672.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12672.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12671/comments
https://api.github.com/repos/huggingface/transformers/issues/12671/events
https://github.com/huggingface/transformers/pull/12671
942,551,509
MDExOlB1bGxSZXF1ZXN0Njg4MjY3NDQ3
12,671
Update generation_logits_process.py
{ "login": "willfrey", "id": 13784361, "node_id": "MDQ6VXNlcjEzNzg0MzYx", "avatar_url": "https://avatars.githubusercontent.com/u/13784361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willfrey", "html_url": "https://github.com/willfrey", "followers_url": "https://api.github.com/users/willfrey/followers", "following_url": "https://api.github.com/users/willfrey/following{/other_user}", "gists_url": "https://api.github.com/users/willfrey/gists{/gist_id}", "starred_url": "https://api.github.com/users/willfrey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willfrey/subscriptions", "organizations_url": "https://api.github.com/users/willfrey/orgs", "repos_url": "https://api.github.com/users/willfrey/repos", "events_url": "https://api.github.com/users/willfrey/events{/privacy}", "received_events_url": "https://api.github.com/users/willfrey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @willfrey, I've merged many of your PRs (thanks for that 🤗) but I don't agree with this one since there is no integer between 0 and 1 so it'll be nice to have the instance check here (in case someone passes a tensor or something).", "I’d suggest checking against numbers.Integral then because that’s the ABC/protocol for anything like an integer.\n\n> On Jul 29, 2021, at 3:38 PM, Kevin Canwen Xu ***@***.***> wrote:\n> \n> \n> Hi @willfrey, I've merged many of your PRs but I don't agree with this one since there is no integer between 0 and 1 so it'll be nice to have the instance check here (in case someone passes a tensor or something).\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "Sorry, not numbers.Integral but numbers.Real. ", "I'm just trying to not have it raise an exception if I pass in `1` instead of `1.0` because I can't rely on passing `None` to default to `1.0` because it may be overridden by a model's config.\r\n\r\nI can change it so that it checks against that and won't yell at you for passing a `1`, if you'd prefer.", "> I'm just trying to not have it raise an exception if I pass in `1` instead of `1.0` because I can't rely on passing `None` to default to `1.0` because it may be overridden by a model's config.\r\n> \r\n> I can change it so that it checks against that and won't yell at you for passing a `1`, if you'd prefer.\r\n\r\nHi @willfrey I checked again and I think the best solution is to add a try-except for the typecasting `float()`. If it throws an exception we should catch it and tell the users you should pass a number", "Sure, I’ll make that change!\n\n> On Jul 30, 2021, at 10:49 AM, Kevin Canwen Xu ***@***.***> wrote:\n> \n> \n> I'm just trying to not have it raise an exception if I pass in 1 instead of 1.0 because I can't rely on passing None to default to 1.0 because it may be overridden by a model's config.\n> \n> I can change it so that it checks against that and won't yell at you for passing a 1, if you'd prefer.\n> \n> Hi @willfrey I checked again and I think the best solution is to add a try-except for the typecasting float(). If it throws an exception we should catch it and tell the users you should pass a number\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@willfrey Hi, could you make the change as we discussed? I'm pinging again since the stale bot is pinging.", "Hi @JetRunner.\r\n\r\nSorry, had this drop off my radar.\r\n\r\nDo we want to re-raise an exception from just calling `float(top_p)`? That'll throw a `TypeError` if whatever the original `top_p` parameter does not support being an argument to `float(...)`.\r\n\r\nIt'd basically be:\r\n\r\n```py3\r\ntry:\r\n top_p = float(top_p)\r\nexcept TypeError:\r\n raise TypeError(f\"cannot interpret {top_p!r} as a float\")\r\n```\r\n\r\nwhich seems a little redundant.\r\n\r\nHappy to make the change if you want, though.", "You made a point! I'll just merge it as it is now." ]
1,626
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? If you're using type hints, then passing an `int` where a `float` is annotated is acceptable as per [PEP 484](https://www.python.org/dev/peps/pep-0484/#the-numeric-tower). This makes life a little nicer. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ~Fixes # (issue)~ ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12671", "html_url": "https://github.com/huggingface/transformers/pull/12671", "diff_url": "https://github.com/huggingface/transformers/pull/12671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12671.patch", "merged_at": 1629830045000 }
https://api.github.com/repos/huggingface/transformers/issues/12670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12670/comments
https://api.github.com/repos/huggingface/transformers/issues/12670/events
https://github.com/huggingface/transformers/issues/12670
942,513,844
MDU6SXNzdWU5NDI1MTM4NDQ=
12,670
Converting fairseq roberta to transformer throws ModuleAttributeError: 'RobertaHubInterface' object has no attribute 'args'
{ "login": "fdas3213", "id": 22696996, "node_id": "MDQ6VXNlcjIyNjk2OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/22696996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fdas3213", "html_url": "https://github.com/fdas3213", "followers_url": "https://api.github.com/users/fdas3213/followers", "following_url": "https://api.github.com/users/fdas3213/following{/other_user}", "gists_url": "https://api.github.com/users/fdas3213/gists{/gist_id}", "starred_url": "https://api.github.com/users/fdas3213/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fdas3213/subscriptions", "organizations_url": "https://api.github.com/users/fdas3213/orgs", "repos_url": "https://api.github.com/users/fdas3213/repos", "events_url": "https://api.github.com/users/fdas3213/events{/privacy}", "received_events_url": "https://api.github.com/users/fdas3213/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "line 81&82 `roberta_sent_encoder.emb_layer_norm` should be changed to `roberta_sent_encoder.layernorm_embedding` ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I met the same error just now, thanks for your solution, and I was wondering why attention was not paid to this bug?", "I just met too. Thanks for sharing the bugs here!!" ]
1,626
1,671
1,629
NONE
null
https://github.com/huggingface/transformers/blob/c523b241c2e50c3ed035bb76b938b6a944fed7e5/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py#L59 Had this error `ModuleAttributeError: 'RobertaHubInterface' object has no attribute 'args'` when running ``` convert_roberta_original_pytorch_checkpoint_to_pytorch.convert_roberta_checkpoint_to_pytorch(roberta_checkpoint_path='/home/ubuntu/fairseq/checkpoints/', pytorch_dump_folder_path='./huggingface/', classification_head=False) ``` `roberta.args.encoder_embed_dim` should now be converted to `roberta.model.encoder.args.encoder_embed_dim` to bypass this issue with the current fairseq version
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12670/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12669/comments
https://api.github.com/repos/huggingface/transformers/issues/12669/events
https://github.com/huggingface/transformers/pull/12669
942,509,692
MDExOlB1bGxSZXF1ZXN0Njg4MjMwNDcw
12,669
[tokenizer.prepare_seq2seq_batch] change deprecation to be easily actionable
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for iterating on this!" ]
1,626
1,626
1,626
CONTRIBUTOR
null
An attempt to make the deprecation message easier to understand and act upon by giving explicit instructions on what needs to be done. Fixes: https://github.com/huggingface/transformers/issues/12622 @sgugger
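For context, the actionable replacement that the deprecation points users toward looks roughly like the sketch below. The model name is illustrative, and `as_target_tokenizer` is assumed to be the recommended pattern in transformers 4.x:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

src_texts = ["I love apples."]
tgt_texts = ["Ich liebe Äpfel."]

# Old (deprecated): tokenizer.prepare_seq2seq_batch(src_texts, tgt_texts, ...)
# New: call the tokenizer on the source texts, then tokenize the targets
# inside the as_target_tokenizer context manager.
batch = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt")
batch["labels"] = labels["input_ids"]
```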
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12669", "html_url": "https://github.com/huggingface/transformers/pull/12669", "diff_url": "https://github.com/huggingface/transformers/pull/12669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12669.patch", "merged_at": 1626193144000 }
https://api.github.com/repos/huggingface/transformers/issues/12668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12668/comments
https://api.github.com/repos/huggingface/transformers/issues/12668/events
https://github.com/huggingface/transformers/issues/12668
942,425,384
MDU6SXNzdWU5NDI0MjUzODQ=
12,668
Vocab size difference between tokenizer and config for XLMR.
{ "login": "erip", "id": 2348806, "node_id": "MDQ6VXNlcjIzNDg4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erip", "html_url": "https://github.com/erip", "followers_url": "https://api.github.com/users/erip/followers", "following_url": "https://api.github.com/users/erip/following{/other_user}", "gists_url": "https://api.github.com/users/erip/gists{/gist_id}", "starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erip/subscriptions", "organizations_url": "https://api.github.com/users/erip/orgs", "repos_url": "https://api.github.com/users/erip/repos", "events_url": "https://api.github.com/users/erip/events{/privacy}", "received_events_url": "https://api.github.com/users/erip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! If you want the configuration and tokenizer to match the same checkpoint, you should load them from same checkpoint:\r\n\r\n```py\r\n>>> from transformers import XLMRobertaConfig\r\n>>> XLMRobertaConfig.from_pretrained('xlm-roberta-base').vocab_size\r\n250002\r\n>>> from transformers import AutoTokenizer\r\n>>> AutoTokenizer.from_pretrained('xlm-roberta-base').vocab_size\r\n250002\r\n```\r\n", "Thanks, @LysandreJik. I guess fundamentally my question isn't just \"how do I get the expected vocab size\", but also \"why is the default size wrong\"? The vocab with size 30522 is from BERT; XLM-R has no configuration in which this vocab size is used. Why doesn't the config represent the config used in the paper?", "The issue is that the configuration of this model is a simpler wrapper over RoBERTa since it's basically a copy of that model.\r\n\r\nI do agree that this is misleading however, as it puts the wrong defaults. We should make the two configurations independent and provide the correct defaults for XLM-R.\r\n\r\nWould you like to open a PR to propose a fix for this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.8.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik maybe? <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): XLM Roberta The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python >>> from transformers.models.xlm_roberta import XLMRobertaConfig >>> XLMRobertaConfig().vocab_size 30522 >>> from transformers import AutoTokenizer >>> AutoTokenizer.from_pretrained('xlm-roberta-base').vocab_size 250002 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect the vocab sizes to be the same.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12668/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12667/comments
https://api.github.com/repos/huggingface/transformers/issues/12667/events
https://github.com/huggingface/transformers/pull/12667
942,381,429
MDExOlB1bGxSZXF1ZXN0Njg4MTE4MDg3
12,667
Adding TF translation example
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12667/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12667", "html_url": "https://github.com/huggingface/transformers/pull/12667", "diff_url": "https://github.com/huggingface/transformers/pull/12667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12667.patch", "merged_at": 1626199705000 }
https://api.github.com/repos/huggingface/transformers/issues/12666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12666/comments
https://api.github.com/repos/huggingface/transformers/issues/12666/events
https://github.com/huggingface/transformers/issues/12666
942,379,885
MDU6SXNzdWU5NDIzNzk4ODU=
12,666
translation with identical source and target language, for text normalization
{ "login": "desothier1", "id": 50878643, "node_id": "MDQ6VXNlcjUwODc4NjQz", "avatar_url": "https://avatars.githubusercontent.com/u/50878643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/desothier1", "html_url": "https://github.com/desothier1", "followers_url": "https://api.github.com/users/desothier1/followers", "following_url": "https://api.github.com/users/desothier1/following{/other_user}", "gists_url": "https://api.github.com/users/desothier1/gists{/gist_id}", "starred_url": "https://api.github.com/users/desothier1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/desothier1/subscriptions", "organizations_url": "https://api.github.com/users/desothier1/orgs", "repos_url": "https://api.github.com/users/desothier1/repos", "events_url": "https://api.github.com/users/desothier1/events{/privacy}", "received_events_url": "https://api.github.com/users/desothier1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
Hi, This is rather a general question about translation, and I am aware that I don't exactly follow your guidelines, so I apologize for that. (We could run the examples mentioned in your readme, great tool!) We are trying to frame text normalization for Dutch as a 'translation' task. So, is it possible to define the source and target language as the same language, for instance --source_lang nl_XX \ --target_lang nl_XX \ {"translation": {"nl_XX": "liefst geen energie vandaag . waar is **m'n** oplaadstation ?", "nl_XX": "liefst geen energie vandaag . waar is **mijn** oplaadstation ?"}} or --source_lang source\ --target_lang target \ {"translation": {"source": "liefst geen energie vandaag . waar is **m'n** oplaadstation ?", "target": "liefst geen energie vandaag . waar is **mijn** oplaadstation ?"}} ## Environment info https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation --model_name_or_path facebook/mbart-large-50-many-to-many-mmt Thanks for your answer!
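One caveat worth noting about the first variant above: a JSON object cannot contain the key "nl_XX" twice (the second value would silently overwrite the first), so only the second variant with distinct keys can actually be serialized. A minimal, hypothetical sketch for producing such a JSON-lines training file, assuming the example script accepts arbitrary --source_lang/--target_lang column names:

```python
import json

# Noisy/clean sentence pairs for normalization framed as "translation".
pairs = [
    ("liefst geen energie vandaag . waar is m'n oplaadstation ?",
     "liefst geen energie vandaag . waar is mijn oplaadstation ?"),
]

with open("normalization_train.json", "w", encoding="utf-8") as f:
    for noisy, clean in pairs:
        # Distinct "source"/"target" keys avoid the duplicate-key problem.
        record = {"translation": {"source": noisy, "target": clean}}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```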
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12665/comments
https://api.github.com/repos/huggingface/transformers/issues/12665/events
https://github.com/huggingface/transformers/issues/12665
942,349,377
MDU6SXNzdWU5NDIzNDkzNzc=
12,665
word_ids() returned by RoBERTa Tokenizer behaves inconsistently for alphanumeric tokens like '18th'
{ "login": "hos-arafat", "id": 36512796, "node_id": "MDQ6VXNlcjM2NTEyNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/36512796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hos-arafat", "html_url": "https://github.com/hos-arafat", "followers_url": "https://api.github.com/users/hos-arafat/followers", "following_url": "https://api.github.com/users/hos-arafat/following{/other_user}", "gists_url": "https://api.github.com/users/hos-arafat/gists{/gist_id}", "starred_url": "https://api.github.com/users/hos-arafat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hos-arafat/subscriptions", "organizations_url": "https://api.github.com/users/hos-arafat/orgs", "repos_url": "https://api.github.com/users/hos-arafat/repos", "events_url": "https://api.github.com/users/hos-arafat/events{/privacy}", "received_events_url": "https://api.github.com/users/hos-arafat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the very helpful reproducer! @n1t0, @SaulLu, could you take a look? Thank you!", "Thank you for providing a code snippets @hos-arafat !\r\n\r\nIf I understand your request correctly, you would like to retrieve the index of the word to which each token belongs.\r\n\r\nIf this is your request, you have two ways of doing this - @n1t0 don't hesitate to correct me - : \r\n\r\n1. **By letting your tokenizer automatically guess what a word is**\r\nThis is the option you use in the example you showed. In this case, the tokenizer uses the tokenizer's pre-tokenization component to define what a word is. On your example, you can see this breakdown by doing:\r\n```python\r\nsentences = [\"During the 1980s , life was something else\", \"An 18th century poet\"]\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True)\r\nprint(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(sentences[0]))\r\ntokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(sentences[1])\r\n```\r\nAnd you will have as output:\r\n```\r\n[('During', (0, 6)),\r\n ('Ġthe', (6, 10)),\r\n ('Ġ1980', (10, 15)),\r\n ('s', (15, 16)),\r\n ('Ġ,', (16, 18)),\r\n ('Ġlife', (18, 23)),\r\n ('Ġwas', (23, 27)),\r\n ('Ġsomething', (27, 37)),\r\n ('Ġelse', (37, 42))]\r\n```\r\n```\r\n[('An', (0, 2)),\r\n ('Ġ18', (2, 5)),\r\n ('th', (5, 7)),\r\n ('Ġcentury', (7, 15)),\r\n ('Ġpoet', (15, 20))]\r\n```\r\n\r\nIndeed, there you can see that the ByteLevel pre-tokenization separates the numeric characters from the others.\r\n\r\n2. **By specifying before the tokenization the tokens which must belong to the same word**\r\nIf ever the separation proposed by the pre-tokenizer does not suit you, you have the possibility of specifying yourself the list of \"words\" you wish by giving to the tokenizer a list of words instead of a sentence. The only constraint with the tokenizer you use is that you must set the `add_prefix_space` argument to `True`. On your example, if for example you want to consider that words are separated by spaces, you could do:\r\n```python\r\nsentences_splited_into_words = [sentence.split(\" \") for sentence in sentences]\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True, add_prefix_space=True)\r\n\r\ne = tokenizer.batch_encode_plus(\r\n sentences_splited_into_words, return_tensors=\"pt\", padding=True, is_split_into_words=True\r\n)\r\n\r\nprint(e.tokens(0))\r\nprint(e.word_ids(0))\r\n\r\nprint(e.tokens(1))\r\nprint(e.word_ids(1))\r\n```\r\nOutput:\r\n```\r\n['<s>', 'ĠDuring', 'Ġthe', 'Ġ1980', 's', 'Ġ,', 'Ġlife', 'Ġwas', 'Ġsomething', 'Ġelse', '</s>']\r\n[None, 0, 1, 2, 2, 3, 4, 5, 6, 7, None]\r\n```\r\n```\r\n['<s>', 'ĠAn', 'Ġ18', 'th', 'Ġcentury', 'Ġpoet', '</s>', '<pad>', '<pad>', '<pad>', '<pad>']\r\n[None, 0, 1, 1, 2, 3, None, None, None, None, None]\r\n```\r\n\r\nI hope this answers your question and if it doesn't, don't hesitate to tell me! :smile: ", "Apologies for the late response, had to study and sit for an exam yesterday (aced it!). \r\nThank you for the quick response, and glad the reproducer was helpful ! @LysandreJik @SaulLu .\r\n\r\nThat's exactly right @SaulLu , I am interested in retrieving the index of every sub-token and to what \"full\" word it belongs to. 
For example: \r\n\r\n```python\r\n['An', 'Ġ18', 'th', 'Ġcentury', 'Ġpoet'] # the tokenizer splits '18' and 'th' so len = 5\r\n\r\n# This sentence will have labels: \r\n[ 'O', 'O', 'O', 'O'] # len = 4\r\n\r\n# Using the word_ids(), I get the index of the first sub-token of each word \r\n# and create the following list:\r\n['An', 'Ġ18', 'Ġcentury', 'Ġpoet'] # I DROP the sub-token 'th' so len = label_len = 4\r\n\r\n# When the word_ids() is incorrect (does NOT tell me what tokens were split)\r\n# I end up doing loss(predictions, labels)\r\n# which throws an error cuz len(predictions) > len(labels)\r\n\r\n```\r\n\r\nThank you for the solutions you offered ! They are both helpful. I can do two things:\r\n\r\n1. Instead of using the ````word_ids()```` I can use the output of / tuples returned by ````pre_tokenize_str()```` in order to figure out what words were split into many sub-tokens and only take the first subtoken\r\n\r\n2. Since the ````word_ids()```` are returned correctly when I split the string, I can keep using them and just split my sentences based on whitespaces using ````split()```` and add the argument ````is_split_into_words=True```` to ````batch_encode_plus() ````\r\n\r\n\r\nI am wondering why ````word_ids()```` is returned incorrectly as I highlighted in the reproducer though. Will try to investigate the ````GPT2Tokenizer```` class and ````tokenize()```` and see if I can spot something and contribute a fix! Would love to give back to this awesome library!\r\n\r\nThanks again for your help!\r\n", "Glad it helped :hugs: and great that your exam went well! \r\n\r\n> I am wondering why word_ids() is returned incorrectly as I highlighted in the reproducer though. Will try to investigate the GPT2Tokenizer class and tokenize() and see if I can spot something and contribute a fix! Would love to give back to this awesome library!\r\n\r\nThat is really nice of you! Personally, I think that the `word_ids` tokenizer method behaves in the desired way. However, I think we could be more specific in [documenting](https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=word_ids#transformers.BatchEncoding.word_ids) the `word_ids` method in the :hugs: transformers library so that it gives as much information as the underlying function used about the role of the pre-tokenizer which is in the :hugs: tokenizers library and is documented [here](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=word_ids#tokenizers.Encoding.word_ids). Would you like to propose a reformulation of the documentation in the transformers library :slightly_smiling_face: ? \r\n\r\nIn order to make it easier to read my answer, I put a copy of the two documentations below.\r\n\r\n- `word_ids` method in the :hugs: transformers:\r\n\r\n ``` python\r\n def word_ids(self, batch_index: int = 0) -> List[Optional[int]]:\r\n \"\"\"\r\n Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.\r\n\r\n Args:\r\n batch_index (:obj:`int`, `optional`, defaults to 0): The index to access in the batch.\r\n\r\n Returns:\r\n :obj:`List[Optional[int]]`: A list indicating the word corresponding to each token. 
Special tokens added by\r\n the tokenizer are mapped to :obj:`None` and other tokens are mapped to the index of their corresponding\r\n word (several tokens will be mapped to the same word index if they are parts of that word).\r\n \"\"\"\r\n ```\r\n \r\n\r\n- `word_ids` method in the :hugs: tokenizers:\r\n\r\n ``` python\r\n def word_ids(self):\r\n \"\"\"\r\n The generated word indices.\r\n\r\n They represent the index of the word associated to each token.\r\n When the input is pre-tokenized, they correspond to the ID of the given input label,\r\n otherwise they correspond to the words indices as defined by the\r\n :class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used.\r\n\r\n For special tokens and such (any token that was generated from something that was\r\n not part of the input), the output is :obj:`None`\r\n\r\n Returns:\r\n A :obj:`List` of :obj:`Optional[int]`: A list of optional word index.\r\n \"\"\"\r\n ```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,630
1,630
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - tokenizers: @LysandreJik (Specifically the RoBERTa / GPT tokenizer @patrickvonplaten) ## Information Model I am using is RoBERTa. The problem arises when using: * [ ] my own modified scripts: A simple script that uses RoBERTa to do NER. The tasks I am working on is: * [ ] my own task or dataset: I am doing Named Entity Recognition (NER) on the ````conll2003```` dataset from the ````datasets```` library. As such, I am using RoBERTa + a classification head on top to classify each token in the sequence. Moreover, when the RoBERTa Tokenizer splits a word into many sub-tokens, I pass the entire sentence through RoBERTa then, using the ````word_ids```` returned by ````Tokenizer.batch_encode_plus````, pass only the contextual embeddings associated with the first sub-token of each word into my final classification head. (otherwise, the ````len(prediction) > len(label)````). Detailed code of this can be found in the final Section below. ## The Problem The problem is with the ````word_ids()```` returned by ````batch_encode_plus()```` for sentences that have alphanumeric tokens like ````'18th'```` or ````'1980s'````. Where the ````word_ids()```` will be as follows: ```python ['During', 'Ġthe', 'Ġ1980', 's', 'Ġ,', 'Ġlife', 'Ġwas', 'Ġweird'] # No 'Ġ' before 's', as expected, but word_ids = [None, 0, 1, 2, 3, 4, 5, 6, 7, None] # This causes a problem ! I expect it to be word_ids = [None, 0, 1, 2, 2.... ['An', 'Ġ18', 'th', 'Ġcentury', 'Ġpoet'] # No 'Ġ' before 'th', as expected, but word_ids = [None, 0, 1, 2, 3, 4, None, None, None, None] # This causes a problem ! I expect it to be word_ids = [None, 0, 1, 1.... ``` Notice that the token ````'1980s'```` was split into ````['Ġ1980', 's']```` but the ````word_ids```` did NOT indicate this, as what is returned is ````[None, 0, 1, 2, 3, 4, 5, 6, 7, None]````. Which indicates that the sub-token ````'s'```` is its own word (and NOT a sub-token of the word ````'1980s'````) ## To reproduce Steps to reproduce the behavior: 1. Import and Initialize the RoBERTa Tokenizer (Fast) ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True) ``` 2. ````batch_encode_plus```` sentences that have alphanumeric tokens like ````'18th'```` and ````'1980s'````: ```python sentences = ["During the 1980s , life was something else", "An 18th century poet"] e = tokenizer.batch_encode_plus(sentences, return_tensors='pt', padding=True) ``` 3. Print and inspect the ````word_ids(i)```` ```python print(tokenizer.tokenize(sentences[0])) print(e.word_ids(0)) print(tokenizer.tokenize(sentences[1])) print(e.word_ids(1)) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The ````word_ids```` should correctly indicate whenever tokens such as ````'1980s'```` and ````'18th'```` are split: ```python ['<s>', 'An', 'Ġ18', 'th', 'Ġcentury', 'Ġpoet', '</s>'] [None, 0, 1, 1, 2, 3, None] ``` ## Detailed Code ```python input_sentence = ["He lives joyfully"] label = ["O", "O", "O"] tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True) model = AutoModel.from_pretrained("roberta-base") encoded_x = tokenizer.batch_encode_plus(input_sentence, return_tensors='pt', padding=True) # The input sentence now becomes ["<s>", "ĠHe", "Ġlives", "Ġjoy", "fully", "</s>"] contextual_embeddings = model(encoded_x.input_ids).last_hidden_state # [1, 6, 768] tensor. # I need to pass a [1, 3, 768] tensor into my final classification head # So, I wrote a function that takes as input the word_ids # and returns a list of the first sub-token of each word (dropping <s> and </s>) # Function NOT included here for brevity. Same function works perfectly for BERT my_function( [None, 0, 1, 2, 2, None] ) -> [0, 1, 2] first_subtoken = torch.LongTensor([0, 1, 2]) embeddings_of_interest = contextual_embeddings[:, first_subtoken, :] # [1, 3, 768] tensor ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12665/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12664/comments
https://api.github.com/repos/huggingface/transformers/issues/12664/events
https://github.com/huggingface/transformers/pull/12664
942,347,943
MDExOlB1bGxSZXF1ZXN0Njg4MDg5MDQw
12,664
Add option to load a pretrained model with mismatched shapes
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do? Sometimes, users want to load a checkpoint for a given task with a new head for the same task but different shapes. For instance, they may want to use a checkpoint that does text classification on 2 labels to initialize a model that does text classification on 5 labels. This PR enables that by adding a new argument to the `from_pretrained` method of `PreTrainedModel`, `TFPreTrainedModel` and `FlaxPreTrainedModel` named `ignore_mismatched_sizes`. When set to True, this argument will ignore the weights from the checkpoint that do not have the same shape as the ones inside the model and leave the randomly initialized weights.
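A short usage sketch of the new argument described above. The checkpoint name is illustrative; any sequence-classification checkpoint with a different number of labels would do:

```python
from transformers import BertForSequenceClassification

# Checkpoint fine-tuned with a 2-label head, reused for a 5-label task.
# Mismatched classifier weights are skipped and left randomly initialized.
model = BertForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-SST-2",  # illustrative 2-label checkpoint
    num_labels=5,
    ignore_mismatched_sizes=True,
)
```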
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12664/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12664", "html_url": "https://github.com/huggingface/transformers/pull/12664", "diff_url": "https://github.com/huggingface/transformers/pull/12664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12664.patch", "merged_at": 1626185715000 }
https://api.github.com/repos/huggingface/transformers/issues/12663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12663/comments
https://api.github.com/repos/huggingface/transformers/issues/12663/events
https://github.com/huggingface/transformers/pull/12663
942,300,266
MDExOlB1bGxSZXF1ZXN0Njg4MDQ3OTQw
12,663
Fix typo in README_zh-hans.md
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12663", "html_url": "https://github.com/huggingface/transformers/pull/12663", "diff_url": "https://github.com/huggingface/transformers/pull/12663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12663.patch", "merged_at": 1626112212000 }
https://api.github.com/repos/huggingface/transformers/issues/12662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12662/comments
https://api.github.com/repos/huggingface/transformers/issues/12662/events
https://github.com/huggingface/transformers/pull/12662
942,269,542
MDExOlB1bGxSZXF1ZXN0Njg4MDIwODQ2
12,662
[Flax Generation] Correct inconsistencies PyTorch/Flax
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The Spanish Marian model should be fixed as well on TPU (cc @gchhablani )\r\n\r\n```python\r\nfrom transformers import FlaxMarianMTModel, MarianTokenizer\r\nimport torch\r\n\r\nmodel_name = \"Helsinki-NLP/opus-mt-en-es\"\r\n\r\nmodel_fx = FlaxMarianMTModel.from_pretrained(model_name)\r\n\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\n\r\ninput_ids = tokenizer(\"Living Room, The Sheridan House! Your Minneapolis Home!\", return_tensors=\"np\").input_ids\r\n\r\nsequences_fx = model_fx.generate(input_ids, max_length=64, num_beams=2).sequences\r\n\r\ndecoded_fx = tokenizer.batch_decode(sequences_fx, skip_special_tokens=True)\r\n\r\nprint(\"Out Fx\", decoded_fx)\r\n```" ]
1,626
1,626
1,626
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The Flax greedy & beam search generation & Marian model had a couple of issues that are addressed here: - greedy search now correctly pads **after** the eos token & test against PyTorch is added - beam search now correctly computes the finished beam scores - marian correctly makes use of bias - more beam search tests for marian are added ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12662/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12662", "html_url": "https://github.com/huggingface/transformers/pull/12662", "diff_url": "https://github.com/huggingface/transformers/pull/12662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12662.patch", "merged_at": 1626198810000 }
https://api.github.com/repos/huggingface/transformers/issues/12661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12661/comments
https://api.github.com/repos/huggingface/transformers/issues/12661/events
https://github.com/huggingface/transformers/issues/12661
942,266,430
MDU6SXNzdWU5NDIyNjY0MzA=
12,661
'TransfoXLLMHeadModelOutput' object has no attribute 'loss'
{ "login": "phfaustini", "id": 8069807, "node_id": "MDQ6VXNlcjgwNjk4MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/8069807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phfaustini", "html_url": "https://github.com/phfaustini", "followers_url": "https://api.github.com/users/phfaustini/followers", "following_url": "https://api.github.com/users/phfaustini/following{/other_user}", "gists_url": "https://api.github.com/users/phfaustini/gists{/gist_id}", "starred_url": "https://api.github.com/users/phfaustini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phfaustini/subscriptions", "organizations_url": "https://api.github.com/users/phfaustini/orgs", "repos_url": "https://api.github.com/users/phfaustini/repos", "events_url": "https://api.github.com/users/phfaustini/events{/privacy}", "received_events_url": "https://api.github.com/users/phfaustini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, there's an issue with the docstring! `TransfoXL` has two losses, here's the correct snippet:\r\n```py\r\nimport torch\r\nfrom transformers import TransfoXLTokenizer, TransfoXLLMHeadModel\r\ntokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')\r\nmodel = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\noutputs = model(**inputs, labels=inputs[\"input_ids\"])\r\nlosses = outputs.losses\r\n```\r\n\r\nWould you like to open a PR to fix the docstring?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-5.11.0-7620-generic-x86_64-with-glibc2.10 - Python version: 3.8.10 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) Models: - Transformer XL ## To reproduce Simply run the example from the documentation: https://huggingface.co/transformers/model_doc/transformerxl.html#transfoxllmheadmodel Steps to reproduce the behavior: ```python import torch from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103') model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) loss = outputs.loss ``` ## Expected behavior I should get a loss, but an exception is thrown instead: AttributeError: 'TransfoXLLMHeadModelOutput' object has no attribute 'loss'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12661/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12661/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12660/comments
https://api.github.com/repos/huggingface/transformers/issues/12660/events
https://github.com/huggingface/transformers/pull/12660
942,242,422
MDExOlB1bGxSZXF1ZXN0Njg3OTk3NDY5
12,660
Updates timeline for project evaluation
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12660/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12660", "html_url": "https://github.com/huggingface/transformers/pull/12660", "diff_url": "https://github.com/huggingface/transformers/pull/12660.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12660.patch", "merged_at": 1626117899000 }
https://api.github.com/repos/huggingface/transformers/issues/12659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12659/comments
https://api.github.com/repos/huggingface/transformers/issues/12659/events
https://github.com/huggingface/transformers/issues/12659
942,212,266
MDU6SXNzdWU5NDIyMTIyNjY=
12,659
Can't load pretrained model when working in virtual environment
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! There shouldn't be any difference between using the system-wide environment vs the virtual environment. We mostly use virtual environments to work on `transformers` and we heavily recommend using one when working with `transformers`. \r\n\r\nAre you sure the error comes from the virtual environment and now from another setup issue?", "@LysandreJik If I run:\r\n```python\r\nfrom transformers import BertForSequenceClassification\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n```\r\nusing the system-wide environment, then everything works fine. If I activate the venv and run the exact same code, I get the above error. I'm not sure if there is other information I can provide you that would be useful, but I don't change anything in the setup.\r\n\r\nMaking the system-wide and venv `transformers` version `4.4.2` resolves the error. Making the venv `transformers` version `4.8.2` reproduces the error.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Pytorch - Python version: 3.7.6 - PyTorch version (GPU?): 1.9.0 no GPU - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Using Bert, specifically BertForSequenceClassification ## To reproduce Steps to reproduce the behavior: 1. Create a virtual environment `python -m venv <name_of_env>` 2. `pip install transformers` 3. `source /path/to/venv/bin/activate` 4. Try to load the BertForSequenceClassification model Here is a code snippet: ```python from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained('bert-base-uncased') ``` Below is the error message I get: ```python HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 696, in urlopen self._prepare_proxy(conn) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 964, in _prepare_proxy conn.connect() File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 359, in connect conn = self._connect_tls_proxy(hostname, conn) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 506, in _connect_tls_proxy ssl_context=ssl_context, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket sock, context, tls_in_tls, server_hostname=server_hostname File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 423, in wrap_socket session=session File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 870, in _create self.do_handshake() File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 1139, in do_handshake self._sslobj.do_handshake() OSError: [Errno 0] Error During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 505, in get_config_dict user_agent=user_agent, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1337, in cached_path local_files_only=local_files_only, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1499, in get_from_cache r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 510, in send raise ProxyError(e, request=request) requests.exceptions.ProxyError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1086, in from_pretrained **kwargs, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 440, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 517, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'bert-base-uncased'. Make sure that: - 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-uncased' is the correct path to a directory containing a config.json file ``` ## Expected behavior I had expected that the virtual environment would not affect the download or declaring the `model` variable. If I don't run the virtual environment, the above code works. I believe I have located the models in the `~/.cache/huggingface/transformers` directory so if there is a particular place those should be copied to in the `/path/to/venv/` directory let me know. I tried just copying `~/.cache/huggingface` to `/path/to/venv/` and still get the same error. I will also mention that I am working behind a proxy, but setting the `proxies` parameter doesn't seem to help either. That being said, I do have the model in `~/.cache/huggingface/transformers` and the proxy does not affect the above code snippet when running without the virtual environment. Thanks in advance for your help! **UPDATE** I changed from `transformers 4.8.2` to `transformers 4.4.2` and the problem goes away.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12659/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12658/comments
https://api.github.com/repos/huggingface/transformers/issues/12658/events
https://github.com/huggingface/transformers/issues/12658
942,169,563
MDU6SXNzdWU5NDIxNjk1NjM=
12,658
Autotokenizer error "Already borrowed" when used on thread pool
{ "login": "Warra07", "id": 19632982, "node_id": "MDQ6VXNlcjE5NjMyOTgy", "avatar_url": "https://avatars.githubusercontent.com/u/19632982?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Warra07", "html_url": "https://github.com/Warra07", "followers_url": "https://api.github.com/users/Warra07/followers", "following_url": "https://api.github.com/users/Warra07/following{/other_user}", "gists_url": "https://api.github.com/users/Warra07/gists{/gist_id}", "starred_url": "https://api.github.com/users/Warra07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Warra07/subscriptions", "organizations_url": "https://api.github.com/users/Warra07/orgs", "repos_url": "https://api.github.com/users/Warra07/repos", "events_url": "https://api.github.com/users/Warra07/events{/privacy}", "received_events_url": "https://api.github.com/users/Warra07/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of https://github.com/huggingface/tokenizers/issues/537", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info

- `transformers` version: 4.8.2
- Platform: Databricks
- Python version: 3.7.10
- PyTorch version (GPU?): GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes

## Information

Model I am using: camembert-base.

The problem arises when I try to use a tokenizer (from whatever model in my experiments) on multiple thread pools with an AutoTokenizer: the error **RuntimeError: Already borrowed** gets raised. I haven't checked whether the same issue occurs with AutoModel, but I suspect it would. This makes it completely inefficient, as it requires duplicating the tokenizer on each thread (same for the model), and it is a real problem for packages like Petastorm / Horovod.

## To reproduce

Below you'll find a simple snippet of code to reproduce the error:

```python
from multiprocessing.dummy import Pool as ThreadPool

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert-base")

def tokenizer_test(text):
    print(tokenizer(text))

pool = ThreadPool(10)
data_list = ['this is a test'] * 10
pool.map(tokenizer_test, data_list)
pool.close()
pool.join()
```

However, this works fine if I replace the AutoTokenizer with, for example, CamembertTokenizer.from_pretrained("camembert-base").
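One workaround that appears to avoid the error is to give each thread its own tokenizer lazily via `threading.local`. A sketch, not an official fix:

```python
import threading
from multiprocessing.dummy import Pool as ThreadPool

from transformers import AutoTokenizer

_local = threading.local()

def get_tokenizer():
    # Lazily create one tokenizer per thread instead of sharing a single instance.
    if not hasattr(_local, "tokenizer"):
        _local.tokenizer = AutoTokenizer.from_pretrained("camembert-base")
    return _local.tokenizer

def tokenizer_test(text):
    print(get_tokenizer()(text))

pool = ThreadPool(10)
pool.map(tokenizer_test, ["this is a test"] * 10)
pool.close()
pool.join()
```

Since each worker thread builds its own instance, no two threads ever hold the same Rust tokenizer at once, which is what triggers the "Already borrowed" error.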
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12658/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12657/comments
https://api.github.com/repos/huggingface/transformers/issues/12657/events
https://github.com/huggingface/transformers/pull/12657
942,161,672
MDExOlB1bGxSZXF1ZXN0Njg3OTI4NzE5
12,657
Remove SageMaker documentation
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
# What does this PR do?

This PR removes the SageMaker documentation from huggingface.co/transformers, since there is new documentation at hf.co/docs/sagemaker. Not sure whether the deprecation "warning" should be displayed or kept just as a comment for us. What do you think?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12657", "html_url": "https://github.com/huggingface/transformers/pull/12657", "diff_url": "https://github.com/huggingface/transformers/pull/12657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12657.patch", "merged_at": 1626105772000 }
https://api.github.com/repos/huggingface/transformers/issues/12656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12656/comments
https://api.github.com/repos/huggingface/transformers/issues/12656/events
https://github.com/huggingface/transformers/pull/12656
942,146,511
MDExOlB1bGxSZXF1ZXN0Njg3OTE1NTM4
12,656
Pipeline should be agnostic
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice catch !" ]
1,626
1,626
1,626
MEMBER
null
The pipeline test was PyTorch-only but ran on both PT and TF, so the slow test was failing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12656", "html_url": "https://github.com/huggingface/transformers/pull/12656", "diff_url": "https://github.com/huggingface/transformers/pull/12656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12656.patch", "merged_at": 1626104579000 }
https://api.github.com/repos/huggingface/transformers/issues/12655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12655/comments
https://api.github.com/repos/huggingface/transformers/issues/12655/events
https://github.com/huggingface/transformers/pull/12655
942,129,431
MDExOlB1bGxSZXF1ZXN0Njg3OTAxMTA1
12,655
encode_plus() shouldn't run for W2V2CTC
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, there was a typo. Thanks!" ]
1,626
1,626
1,626
MEMBER
null
The W2V2CTC tokenizer shouldn't be used to create the input values for W2V2, so the output of `encode_plus` shouldn't be used as raw input for the model.
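For illustration, a sketch of the intended split of responsibilities; the checkpoint name is only an assumption for the example:

```python
import numpy as np
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz

# The feature extractor, not the CTC tokenizer, produces the model's input_values.
inputs = extractor(speech, sampling_rate=16000)

# The CTC tokenizer only turns transcriptions into label ids.
labels = tokenizer("HELLO WORLD").input_ids
```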
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12655", "html_url": "https://github.com/huggingface/transformers/pull/12655", "diff_url": "https://github.com/huggingface/transformers/pull/12655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12655.patch", "merged_at": 1626172316000 }
https://api.github.com/repos/huggingface/transformers/issues/12654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12654/comments
https://api.github.com/repos/huggingface/transformers/issues/12654/events
https://github.com/huggingface/transformers/pull/12654
942,117,454
MDExOlB1bGxSZXF1ZXN0Njg3ODkwOTgw
12,654
Pickle auto models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do?

The auto-generated classes for the Auto models are not picklable, because they are dynamically generated (so pickle can't trace them properly). This PR slightly changes the way the Auto classes are created: each is now defined in its modeling file as a proper class and then updated to add the right methods. As a result, the auto classes are now picklable.

Fixes #12621
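A minimal sketch of the behavior this enables, assuming the fix is in place (pickling a module-level class serializes it by reference, so the round-trip returns the same object):

```python
import pickle

from transformers import AutoModel, AutoModelForSequenceClassification

for cls in (AutoModel, AutoModelForSequenceClassification):
    # With properly defined module-level classes, this no longer raises a PicklingError.
    assert pickle.loads(pickle.dumps(cls)) is cls
```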
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12654/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12654", "html_url": "https://github.com/huggingface/transformers/pull/12654", "diff_url": "https://github.com/huggingface/transformers/pull/12654.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12654.patch", "merged_at": 1626102954000 }
https://api.github.com/repos/huggingface/transformers/issues/12653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12653/comments
https://api.github.com/repos/huggingface/transformers/issues/12653/events
https://github.com/huggingface/transformers/pull/12653
942,093,952
MDExOlB1bGxSZXF1ZXN0Njg3ODcwNTUy
12,653
[WIP] Patch BigBird tokenization test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @LysandreJik,\r\n\r\nEven original tokenizer is not introducing space before `[MASK]`, so I think tokenizer is alright & the test is wrong instead.\r\n\r\n```\r\nwget https://huggingface.co/google/bigbird-roberta-base/resolve/main/spiece.model\r\ns = spm.SentencePieceProcessor(model_file='spiece.model')\r\ns.decode([7434, 9894, 67, 9894, 7434])\r\n```\r\n", "Great, then merging this! Thanks @vasudevgupta7 " ]
1,626
1,626
1,626
MEMBER
null
This patches the BigBird integration test. The core of the issue is that the `[MASK]` token is an `AddedToken` with `lstrip=True`. It therefore gobbles up the spaces on the left without getting a sentence piece underline. As a result, when decoding, the internal sentence piece tokenizer is unaware that it should add a space in front of the `[MASK]` token. However, the original tokenizer does correctly decode with the space, so I believe there's an issue with our implementation. @vasudevgupta7 do you know the difference between the two implementations? Also cc @n1t0 and @SaulLu. Do not merge this, as this isn't the correct fix :)
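A small probe of the decoding behavior in question (the checkpoint name is taken from the discussion; the sentence is just an example):

```python
from transformers import BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

ids = tokenizer.encode("Paris is the [MASK] of France.")
# Inspect whether a space survives in front of [MASK] after the round-trip.
print(tokenizer.decode(ids, skip_special_tokens=False))
```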
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12653/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12653/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12653", "html_url": "https://github.com/huggingface/transformers/pull/12653", "diff_url": "https://github.com/huggingface/transformers/pull/12653.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12653.patch", "merged_at": 1626159186000 }
https://api.github.com/repos/huggingface/transformers/issues/12652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12652/comments
https://api.github.com/repos/huggingface/transformers/issues/12652/events
https://github.com/huggingface/transformers/pull/12652
942,069,940
MDExOlB1bGxSZXF1ZXN0Njg3ODQ5OTE1
12,652
Fix transfo xl integration test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Skipping test until https://github.com/huggingface/transformers/issues/12651 is resolved
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12652", "html_url": "https://github.com/huggingface/transformers/pull/12652", "diff_url": "https://github.com/huggingface/transformers/pull/12652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12652.patch", "merged_at": 1626105095000 }
https://api.github.com/repos/huggingface/transformers/issues/12651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12651/comments
https://api.github.com/repos/huggingface/transformers/issues/12651/events
https://github.com/huggingface/transformers/issues/12651
942,064,650
MDU6SXNzdWU5NDIwNjQ2NTA=
12,651
TF TransfoXL doesn't work with the `generate` method
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
MEMBER
null
The TF TransfoXL model does not output `logits` but `prediction_scores`, which are different due to the `AdaptiveEmbedding`. The TF version of `generate` requires `logits` to be output; therefore, the model doesn't work with the `generate` method.
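A minimal reproduction sketch, expected to fail at the time of writing since `generate` looks for `logits` in the model output:

```python
from transformers import TFTransfoXLLMHeadModel, TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TFTransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

inputs = tokenizer("The quick brown fox", return_tensors="tf")
# Raises because the model returns `prediction_scores` instead of `logits`.
model.generate(inputs["input_ids"], max_length=20)
```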
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12651/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12650/comments
https://api.github.com/repos/huggingface/transformers/issues/12650/events
https://github.com/huggingface/transformers/pull/12650
942,058,374
MDExOlB1bGxSZXF1ZXN0Njg3ODQwMjc5
12,650
The extended trainer tests should require torch
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
The extended trainer tests have no global torch requirement. Some tests have no decorator at all and therefore get run in the TF CI, failing because PyTorch is absent. This adds a torch requirement to all extended trainer tests.
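For illustration, a sketch of the kind of guard being added, using the library's existing test utility:

```python
import unittest

from transformers.testing_utils import require_torch

@require_torch
class ExampleExtendedTrainerTest(unittest.TestCase):
    # The whole class is skipped when PyTorch is not installed,
    # so it never fails in the TF-only CI.
    def test_trainer_behavior(self):
        ...
```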
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12650/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12650/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12650", "html_url": "https://github.com/huggingface/transformers/pull/12650", "diff_url": "https://github.com/huggingface/transformers/pull/12650.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12650.patch", "merged_at": 1626097626000 }
https://api.github.com/repos/huggingface/transformers/issues/12649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12649/comments
https://api.github.com/repos/huggingface/transformers/issues/12649/events
https://github.com/huggingface/transformers/pull/12649
942,035,337
MDExOlB1bGxSZXF1ZXN0Njg3ODIwNDM5
12,649
Skip TestMarian_MT_EN
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
Skip the test until #12647 is resolved.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12649/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12649", "html_url": "https://github.com/huggingface/transformers/pull/12649", "diff_url": "https://github.com/huggingface/transformers/pull/12649.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12649.patch", "merged_at": 1626095492000 }
https://api.github.com/repos/huggingface/transformers/issues/12648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12648/comments
https://api.github.com/repos/huggingface/transformers/issues/12648/events
https://github.com/huggingface/transformers/issues/12648
942,015,054
MDU6SXNzdWU5NDIwMTUwNTQ=
12,648
Inconsistency between the tokenization of `CLIPTokenizer` and `CLIPTokenizerFast` with `openai/clip-vit-base-patch32`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Great catch! \r\nIndeed, this is not normal. Feel free to give it a try to fix this as I won't be able to assign time for it this week, thanks :) ", "3 issues that are causing this in-consistency\r\n\r\n- The fast tokenizer was using `ByteLevel` `decoder ` which was not removing the end of word suffix `</w>`. Using `BPEDecoder` fixes this\r\n- CLIP uses `bos` and `eos` tokens, but the current post-processor is `ByteLevel` processor which does not add these, using `TemplateProcessing` instead fixes this.\r\n- Unlike GPT2's BPE tokenizer, CLIP's BPE does not represent space with `Ġ`. It instead repalces `</w>` with space during decoding. But the `BPE` tokenizer in `tokenizers` always seems to replace space with `Ġ`, which is the only remaining issue. \r\n\r\n```python\r\ntokenizer_slow = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer_fast = CLIPTokenizerFast.from_pretrained(\"openai/clip-vit-base-patch32\", from_slow=True)\r\n\r\ntext = \"A photo of a cat\"\r\ntokenizer_slow.tokenize(text)\r\n# ['a</w>', 'photo</w>', 'of</w>', 'a</w>', 'cat</w>']\r\n\r\ntokenizer_fast.tokenize(text)\r\n# ['a</w>', 'Ġ', 'photo</w>', 'Ġ', 'of</w>', 'Ġ', 'a</w>', 'Ġ', 'cat</w>']\r\n```\r\n\r\n\r\n\r\nIs there any way to disable this behavior @n1t0 @SaulLu ?\r\n\r\n", "@patil-suraj Hi, I wonder if this issue is solved? When will be this fix becomes official? Thanks!", "I'm really sorry for the delay. I have investigated a bit and I think that unfortunately the last problem is not limited to the fact that spaces are replaced by `Ġ`.\r\n\r\nFor example, here is the output on another example:\r\n```python\r\ntokenizer_slow = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer_fast = CLIPTokenizerFast.from_pretrained(\"openai/clip-vit-base-patch32\", from_slow=True)\r\n\r\ntext = \"A\\n'll 11p223RF☆ho!!to? of a cat\"\r\ntokenizer_slow.tokenize(text)\r\n# ['a</w>', \"'ll</w>\", '1</w>', '1</w>', 'p</w>', '2</w>', '2</w>', '3</w>', 'rf</w>', 'âĺĨ</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']\r\n\r\ntokenizer_fast.tokenize(text)\r\n# ['a</w>', 'Ġ', \"'</w>\", 'll</w>', 'Ġ', '1', '1</w>', 'p</w>', '2', '2', '3</w>', 'rf</w>', 'âĺĨ</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'Ġ', 'of</w>', 'Ġ', 'a</w>', 'Ġ', 'cat</w>']\r\n```\r\n\r\nI think that we also need a pre tokenizer that reproduces the split induced in [this line](https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py#L124) thanks to this regex: `r\"\"\"<\\|startoftext\\|>|<\\|endoftext\\|>|'s|'t|'re|'ve|'m|'ll|'d|[\\p{L}]+|[\\p{N}]|[^\\s\\p{L}\\p{N}]+\"\"\"`. 
I think we could use [`tokenizers.pre_tokenizers.Split`](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.pre_tokenizers.Split) with [tokenizers.pre_tokenizers.Sequence](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.pre_tokenizers.Sequence) but for the moment I couldn't make it work.\r\n\r\nAt this point, the only solution I can propose that comes close (but doesn't match entirely) to the correct behavior is to replace the `tokenizer.pre_tokenizer=pre_tokenizers.ByteLevel(add_prefix_space=False)` line of the `CLIPConverter` class into `convert_slow_tokenizer.py` with :\r\n```python\r\n tokenizer.pre_tokenizer = pre_tokenizers.Sequence(\r\n [\r\n pre_tokenizers.pre_tokenizers.WhitespaceSplit(),\r\n pre_tokenizers.ByteLevel(\r\n add_prefix_space=False,\r\n ),\r\n ]\r\n )\r\n```\r\nThis would give on the previous example: \r\n```\r\ntokenizer_slow = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer_fast = CLIPTokenizerFast.from_pretrained(\"openai/clip-vit-base-patch32\", from_slow=True)\r\n\r\ntext = \"A\\n'll 11p223RF☆ho!!to? of a cat\"\r\ntokenizer_slow.tokenize(text)\r\n# ['a</w>', \"'ll</w>\", '1</w>', '1</w>', 'p</w>', '2</w>', '2</w>', '3</w>', 'rf</w>', 'âĺĨ</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']\r\n\r\ntokenizer_fast.tokenize(text)\r\n# ['a</w>', \"'ll</w>\", '1', '1</w>', 'p</w>', '2', '2', '3</w>', 'rf</w>', 'âĺĨ</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']\r\n```\r\n", "@SaulLu Thanks for providing this temporal solution. I hope this issue could be fixed soon and merged into the huggingface official release by @patil-suraj @n1t0 ", "Thank you for investigating this @SaulLu ! There is one more difference which I'm not sure how to handle in fast tokenizers. \r\nSince CLIP is trained on noisy web alt text, it uses `ftfy` to fix the text which also changes the tokenization.\r\n\r\n@n1t0 Would be nice if you let us know if this is something that can be supported in fast tokenizers.", "I just thought about this issue and I think it would be important to fix it quickly because a user who would use the fast version of this tokenizer could really have bad surprises.\r\n\r\n1. in the very short term, it is probably safer to remove the fast version of the tokenizer from the library. Indeed I think that fixing this tokenizer will require a lot of discussions (or even a new release of the Tokenizers library)\r\n2. I tried to work to create a fast tokenizer as faithful as possible to the slow version in [this PR](https://github.com/huggingface/transformers/pull/15067). Nevertheless, I really need to discuss this fix with you. I explain in more detail the points to discuss in the PR. :smile: \r\n\r\n", "Hey, is this fix as of now?" ]
1,626
1,691
null
CONTRIBUTOR
null
## Environment info

<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help

@patil-suraj, I think you worked on CLIP; maybe you could help me by confirming that this behavior is not normal. If it is and no one can deal with it first, I'd be happy to try to fix it.

<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.

Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1

Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik

Documentation: @sgugger

Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.

HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)

Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh -->

## Information

Model I am using (Bert, XLNet ...): CLIP

## To reproduce

The easiest way to reproduce is to open [this google colab](https://colab.research.google.com/drive/1JzlYtuG4MdAKl8lPI5PkqGcYbdM3N24x?usp=sharing)

Steps to reproduce the behavior:

1. Import the slow and fast CLIP tokenizers from the transformers library and, optionally, the tokenizer of https://github.com/openai/CLIP

```python
from transformers import CLIPTokenizer, CLIPTokenizerFast

tokenizer_slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
```

```python
from CLIP import clip as clip_orig
```

2. Tokenize the same text with the 3 tokenizers

```python
text = "A photo of a cat"
context_length = 77
```

```python
tokens_ids_orig = clip_orig.tokenize(text)
tokens_ids_slow = tokenizer_slow.encode(text, padding="max_length", max_length=context_length, return_tensors='pt')
tokens_ids_fast = tokenizer_fast.encode(text, padding="max_length", max_length=context_length, return_tensors='pt')
```

3. Compare the outputs

```python
(tokens_ids_orig == tokens_ids_slow).sum() == context_length
```

Output: `True`

```python
(tokens_ids_orig == tokens_ids_fast).sum() == context_length
```

<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

## Expected behavior

I would have expected the slow and fast versions to tokenize the text in the same way.

<!-- A clear and concise description of what you would expect to happen. -->
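A shorter probe of the same divergence, sketched alongside the reproduction above:

```python
from transformers import CLIPTokenizer, CLIPTokenizerFast

slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")

text = "A photo of a cat"
print(slow.tokenize(text))  # tokens from the Python BPE implementation
print(fast.tokenize(text))  # tokens from the Rust backend; these differ at the time of writing
```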
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12648/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/12647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12647/comments
https://api.github.com/repos/huggingface/transformers/issues/12647/events
https://github.com/huggingface/transformers/issues/12647
942,006,117
MDU6SXNzdWU5NDIwMDYxMTc=
12,647
`TestMarian_MT_EN::test_batch_generation_mt_en` Failing due to randomly generated tokens
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Traced back to this commit: https://github.com/huggingface/transformers/commit/184ef8ecd05ac783827b196e8d15403820efedf9\r\n\r\nI suspect there is a difference between the upload TF and PT checkpoints", "It seems there's a single difference in the final logits bias:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import MarianMTModel\r\n\r\npt_model = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\")\r\ntf_model = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\", from_tf=True)\r\n\r\npt, tf = pt_model.state_dict(), tf_model.state_dict()\r\n\r\nptf = {}\r\n\r\nfor key, value in pt.items():\r\n ptf[key] = [value]\r\n\r\nfor key, value in tf.items():\r\n if key not in ptf:\r\n print(key, \"not in ptf\")\r\n else:\r\n ptf[key].append(value)\r\n\r\nfor key, value in ptf.items():\r\n _pt, _tf = value\r\n difference = torch.max(torch.abs(_pt - _tf)).tolist()\r\n if difference > 0:\r\n print(key, difference)\r\n\r\n# final_logits_bias 10.176068305969238\r\n```\r\n\r\nSeems systematic, independent of runtime or seed.", "I would say the error comes from the TF checkpoint on the hub, looking forward to your input @patrickvonplaten and @patil-suraj.\r\n\r\nI'll deactivate the test in the meantime.", "This is also the case for the `Helsinki-NLP/opus-mt-en-zh` checkpoint:\r\n\r\n```py\r\n# final_logits_bias 8.724637031555176\r\n```", "And for the `Helsinki-NLP/opus-mt-en-ROMANCE` checkpoint:\r\n```\r\nfinal_logits_bias 11.757145881652832\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,631
null
MEMBER
null
The test fails with the following:

```
_________________ TestMarian_MT_EN.test_batch_generation_mt_en _________________
[gw0] linux -- Python 3.6.9 /usr/local/bin/python

self = <tests.test_modeling_tf_marian.TestMarian_MT_EN testMethod=test_batch_generation_mt_en>

    @slow
    def test_batch_generation_mt_en(self):
>       self._assert_generated_batch_equal_expected()

tests/test_modeling_tf_marian.py:390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_tf_marian.py:366: in _assert_generated_batch_equal_expected
    self.assertListEqual(self.expected_text, generated_words)
E   AssertionError: Lists differ: ['Tou[19 chars] healed a man who was affected by the sad disease of leprosy.'] != ['Tou[19 chars] healed a man who was affected by▁kifkażUnjonik ill.']
E
E   First differing element 0:
E   'Touc[17 chars]s healed a man who was affected by the sad disease of leprosy.'
E   'Touc[17 chars]s healed a man who was affected by▁kifkażUnjonik ill.'
E
E   - ['Touching gently, Jesus healed a man who was affected by the sad disease of '
E   ?                                                          ^^^^^^    ^^^ ^^^^^^^^^
E
E   + ['Touching gently, Jesus healed a man who was affected by▁kifkażUnjonik ill.']
E   ?                                                          ^^^^^ ^^^^^^  ^^^^^^ +
E
E   -  'leprosy.']
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12647/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/12646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12646/comments
https://api.github.com/repos/huggingface/transformers/issues/12646/events
https://github.com/huggingface/transformers/pull/12646
941,996,644
MDExOlB1bGxSZXF1ZXN0Njg3Nzg3NDIz
12,646
Fixed docs
{ "login": "KickItLikeShika", "id": 54319724, "node_id": "MDQ6VXNlcjU0MzE5NzI0", "avatar_url": "https://avatars.githubusercontent.com/u/54319724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KickItLikeShika", "html_url": "https://github.com/KickItLikeShika", "followers_url": "https://api.github.com/users/KickItLikeShika/followers", "following_url": "https://api.github.com/users/KickItLikeShika/following{/other_user}", "gists_url": "https://api.github.com/users/KickItLikeShika/gists{/gist_id}", "starred_url": "https://api.github.com/users/KickItLikeShika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KickItLikeShika/subscriptions", "organizations_url": "https://api.github.com/users/KickItLikeShika/orgs", "repos_url": "https://api.github.com/users/KickItLikeShika/repos", "events_url": "https://api.github.com/users/KickItLikeShika/events{/privacy}", "received_events_url": "https://api.github.com/users/KickItLikeShika/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12646/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12646", "html_url": "https://github.com/huggingface/transformers/pull/12646", "diff_url": "https://github.com/huggingface/transformers/pull/12646.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12646.patch", "merged_at": 1626105793000 }