Schema (one line per column: type, then length range, value range, or number of distinct classes):
url: string, lengths 62–66
repository_url: string, 1 value
labels_url: string, lengths 76–80
comments_url: string, lengths 71–75
events_url: string, lengths 69–73
html_url: string, lengths 50–56
id: int64, 377M–2.15B
node_id: string, lengths 18–32
number: int64, 1–29.2k
title: string, lengths 1–487
user: dict
labels: list
state: string, 2 values
locked: bool, 2 classes
assignee: dict
assignees: list
comments: list
created_at: int64, 1.54k–1.71k
updated_at: int64, 1.54k–1.71k
closed_at: int64, 1.54k–1.71k
author_association: string, 4 values
active_lock_reason: string, 2 values
body: string, lengths 0–234k
reactions: dict
timeline_url: string, lengths 71–75
state_reason: string, 3 values
draft: bool, 2 classes
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/23879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23879/comments
https://api.github.com/repos/huggingface/transformers/issues/23879/events
https://github.com/huggingface/transformers/issues/23879
1,733,255,792
I_kwDOCUB6oc5nT2Jw
23,879
Distillation training for Arabic language
{ "login": "muhammed-saeed", "id": 38116007, "node_id": "MDQ6VXNlcjM4MTE2MDA3", "avatar_url": "https://avatars.githubusercontent.com/u/38116007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muhammed-saeed", "html_url": "https://github.com/muhammed-saeed", "followers_url": "https://api.github.com/users/muhammed-saeed/followers", "following_url": "https://api.github.com/users/muhammed-saeed/following{/other_user}", "gists_url": "https://api.github.com/users/muhammed-saeed/gists{/gist_id}", "starred_url": "https://api.github.com/users/muhammed-saeed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muhammed-saeed/subscriptions", "organizations_url": "https://api.github.com/users/muhammed-saeed/orgs", "repos_url": "https://api.github.com/users/muhammed-saeed/repos", "events_url": "https://api.github.com/users/muhammed-saeed/events{/privacy}", "received_events_url": "https://api.github.com/users/muhammed-saeed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions. This is not a maintained example.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info I encountered two issues while attempting to run the `binarized_data.py` and `train.py` scripts for the Knowledge Distillation of BERT Language Model on the Arabic Language project. Below are the details of each issue: 1. In the `binarized_data.py` script, I had to modify line 83 to make it work. The original line is: ```python dp_file = f"{args.dump_file}.{args.tokenizer_name}.pickle" ``` However, I had to remove the `tokenizer_name` variable and change the line to: ```python dp_file = f"{args.dump_file}.pickle" ``` This change was necessary because the Arabic BERT model name, "asafaya/bert-large-arabic," contains a forward slash ("/"), which caused errors when concatenating it with the `tokenizer_name` variable. 2. In the `train.py` script, I made a modification on line 258. The original line is: ```python args.max_model_input_size = tokenizer.max_model_input_sizes[args.teacher_name] ``` However, I had to change it to: ```python args.max_model_input_size = tokenizer.max_model_input_sizes['bert-large-uncased'] ``` This modification was necessary because I am using different model configurations than those listed in the folder. It would be helpful if the script could be modified to automatically work with the intended config, allowing for more flexibility. Apart from these script modifications, I made the necessary changes to the config files to match the different models I am using. this is understood as I am using a model with a different config than the one listed in the folder, maybe we can modify the script to download and locate the necessary config file automatically. Please let me know if there are any further clarifications needed or if you require additional information to address these issues. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction here is link to the Google colab that has the problem https://colab.research.google.com/drive/1OqSvRNMl0-Z7ScCd6hLbPHMO-ZXT3WEw?usp=sharing ### Expected behavior the model has to start the training smoothly and the script has to be able to handle the model names which contains '/'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23879/timeline
completed
null
null
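The first workaround quoted in the issue above drops the tokenizer name from the pickle path entirely. A less lossy alternative is to sanitize the model identifier before building the path. The sketch below is editorial: the `safe_name` helper and the example values are hypothetical and not part of the distillation scripts.

```python
def safe_name(model_id: str) -> str:
    """Make a Hub model id usable inside a filename."""
    # "asafaya/bert-large-arabic" -> "asafaya__bert-large-arabic"
    return model_id.replace("/", "__")

# Hypothetical values standing in for the argparse options of binarized_data.py.
dump_file = "data/binarized"
tokenizer_name = "asafaya/bert-large-arabic"
dp_file = f"{dump_file}.{safe_name(tokenizer_name)}.pickle"
print(dp_file)  # data/binarized.asafaya__bert-large-arabic.pickle
```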
https://api.github.com/repos/huggingface/transformers/issues/23878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23878/comments
https://api.github.com/repos/huggingface/transformers/issues/23878/events
https://github.com/huggingface/transformers/pull/23878
1,733,254,912
PR_kwDOCUB6oc5RvRYI
23,878
[i18n] Translated "attention.mdx" to Korean
{ "login": "kihoon71", "id": 75935546, "node_id": "MDQ6VXNlcjc1OTM1NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kihoon71", "html_url": "https://github.com/kihoon71", "followers_url": "https://api.github.com/users/kihoon71/followers", "following_url": "https://api.github.com/users/kihoon71/following{/other_user}", "gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}", "starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions", "organizations_url": "https://api.github.com/users/kihoon71/orgs", "repos_url": "https://api.github.com/users/kihoon71/repos", "events_url": "https://api.github.com/users/kihoon71/events{/privacy}", "received_events_url": "https://api.github.com/users/kihoon71/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Translated attention.mdx file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. [[lowercased-header]]) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review?(Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?(Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23878/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23878", "html_url": "https://github.com/huggingface/transformers/pull/23878", "diff_url": "https://github.com/huggingface/transformers/pull/23878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23878.patch", "merged_at": 1686574758000 }
https://api.github.com/repos/huggingface/transformers/issues/23877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23877/comments
https://api.github.com/repos/huggingface/transformers/issues/23877/events
https://github.com/huggingface/transformers/issues/23877
1,733,091,380
I_kwDOCUB6oc5nTOA0
23,877
Cannot reproduce results for Pix2struct on InfographicVQA
{ "login": "Lizw14", "id": 20157670, "node_id": "MDQ6VXNlcjIwMTU3Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/20157670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lizw14", "html_url": "https://github.com/Lizw14", "followers_url": "https://api.github.com/users/Lizw14/followers", "following_url": "https://api.github.com/users/Lizw14/following{/other_user}", "gists_url": "https://api.github.com/users/Lizw14/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lizw14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lizw14/subscriptions", "organizations_url": "https://api.github.com/users/Lizw14/orgs", "repos_url": "https://api.github.com/users/Lizw14/repos", "events_url": "https://api.github.com/users/Lizw14/events{/privacy}", "received_events_url": "https://api.github.com/users/Lizw14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "gentle ping @younesbelkada ", "Hi everyone, \r\nSadly I won't have the bandwidth to properly dig into this right now, @Lizw14 do you still face the same issue when using the main branch of `transformers`?\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```", "@Lizw14 \r\nquickly going back to the issue, can you double check you used the same hyper parameters than the ones presented on the paper? for example what is the sequence length you are using? in what precision do you load the model (fp32, fp16, bf16, int8)?\r\nIdeally can you share the full script you use to reproduce the results of the paper\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,691
1,691
NONE
null
I am using the `pix2struct-infographics-vqa-base` and `pix2struct-infographics-vqa-large` model here and doing inference on InfographicsVQA. However, I get 29.53 ANLS for base and 34.31 ANLS for large, which do not match with the 38.2 and 40.0 results as in the original paper. Could anyone help with this? Here is my inference code: ``` import requests from PIL import Image import torch from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-infographics-vqa-base").to("cuda") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-infographics-vqa-base") image_url = "https://blogs.constantcontact.com/wp-content/uploads/2019/03/Social-Media-Infographic.png" image = Image.open(requests.get(image_url, stream=True).raw) question = "Which social platform has heavy female audience?" inputs = processor(images=image, text=question, return_tensors="pt").to("cuda") predictions = model.generate(**inputs) pred = processor.decode(predictions[0], skip_special_tokens=True) gt = 'pinterest' print(pred) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23877/timeline
completed
null
null
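The maintainer's follow-up asks about precision and sequence length. The sketch below restates the reproduction from the issue with those two knobs made explicit; the `torch_dtype` and `max_patches` values shown are placeholders to experiment with, not the settings used in the Pix2Struct paper.

```python
import requests
import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(
    "google/pix2struct-infographics-vqa-base",
    torch_dtype=torch.float32,  # compare against bf16/fp16 explicitly
).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-infographics-vqa-base")

image_url = "https://blogs.constantcontact.com/wp-content/uploads/2019/03/Social-Media-Infographic.png"
image = Image.open(requests.get(image_url, stream=True).raw)
question = "Which social platform has heavy female audience?"

# max_patches controls the effective input sequence length (2048 is only a placeholder here).
inputs = processor(images=image, text=question, return_tensors="pt", max_patches=2048).to("cuda")
predictions = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(predictions[0], skip_special_tokens=True))
```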
https://api.github.com/repos/huggingface/transformers/issues/23876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23876/comments
https://api.github.com/repos/huggingface/transformers/issues/23876/events
https://github.com/huggingface/transformers/issues/23876
1,733,078,320
I_kwDOCUB6oc5nTK0w
23,876
`.to_dict` does not correctly serialize `torch.dtype` in some cases (e.g., vision models)
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! Indeed, the `PretrainedConfig` class calls `dict_torch_dtype_to_str`, and the `text_config` and `vision_config` inherit from it, so they work fine, indeed, the parent's `torch_dtype` attribute can be modified and we don't use `self.to_dict()` . Thanks for reporting \r\n\r\nThe configs should be automatically tested IMO, this is currently note the case. It seems that for blip, only the text config is tested, which is why this does not fail. 10 models or more are concerned (mostly when `is_composition=True`. \r\n\r\nI'll open a PR to fix this ", "Commenting to prevent it being closed as stale.", "Yep, sorry I'll try to get to the original fix taking the comment into account!", "This was fixed by #25237" ]
1,685
1,692
1,692
CONTRIBUTOR
null
### System Info - `transformers` version: 4.29.1 - Platform: Windows-10 - Python version: 3.8.3 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running: ```python import json from transformers import AutoConfig json_data = AutoConfig.from_pretrained('openai/clip-vit-base-patch16').to_dict() json.dumps(json_data, indent=4) ``` Results in ``` TypeError: Object of type dtype is not JSON serializable ``` --- I have identified this problem with the following models: - `clip` - `sam` - `vision-encoder-decoder` ### Expected behavior torch dtypes should be converted to a string. I believe this is due to these configs redefining their `to_dict` method, without calling `dict_torch_dtype_to_str` on the top-level object. https://github.com/huggingface/transformers/blob/de9255de27abfcae4a1f816b904915f0b1e23cd9/src/transformers/models/clip/configuration_clip.py#L397-L408
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23876/timeline
completed
null
null
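Until the fix mentioned in the last comment (#25237), a user-side workaround is to coerce `torch.dtype` values to strings during serialization. This is a sketch of that workaround, not the library's own fix; the `dtype_safe` helper is hypothetical.

```python
import json

import torch
from transformers import AutoConfig

def dtype_safe(obj):
    # json.dumps calls this only for objects it cannot serialize itself;
    # turn torch dtypes into strings such as "float32" and fail loudly otherwise.
    if isinstance(obj, torch.dtype):
        return str(obj).replace("torch.", "")
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

config_dict = AutoConfig.from_pretrained("openai/clip-vit-base-patch16").to_dict()
print(json.dumps(config_dict, indent=4, default=dtype_safe))
```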
https://api.github.com/repos/huggingface/transformers/issues/23875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23875/comments
https://api.github.com/repos/huggingface/transformers/issues/23875/events
https://github.com/huggingface/transformers/pull/23875
1,733,058,043
PR_kwDOCUB6oc5Run4W
23,875
Changed "perplexity" to "eval_perplexity"
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23875). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
# What does this PR do? Modifies training scripts in examples/pytorch/language-modelling so that `perplexity` is correctly logged to wandb. Since the metrics don't contain an eval_ prefix in the metrics dictionary they are not logged. <!-- Remove if not applicable --> Fixes # https://github.com/huggingface/transformers/issues/23593 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23875", "html_url": "https://github.com/huggingface/transformers/pull/23875", "diff_url": "https://github.com/huggingface/transformers/pull/23875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23875.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23874/comments
https://api.github.com/repos/huggingface/transformers/issues/23874/events
https://github.com/huggingface/transformers/issues/23874
1,732,910,935
I_kwDOCUB6oc5nSh9X
23,874
Code formatting issue with Korean translation of quicktour.mdx
{ "login": "GarrettDaniel", "id": 36463300, "node_id": "MDQ6VXNlcjM2NDYzMzAw", "avatar_url": "https://avatars.githubusercontent.com/u/36463300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GarrettDaniel", "html_url": "https://github.com/GarrettDaniel", "followers_url": "https://api.github.com/users/GarrettDaniel/followers", "following_url": "https://api.github.com/users/GarrettDaniel/following{/other_user}", "gists_url": "https://api.github.com/users/GarrettDaniel/gists{/gist_id}", "starred_url": "https://api.github.com/users/GarrettDaniel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GarrettDaniel/subscriptions", "organizations_url": "https://api.github.com/users/GarrettDaniel/orgs", "repos_url": "https://api.github.com/users/GarrettDaniel/repos", "events_url": "https://api.github.com/users/GarrettDaniel/events{/privacy}", "received_events_url": "https://api.github.com/users/GarrettDaniel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting, feel free to open a PR and ping me 😉 ", "It looks like this issue was resolved on the docs website https://huggingface.co/docs/transformers/v4.30.0/ko/quicktour. Closing this issue" ]
1,685
1,687
1,687
NONE
null
### System Info N/A ### Reproduction In the Korean translation of quicktour.mdx found [here](https://github.com/huggingface/transformers/blob/main/docs/source/ko/quicktour.mdx), there is a small formatting issue in the bash commands to install pytorch and tensorflow. ![image](https://github.com/huggingface/transformers/assets/36463300/3f750ef3-ba13-4977-b8e5-52ec7d8d396e) ### Expected behavior - The install commands should be rendered as code blocks in the markdown file. - The formatting can be fixed by adding a newline/return between the two commands like this: ```bash pip install torch``` ```bash pip install tensorflow``` Furthermore, these commands can be simplified to one line by using the following syntax, which will install both PyTorch and Tensorflow: ```!pip install torch tensorflow``` I'd be happy to make these changes and help with some of the other Korean documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23873/comments
https://api.github.com/repos/huggingface/transformers/issues/23873/events
https://github.com/huggingface/transformers/issues/23873
1,732,910,021
I_kwDOCUB6oc5nShvF
23,873
RWKV bug for 8-bit model fine-tuning.
{ "login": "LetianLee", "id": 73881739, "node_id": "MDQ6VXNlcjczODgxNzM5", "avatar_url": "https://avatars.githubusercontent.com/u/73881739?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LetianLee", "html_url": "https://github.com/LetianLee", "followers_url": "https://api.github.com/users/LetianLee/followers", "following_url": "https://api.github.com/users/LetianLee/following{/other_user}", "gists_url": "https://api.github.com/users/LetianLee/gists{/gist_id}", "starred_url": "https://api.github.com/users/LetianLee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LetianLee/subscriptions", "organizations_url": "https://api.github.com/users/LetianLee/orgs", "repos_url": "https://api.github.com/users/LetianLee/repos", "events_url": "https://api.github.com/users/LetianLee/events{/privacy}", "received_events_url": "https://api.github.com/users/LetianLee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @LetianLee \r\nThanks for the issue! \r\nIn fact, you cannot train a model that has been purely loaded in 8bit. In order to apply fine tuning using 8bit / 4bit models, you need to add adapters on top of the model and train these adapters only. \r\nPlease check out the official example of PEFT: https://github.com/huggingface/peft/blob/main/examples/int8_training/Finetune_opt_bnb_peft.ipynb and adapt it to your needs. You may need to manually specify `target_modules=[\"key\", \"value\", \"receptance\"]` when defining the `LoraConfig`. Please let us know how it goes", "Hi @younesbelkada ,\r\nThank you for your kind reply and explanation. Since this is the case, I will close this ticket as it is not an issue related to Hugging Face/Transformers. Thank you very much for providing the relevant tutorial. I will now proceed to try the Lora fine-tuning. Thanks!", "Thanks so much @LetianLee !" ]
1,685
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The 8-bit model inference works successfully, but after fine-tuning, the model fails when inferring it again. Reproduction: ``` import torch from transformers import AutoTokenizer, RwkvForCausalLM, GenerationConfig from torch.optim import AdamW model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", device_map="auto", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") optim = AdamW(model.parameters(), lr=1e-4) ctx = "Hello my name is Bob" inputs = tokenizer(ctx, return_tensors="pt").to(0) model(inputs["input_ids"]) # ok model.train() outputs = model(inputs["input_ids"], labels=inputs["input_ids"]) loss = outputs.loss loss.backward() optim.step() model.eval() model(inputs["input_ids"]) # failed ``` or see my colab code as follows: https://colab.research.google.com/drive/1l_vNHPd9_Z40dPkhIj5CxgrLhIn1Edyc?usp=sharing ### Expected behavior After fine-tuning, the model should still work properly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23873/timeline
completed
null
null
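The reply above points to PEFT's int8 training example; a minimal sketch of that recipe adapted to the RWKV checkpoint from the issue follows. It assumes a recent `peft` release (for `prepare_model_for_kbit_training`) and takes `target_modules=["key", "value", "receptance"]` from the maintainer's comment.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained(
    "RWKV/rwkv-raven-1b5", device_map="auto", load_in_8bit=True
)

# Freeze the 8-bit base weights and prepare norms/outputs for stable adapter training.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["key", "value", "receptance"],  # assumption taken from the maintainer's comment
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```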
https://api.github.com/repos/huggingface/transformers/issues/23872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23872/comments
https://api.github.com/repos/huggingface/transformers/issues/23872/events
https://github.com/huggingface/transformers/pull/23872
1,732,900,455
PR_kwDOCUB6oc5RuFQ7
23,872
Raise error if loss can't be calculated - ViT MIM
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Currently, `ViTForMaskedImageModeling` will fail when calculating the reconstruction loss if a patch size other than 16 is chosen. This is because the decoder head is parametrized by `config.encoder_stride`, which controls the resolution of the upsampled image. By default, `config.patch_size = config.encoder_stride = 16`. If a user updates the patch size but not the encoder stride to match, the reconstructed image will have a different resolution. This PR adds a warning for the user before the forward pass, explaining why the loss calculation won't work. Fixes #23832 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23872/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23872", "html_url": "https://github.com/huggingface/transformers/pull/23872", "diff_url": "https://github.com/huggingface/transformers/pull/23872.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23872.patch", "merged_at": 1685548914000 }
https://api.github.com/repos/huggingface/transformers/issues/23871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23871/comments
https://api.github.com/repos/huggingface/transformers/issues/23871/events
https://github.com/huggingface/transformers/pull/23871
1,732,728,610
PR_kwDOCUB6oc5RtfKu
23,871
Support shared tensors
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failing test seems unrelated?", "Nice 🔥 ", "Seems like this requires the latest version of safetensors otherwise getting \r\n```python \r\nE RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):\r\nE cannot import name 'storage_ptr' from 'safetensors.torch' (/opt/conda/envs/py39/lib/python3.9/site-packages/safetensors/torch.py)\r\n```\r\nshould probably update the setup? ", "Yes @muellerzr made a PR: #23911 ", "Woops! Thank you @muellerzr !" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? - Fixes #23868 We can uniquely hash the storage by computing `data_ptr()` and `nbytes()` since storages are 1D contiguous buffers. We use that to find tensors that share the same storage. And if we do find them, we put those within the same "block" relying on underlying code to optimize serialization.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23871", "html_url": "https://github.com/huggingface/transformers/pull/23871", "diff_url": "https://github.com/huggingface/transformers/pull/23871.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23871.patch", "merged_at": 1685540550000 }
https://api.github.com/repos/huggingface/transformers/issues/23870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23870/comments
https://api.github.com/repos/huggingface/transformers/issues/23870/events
https://github.com/huggingface/transformers/issues/23870
1,732,633,964
I_kwDOCUB6oc5nReVs
23,870
importing transformers 4.29.2 slows down PyTorch DataLoader's multi-processing significantly
{ "login": "TYTTYTTYT", "id": 30595688, "node_id": "MDQ6VXNlcjMwNTk1Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/30595688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TYTTYTTYT", "html_url": "https://github.com/TYTTYTTYT", "followers_url": "https://api.github.com/users/TYTTYTTYT/followers", "following_url": "https://api.github.com/users/TYTTYTTYT/following{/other_user}", "gists_url": "https://api.github.com/users/TYTTYTTYT/gists{/gist_id}", "starred_url": "https://api.github.com/users/TYTTYTTYT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TYTTYTTYT/subscriptions", "organizations_url": "https://api.github.com/users/TYTTYTTYT/orgs", "repos_url": "https://api.github.com/users/TYTTYTTYT/repos", "events_url": "https://api.github.com/users/TYTTYTTYT/events{/privacy}", "received_events_url": "https://api.github.com/users/TYTTYTTYT/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Both take the same time on my side, so it's not just Transformers but some external library causing the problem. Could you share your full env?", "> Both take the same time on my side, so it's not just Transformers but some external library causing the problem. Could you share your full env?\r\n\r\nThanks for your reply! Here is the env generated by Pytorch env script:\r\n\r\n```\r\nPyTorch version: 2.0.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.2 LTS (x86_64)\r\nGCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Ti\r\nNvidia driver version: 515.65.01\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 43 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nVendor ID: AuthenticAMD\r\nModel name: AMD Ryzen 7 3700X 8-Core Processor\r\nCPU family: 23\r\nModel: 113\r\nThread(s) per core: 2\r\nCore(s) per socket: 8\r\nSocket(s): 1\r\nStepping: 0\r\nFrequency boost: enabled\r\nCPU max MHz: 3600.0000\r\nCPU min MHz: 2200.0000\r\nBogoMIPS: 7199.26\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es\r\nVirtualisation: AMD-V\r\nL1d cache: 256 KiB (8 instances)\r\nL1i cache: 256 KiB (8 instances)\r\nL2 cache: 4 MiB (8 instances)\r\nL3 cache: 32 MiB (2 instances)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of 
relevant libraries:\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==2.0.1\r\n[pip3] torchaudio==2.0.2\r\n[pip3] torchvision==0.15.2\r\n[pip3] triton==2.0.0\r\n[conda] blas 1.0 mkl \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2023.1.0 h6d00ec8_46342 \r\n[conda] mkl-service 2.4.0 py310h5eee18b_1 \r\n[conda] mkl_fft 1.3.6 py310h1128e8f_1 \r\n[conda] mkl_random 1.2.2 py310h1128e8f_1 \r\n[conda] numpy 1.24.3 py310h5f9d8c6_1 \r\n[conda] numpy-base 1.24.3 py310hb5e798b_1 \r\n[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch\r\n[conda] pytorch-cuda 11.7 h778d358_5 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 2.0.2 py310_cu117 pytorch\r\n[conda] torchtriton 2.0.0 py310 pytorch\r\n[conda] torchvision 0.15.2 py310_cu117 pytorch\r\n```\r\n \r\nHere is my conda environment:\r\n\r\n```\r\nname: pt2hfpy310\r\nchannels:\r\n - pytorch\r\n - huggingface\r\n - nvidia\r\n - conda-forge\r\n - defaults\r\ndependencies:\r\n - _libgcc_mutex=0.1=main\r\n - _openmp_mutex=5.1=1_gnu\r\n - abseil-cpp=20211102.0=h27087fc_1\r\n - aiosignal=1.3.1=pyhd8ed1ab_0\r\n - anyio=3.5.0=py310h06a4308_0\r\n - argon2-cffi=21.3.0=pyhd3eb1b0_0\r\n - argon2-cffi-bindings=21.2.0=py310h7f8727e_0\r\n - arrow-cpp=11.0.0=py310h7516544_0\r\n - asttokens=2.0.5=pyhd3eb1b0_0\r\n - async-timeout=4.0.2=pyhd8ed1ab_0\r\n - attrs=23.1.0=pyh71513ae_1\r\n - aws-c-common=0.4.57=he6710b0_1\r\n - aws-c-event-stream=0.1.6=h2531618_5\r\n - aws-checksums=0.1.9=he6710b0_0\r\n - aws-sdk-cpp=1.8.185=hce553d0_0\r\n - babel=2.11.0=py310h06a4308_0\r\n - backcall=0.2.0=pyhd3eb1b0_0\r\n - beautifulsoup4=4.12.2=py310h06a4308_0\r\n - blas=1.0=mkl\r\n - bleach=4.1.0=pyhd3eb1b0_0\r\n - boost-cpp=1.65.1=0\r\n - bottleneck=1.3.5=py310ha9d4c09_0\r\n - brotli=1.0.9=he6710b0_2\r\n - brotlipy=0.7.0=py310h7f8727e_1002\r\n - bzip2=1.0.8=h7b6447c_0\r\n - c-ares=1.19.0=h5eee18b_0\r\n - ca-certificates=2023.01.10=h06a4308_0\r\n - certifi=2023.5.7=py310h06a4308_0\r\n - cffi=1.15.1=py310h5eee18b_3\r\n - charset-normalizer=2.0.4=pyhd3eb1b0_0\r\n - click=8.0.4=py310h06a4308_0\r\n - comm=0.1.2=py310h06a4308_0\r\n - contourpy=1.0.5=py310hdb19cb5_0\r\n - cryptography=39.0.1=py310h9ce1e76_0\r\n - cuda-cudart=11.7.99=0\r\n - cuda-cupti=11.7.101=0\r\n - cuda-libraries=11.7.1=0\r\n - cuda-nvrtc=11.7.99=0\r\n - cuda-nvtx=11.7.91=0\r\n - cuda-runtime=11.7.1=0\r\n - cycler=0.11.0=pyhd3eb1b0_0\r\n - dataclasses=0.8=pyh6d0b6a4_7\r\n - datasets=2.12.0=py_0\r\n - dbus=1.13.18=hb2f20db_0\r\n - debugpy=1.5.1=py310h295c915_0\r\n - decorator=5.1.1=pyhd3eb1b0_0\r\n - defusedxml=0.7.1=pyhd3eb1b0_0\r\n - dill=0.3.6=pyhd8ed1ab_1\r\n - entrypoints=0.4=py310h06a4308_0\r\n - executing=0.8.3=pyhd3eb1b0_0\r\n - expat=2.4.9=h6a678d5_0\r\n - ffmpeg=4.3=hf484d3e_0\r\n - filelock=3.9.0=py310h06a4308_0\r\n - fontconfig=2.14.1=h52c9d5c_1\r\n - fonttools=4.25.0=pyhd3eb1b0_0\r\n - freetype=2.12.1=h4a9f257_0\r\n - frozenlist=1.3.3=py310h5eee18b_0\r\n - fsspec=2023.5.0=pyh1a96a4e_0\r\n - gflags=2.2.2=he1b5a44_1004\r\n - giflib=5.2.1=h5eee18b_3\r\n - glib=2.69.1=he621ea3_2\r\n - glog=0.5.0=h48cff8f_0\r\n - gmp=6.2.1=h295c915_3\r\n - gmpy2=2.1.2=py310heeb90bb_0\r\n - gnutls=3.6.15=he1e5248_0\r\n - grpc-cpp=1.46.1=h33aed49_1\r\n - gst-plugins-base=1.14.1=h6a678d5_1\r\n - gstreamer=1.14.1=h5eee18b_1\r\n - huggingface_hub=0.14.1=py_0\r\n - icu=58.2=hf484d3e_1000\r\n - idna=3.4=py310h06a4308_0\r\n - importlib-metadata=6.0.0=py310h06a4308_0\r\n - importlib_metadata=6.0.0=hd3eb1b0_0\r\n - intel-openmp=2023.1.0=hdb19cb5_46305\r\n - 
ipykernel=6.19.2=py310h2f386ee_0\r\n - ipython=8.12.0=py310h06a4308_0\r\n - ipython_genutils=0.2.0=pyhd3eb1b0_1\r\n - ipywidgets=8.0.4=py310h06a4308_0\r\n - jedi=0.18.1=py310h06a4308_1\r\n - jinja2=3.1.2=py310h06a4308_0\r\n - joblib=1.1.1=py310h06a4308_0\r\n - jpeg=9e=h5eee18b_1\r\n - json5=0.9.6=pyhd3eb1b0_0\r\n - jsonschema=4.17.3=py310h06a4308_0\r\n - jupyter=1.0.0=py310h06a4308_8\r\n - jupyter_client=8.1.0=py310h06a4308_0\r\n - jupyter_console=6.6.3=py310h06a4308_0\r\n - jupyter_core=5.3.0=py310h06a4308_0\r\n - jupyter_server=1.23.4=py310h06a4308_0\r\n - jupyterlab=3.5.3=py310h06a4308_0\r\n - jupyterlab_pygments=0.1.2=py_0\r\n - jupyterlab_server=2.22.0=py310h06a4308_0\r\n - jupyterlab_widgets=3.0.5=py310h06a4308_0\r\n - keyutils=1.6.1=h166bdaf_0\r\n - kiwisolver=1.4.4=py310h6a678d5_0\r\n - krb5=1.19.3=h3790be6_0\r\n - lame=3.100=h7b6447c_0\r\n - lcms2=2.12=h3be6417_0\r\n - ld_impl_linux-64=2.38=h1181459_1\r\n - lerc=3.0=h295c915_0\r\n - libbrotlicommon=1.0.9=h166bdaf_7\r\n - libbrotlidec=1.0.9=h166bdaf_7\r\n - libbrotlienc=1.0.9=h166bdaf_7\r\n - libclang=10.0.1=default_hb85057a_2\r\n - libcublas=11.10.3.66=0\r\n - libcufft=10.7.2.124=h4fbf590_0\r\n - libcufile=1.6.1.9=0\r\n - libcurand=10.3.2.106=0\r\n - libcurl=7.87.0=h91b91d3_0\r\n - libcusolver=11.4.0.1=0\r\n - libcusparse=11.7.4.91=0\r\n - libdeflate=1.17=h5eee18b_0\r\n - libedit=3.1.20191231=he28a2e2_2\r\n - libev=4.33=h516909a_1\r\n - libevent=2.1.12=h8f2d780_0\r\n - libffi=3.4.4=h6a678d5_0\r\n - libgcc-ng=11.2.0=h1234567_1\r\n - libgomp=11.2.0=h1234567_1\r\n - libiconv=1.16=h7f8727e_2\r\n - libidn2=2.3.4=h5eee18b_0\r\n - libllvm10=10.0.1=hbcb73fb_5\r\n - libnghttp2=1.46.0=hce63b2e_0\r\n - libnpp=11.7.4.75=0\r\n - libnvjpeg=11.8.0.2=0\r\n - libpng=1.6.39=h5eee18b_0\r\n - libpq=12.9=h16c4e8d_3\r\n - libprotobuf=3.20.3=he621ea3_0\r\n - libsodium=1.0.18=h7b6447c_0\r\n - libssh2=1.10.0=ha56f1ee_2\r\n - libstdcxx-ng=11.2.0=h1234567_1\r\n - libtasn1=4.19.0=h5eee18b_0\r\n - libthrift=0.15.0=hcc01f38_0\r\n - libtiff=4.5.0=h6a678d5_2\r\n - libunistring=0.9.10=h27cfd23_0\r\n - libuuid=1.41.5=h5eee18b_0\r\n - libwebp=1.2.4=h11a3e52_1\r\n - libwebp-base=1.2.4=h5eee18b_1\r\n - libxcb=1.15=h7f8727e_0\r\n - libxkbcommon=1.0.1=hfa300c1_0\r\n - libxml2=2.9.14=h74e7548_0\r\n - libxslt=1.1.35=h4e12654_0\r\n - lxml=4.9.1=py310h1edc446_0\r\n - lz4-c=1.9.4=h6a678d5_0\r\n - markupsafe=2.1.1=py310h7f8727e_0\r\n - matplotlib=3.7.1=py310h06a4308_1\r\n - matplotlib-base=3.7.1=py310h1128e8f_1\r\n - matplotlib-inline=0.1.6=py310h06a4308_0\r\n - mistune=0.8.4=py310h7f8727e_1000\r\n - mkl=2023.1.0=h6d00ec8_46342\r\n - mkl-service=2.4.0=py310h5eee18b_1\r\n - mkl_fft=1.3.6=py310h1128e8f_1\r\n - mkl_random=1.2.2=py310h1128e8f_1\r\n - mpc=1.1.0=h10f8cd9_1\r\n - mpfr=4.0.2=hb69a4c5_1\r\n - multidict=6.0.2=py310h5eee18b_0\r\n - multiprocess=0.70.14=py310h06a4308_0\r\n - munkres=1.1.4=py_0\r\n - nbclassic=0.5.5=py310h06a4308_0\r\n - nbclient=0.5.13=py310h06a4308_0\r\n - nbconvert=6.5.4=py310h06a4308_0\r\n - nbformat=5.7.0=py310h06a4308_0\r\n - ncurses=6.4=h6a678d5_0\r\n - nest-asyncio=1.5.6=py310h06a4308_0\r\n - nettle=3.7.3=hbbd107a_1\r\n - networkx=2.8.4=py310h06a4308_1\r\n - notebook=6.5.4=py310h06a4308_0\r\n - notebook-shim=0.2.2=py310h06a4308_0\r\n - nspr=4.33=h295c915_0\r\n - nss=3.74=h0370c37_0\r\n - numexpr=2.8.4=py310h85018f9_1\r\n - numpy=1.24.3=py310h5f9d8c6_1\r\n - numpy-base=1.24.3=py310hb5e798b_1\r\n - openh264=2.1.1=h4ff587b_0\r\n - openssl=1.1.1t=h7f8727e_0\r\n - orc=1.7.4=hb3bc3d3_1\r\n - packaging=23.0=py310h06a4308_0\r\n - 
pandas=1.5.3=py310h1128e8f_0\r\n - pandocfilters=1.5.0=pyhd3eb1b0_0\r\n - parso=0.8.3=pyhd3eb1b0_0\r\n - pcre=8.45=h295c915_0\r\n - pexpect=4.8.0=pyhd3eb1b0_3\r\n - pickleshare=0.7.5=pyhd3eb1b0_1003\r\n - pillow=9.4.0=py310h6a678d5_0\r\n - pip=23.0.1=py310h06a4308_0\r\n - platformdirs=2.5.2=py310h06a4308_0\r\n - ply=3.11=py310h06a4308_0\r\n - prometheus_client=0.14.1=py310h06a4308_0\r\n - prompt-toolkit=3.0.36=py310h06a4308_0\r\n - prompt_toolkit=3.0.36=hd3eb1b0_0\r\n - protobuf=3.20.3=py310h6a678d5_0\r\n - psutil=5.9.0=py310h5eee18b_0\r\n - ptyprocess=0.7.0=pyhd3eb1b0_2\r\n - pure_eval=0.2.2=pyhd3eb1b0_0\r\n - pyarrow=11.0.0=py310h468efa6_0\r\n - pycparser=2.21=pyhd3eb1b0_0\r\n - pygments=2.15.1=py310h06a4308_1\r\n - pyopenssl=23.0.0=py310h06a4308_0\r\n - pyparsing=3.0.9=py310h06a4308_0\r\n - pyqt=5.15.7=py310h6a678d5_1\r\n - pyrsistent=0.18.0=py310h7f8727e_0\r\n - pysocks=1.7.1=py310h06a4308_0\r\n - python=3.10.11=h7a1cb2a_2\r\n - python-dateutil=2.8.2=pyhd8ed1ab_0\r\n - python-fastjsonschema=2.16.2=py310h06a4308_0\r\n - python-xxhash=3.0.0=py310h5764c6d_1\r\n - python_abi=3.10=2_cp310\r\n - pytorch=2.0.1=py3.10_cuda11.7_cudnn8.5.0_0\r\n - pytorch-cuda=11.7=h778d358_5\r\n - pytorch-mutex=1.0=cuda\r\n - pytz=2023.3=pyhd8ed1ab_0\r\n - pyyaml=6.0=py310h5eee18b_1\r\n - pyzmq=25.0.2=py310h6a678d5_0\r\n - qt-main=5.15.2=h327a75a_7\r\n - qt-webengine=5.15.9=hd2b0992_4\r\n - qtconsole=5.4.2=py310h06a4308_0\r\n - qtpy=2.2.0=py310h06a4308_0\r\n - qtwebkit=5.212=h4eab89a_4\r\n - re2=2022.04.01=h27087fc_0\r\n - readline=8.2=h5eee18b_0\r\n - regex=2022.7.9=py310h5eee18b_0\r\n - requests=2.29.0=py310h06a4308_0\r\n - sacremoses=master=py_0\r\n - send2trash=1.8.0=pyhd3eb1b0_1\r\n - sentencepiece=0.1.99=py310hdb19cb5_0\r\n - setuptools=66.0.0=py310h06a4308_0\r\n - sip=6.6.2=py310h6a678d5_0\r\n - six=1.16.0=pyhd3eb1b0_1\r\n - snappy=1.1.9=h295c915_0\r\n - sniffio=1.2.0=py310h06a4308_1\r\n - soupsieve=2.4=py310h06a4308_0\r\n - sqlite=3.41.2=h5eee18b_0\r\n - stack_data=0.2.0=pyhd3eb1b0_0\r\n - sympy=1.11.1=py310h06a4308_0\r\n - tbb=2021.8.0=hdb19cb5_0\r\n - terminado=0.17.1=py310h06a4308_0\r\n - tinycss2=1.2.1=py310h06a4308_0\r\n - tk=8.6.12=h1ccaba5_0\r\n - tokenizers=0.11.4=py310h3dcd8bd_1\r\n - toml=0.10.2=pyhd3eb1b0_0\r\n - tomli=2.0.1=py310h06a4308_0\r\n - torchaudio=2.0.2=py310_cu117\r\n - torchtriton=2.0.0=py310\r\n - torchvision=0.15.2=py310_cu117\r\n - tornado=6.2=py310h5eee18b_0\r\n - tqdm=4.65.0=py310h2f386ee_0\r\n - traitlets=5.7.1=py310h06a4308_0\r\n - typing-extensions=4.5.0=py310h06a4308_0\r\n - typing_extensions=4.5.0=py310h06a4308_0\r\n - tzdata=2023c=h04d1e81_0\r\n - urllib3=1.26.15=py310h06a4308_0\r\n - utf8proc=2.6.1=h27cfd23_0\r\n - wcwidth=0.2.5=pyhd3eb1b0_0\r\n - webencodings=0.5.1=py310h06a4308_1\r\n - websocket-client=0.58.0=py310h06a4308_4\r\n - wheel=0.38.4=py310h06a4308_0\r\n - widgetsnbextension=4.0.5=py310h06a4308_0\r\n - xxhash=0.8.0=h7f98852_3\r\n - xz=5.4.2=h5eee18b_0\r\n - yaml=0.2.5=h7b6447c_0\r\n - yarl=1.7.2=py310h5764c6d_2\r\n - zeromq=4.3.4=h2531618_0\r\n - zipp=3.11.0=py310h06a4308_0\r\n - zlib=1.2.13=h5eee18b_0\r\n - zstd=1.5.5=hc292b87_0\r\n - pip:\r\n - aiohttp==3.8.4\r\n - dataclasses-json==0.5.7\r\n - greenlet==2.0.2\r\n - langchain==0.0.180\r\n - marshmallow==3.19.0\r\n - marshmallow-enum==1.5.1\r\n - mpmath==1.2.1\r\n - mypy-extensions==1.0.0\r\n - openai==0.27.7\r\n - openapi-schema-pydantic==1.2.4\r\n - pydantic==1.10.8\r\n - pyqt5-sip==12.11.0\r\n - sqlalchemy==2.0.15\r\n - tenacity==8.2.2\r\n - transformers==4.29.2\r\n - typing-inspect==0.9.0\r\nprefix: 
/home/tai/miniconda3/envs/pt2hfpy310\r\n```\r\n", "This is really puzzling as `import transformers` does not really do anything (it's when you import a specific object that the code of a module is actually executed), so I don't see what could cause this slowdown.", "@sgugger Yeah, it's really puzzling. I think ```import transformers``` would run the codes inside the ```transformers/__init__.py``` before actually using it.\r\n\r\nZailiWang said it may be because \"that transformers have another openmp dependency and the new openmp lib flushed llvm-openmp invoked by torch\" in [anohter issue](https://github.com/pytorch/pytorch/issues/102494#issuecomment-1568727409).", "We do not have an openmp dependency. And if you look at the transformers __init__ you will see that nothing is done there.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <no> no - Using distributed or parallel set-up in script?: <yes> yes ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The issue is firstly report to ```PyTorch```, then I found it's caused by ```transformers``` [Original Issue](https://github.com/pytorch/pytorch/issues/102494) The codes below take 23.6 seconds with only 2 CPU cores fully used, even though I didn't really use the transformers. ``` python import transformers # imported but not used import torch import torchvision.datasets as datasets import torchvision.transforms as transforms trans = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) dataset = datasets.FakeData(size=10000, transform=trans) loader = torch.utils.data.DataLoader( dataset, batch_size=128, shuffle=True, num_workers=12, sampler=None) i = 0 for d in loader: print("Batch {}".format(i)) i += 1 # takes 23.6 seconds ``` And by importing ```torch``` before ```transformers```, the CPU is fully used and only takes 5.4 seconds. ``` python import torch import torchvision.datasets as datasets import torchvision.transforms as transforms import transformers trans = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) dataset = datasets.FakeData(size=10000, transform=trans) loader = torch.utils.data.DataLoader( dataset, batch_size=128, shuffle=True, num_workers=12, sampler=None) i = 0 for d in loader: print("Batch {}".format(i)) i += 1 # take only 5.4 seconds ``` ### Expected behavior The aforementioned issue happens to ```transformers 4.29.2```. I tested 4.26.1 as well and it works fine. I expect the multi-processing DataLoader can fully use my CPU so the data processing could be faster.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23870/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23869/comments
https://api.github.com/repos/huggingface/transformers/issues/23869/events
https://github.com/huggingface/transformers/pull/23869
1,732,632,611
PR_kwDOCUB6oc5RtKba
23,869
Editing issue with pickle def with lambda function
{ "login": "Natyren", "id": 51296182, "node_id": "MDQ6VXNlcjUxMjk2MTgy", "avatar_url": "https://avatars.githubusercontent.com/u/51296182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natyren", "html_url": "https://github.com/Natyren", "followers_url": "https://api.github.com/users/Natyren/followers", "following_url": "https://api.github.com/users/Natyren/following{/other_user}", "gists_url": "https://api.github.com/users/Natyren/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natyren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natyren/subscriptions", "organizations_url": "https://api.github.com/users/Natyren/orgs", "repos_url": "https://api.github.com/users/Natyren/repos", "events_url": "https://api.github.com/users/Natyren/events{/privacy}", "received_events_url": "https://api.github.com/users/Natyren/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? In this PR, I address the problem of pickling the constant LR scheduler, which fails during the process (potentially during multi-GPU training, as observed in my case) due to the presence of a lambda function within it. Fixes #23865 (issue)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23869", "html_url": "https://github.com/huggingface/transformers/pull/23869", "diff_url": "https://github.com/huggingface/transformers/pull/23869.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23869.patch", "merged_at": 1685467598000 }
https://api.github.com/repos/huggingface/transformers/issues/23868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23868/comments
https://api.github.com/repos/huggingface/transformers/issues/23868/events
https://github.com/huggingface/transformers/issues/23868
1,732,570,726
I_kwDOCUB6oc5nRO5m
23,868
Avoid saving tied weights with sharded checkpoints
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[ { "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false } ]
[ "I don't think that developing the logic that would avoid this is really worth the time it will require, but we can leave this issue as a reference point.", "Hum I don't think it should take that long. I can have a go at it. I do think it's an easy win." ]
1,685
1,685
1,685
MEMBER
null
It seems that when sharding a checkpoint we untie weights which makes them take more space

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Config

config = GPT2Config()
model = GPT2LMHeadModel(config)
assert id(model.transformer.wte.weight) == id(model.lm_head.weight)
model.save_pretrained("gpt2-tied-weights")

config.tie_word_embeddings = False
model = GPT2LMHeadModel(config)
assert id(model.transformer.wte.weight) != id(model.lm_head.weight)
model.save_pretrained("gpt2-untied-weights")

config = GPT2Config()
model = GPT2LMHeadModel(config)
assert id(model.transformer.wte.weight) == id(model.lm_head.weight)
model.save_pretrained("gpt2-tied-weights-sharded", max_shard_size="100MB")

config.tie_word_embeddings = False
model = GPT2LMHeadModel(config)
assert id(model.transformer.wte.weight) != id(model.lm_head.weight)
model.save_pretrained("gpt2-untied-weights-sharded", max_shard_size="100MB")
```

When checking the space taken by these checkpoints:

```
$ du -sh gpt2*
475M    gpt2-tied-weights
622M    gpt2-tied-weights-sharded   # MUST BE 475M
622M    gpt2-untied-weights
622M    gpt2-untied-weights-sharded
```

cc @ArthurZucker @thomasw21
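A small sketch (not the library's sharding code) of how one could verify that the tied parameters share storage before saving, which is exactly the information a de-duplicating shard writer would need:

```python
# Sketch only: group state_dict entries that point at the same storage.
# This illustrates the tied-weight duplication; it is not the transformers
# implementation of sharded saving.
from collections import defaultdict

from transformers import GPT2Config, GPT2LMHeadModel

model = GPT2LMHeadModel(GPT2Config())

shared = defaultdict(list)
for name, tensor in model.state_dict().items():
    shared[tensor.data_ptr()].append(name)

for names in shared.values():
    if len(names) > 1:
        # Expected to include ['transformer.wte.weight', 'lm_head.weight']
        print("parameters sharing storage:", names)
```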
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23868/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23868/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23867/comments
https://api.github.com/repos/huggingface/transformers/issues/23867/events
https://github.com/huggingface/transformers/pull/23867
1,732,566,083
PR_kwDOCUB6oc5Rs7-T
23,867
[wip: test doc-builder]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,700
1,688
CONTRIBUTOR
null
Closes #23625 Testing https://github.com/huggingface/doc-builder/pull/373
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23867/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23867/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23867", "html_url": "https://github.com/huggingface/transformers/pull/23867", "diff_url": "https://github.com/huggingface/transformers/pull/23867.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23867.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23866/comments
https://api.github.com/repos/huggingface/transformers/issues/23866/events
https://github.com/huggingface/transformers/pull/23866
1,732,562,586
PR_kwDOCUB6oc5Rs7M_
23,866
merge main
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23866). All of your documentation changes will be reflected on that endpoint." ]
1,685
1,685
1,685
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23866", "html_url": "https://github.com/huggingface/transformers/pull/23866", "diff_url": "https://github.com/huggingface/transformers/pull/23866.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23866.patch", "merged_at": 1685462735000 }
https://api.github.com/repos/huggingface/transformers/issues/23865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23865/comments
https://api.github.com/repos/huggingface/transformers/issues/23865/events
https://github.com/huggingface/transformers/issues/23865
1,732,539,456
I_kwDOCUB6oc5nRHRA
23,865
Possible pickle issues
{ "login": "Natyren", "id": 51296182, "node_id": "MDQ6VXNlcjUxMjk2MTgy", "avatar_url": "https://avatars.githubusercontent.com/u/51296182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natyren", "html_url": "https://github.com/Natyren", "followers_url": "https://api.github.com/users/Natyren/followers", "following_url": "https://api.github.com/users/Natyren/following{/other_user}", "gists_url": "https://api.github.com/users/Natyren/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natyren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natyren/subscriptions", "organizations_url": "https://api.github.com/users/Natyren/orgs", "repos_url": "https://api.github.com/users/Natyren/repos", "events_url": "https://api.github.com/users/Natyren/events{/privacy}", "received_events_url": "https://api.github.com/users/Natyren/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Would like to open a PR with your fix?", "yes, is it okay?" ]
1,685
1,685
1,685
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/af2aac51fc1c59237ff7228908ace2cd8fc0d9a6/src/transformers/optimization.py#L49

Attempting to pickle this scheduler can fail because the lambda function it contains cannot be pickled. I suggest the following solution for this problem:

```
def get_constant_lambda(_):
    return 1


def get_constant_schedule(optimizer: Optimizer, last_epoch: int = -1):
    """
    Create a schedule with a constant learning rate, using the learning rate set in optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]):
            The optimizer for which to schedule the learning rate.
        last_epoch (`int`, *optional*, defaults to -1):
            The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    return LambdaLR(optimizer, get_constant_lambda, last_epoch=last_epoch)
```
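For context, a standalone illustration (not taken from the issue) of why a named module-level function pickles while a lambda does not:

```python
import pickle


def get_constant_lambda(_):
    return 1


constant_lambda = lambda _: 1  # same behaviour, but anonymous

pickle.dumps(get_constant_lambda)  # works: functions are pickled by reference to their name

try:
    pickle.dumps(constant_lambda)
except (pickle.PicklingError, AttributeError) as err:
    # A lambda has no resolvable qualified name, so pickling fails.
    print("lambda cannot be pickled:", err)
```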
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23865/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23864/comments
https://api.github.com/repos/huggingface/transformers/issues/23864/events
https://github.com/huggingface/transformers/issues/23864
1,732,522,010
I_kwDOCUB6oc5nRDAa
23,864
Mychatter
{ "login": "typ11", "id": 24231148, "node_id": "MDQ6VXNlcjI0MjMxMTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/24231148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/typ11", "html_url": "https://github.com/typ11", "followers_url": "https://api.github.com/users/typ11/followers", "following_url": "https://api.github.com/users/typ11/following{/other_user}", "gists_url": "https://api.github.com/users/typ11/gists{/gist_id}", "starred_url": "https://api.github.com/users/typ11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/typ11/subscriptions", "organizations_url": "https://api.github.com/users/typ11/orgs", "repos_url": "https://api.github.com/users/typ11/repos", "events_url": "https://api.github.com/users/typ11/events{/privacy}", "received_events_url": "https://api.github.com/users/typ11/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
### Model description I am still learning so the content is unclear even to me ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23864/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/23863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23863/comments
https://api.github.com/repos/huggingface/transformers/issues/23863/events
https://github.com/huggingface/transformers/pull/23863
1,732,352,820
PR_kwDOCUB6oc5RsNwg
23,863
#23388 Issue: Update RoBERTa configuration
{ "login": "vijethmoudgalya", "id": 33093576, "node_id": "MDQ6VXNlcjMzMDkzNTc2", "avatar_url": "https://avatars.githubusercontent.com/u/33093576?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vijethmoudgalya", "html_url": "https://github.com/vijethmoudgalya", "followers_url": "https://api.github.com/users/vijethmoudgalya/followers", "following_url": "https://api.github.com/users/vijethmoudgalya/following{/other_user}", "gists_url": "https://api.github.com/users/vijethmoudgalya/gists{/gist_id}", "starred_url": "https://api.github.com/users/vijethmoudgalya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vijethmoudgalya/subscriptions", "organizations_url": "https://api.github.com/users/vijethmoudgalya/orgs", "repos_url": "https://api.github.com/users/vijethmoudgalya/repos", "events_url": "https://api.github.com/users/vijethmoudgalya/events{/privacy}", "received_events_url": "https://api.github.com/users/vijethmoudgalya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "_The documentation is not available anymore as the PR was closed or merged._", "Hey! Thanks for opening the PR, let's just re-run the CI tests" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #23388 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23863/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23863", "html_url": "https://github.com/huggingface/transformers/pull/23863", "diff_url": "https://github.com/huggingface/transformers/pull/23863.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23863.patch", "merged_at": 1685458420000 }
https://api.github.com/repos/huggingface/transformers/issues/23862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23862/comments
https://api.github.com/repos/huggingface/transformers/issues/23862/events
https://github.com/huggingface/transformers/pull/23862
1,732,337,421
PR_kwDOCUB6oc5RsKYf
23,862
Update collating_graphormer.py
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
MEMBER
null
# What does this PR do? Fixes #23697 ## Who can review? @ydshieh ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23862", "html_url": "https://github.com/huggingface/transformers/pull/23862", "diff_url": "https://github.com/huggingface/transformers/pull/23862.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23862.patch", "merged_at": 1685456601000 }
https://api.github.com/repos/huggingface/transformers/issues/23861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23861/comments
https://api.github.com/repos/huggingface/transformers/issues/23861/events
https://github.com/huggingface/transformers/pull/23861
1,732,286,765
PR_kwDOCUB6oc5Rr_Wn
23,861
[from_pretrained] improve the error message when `_no_split_modules` is not defined
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, though black would format it! ", "Tests are again unrealted to the PR, will merge once the doc is built" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? As a lot of issues seem to have appeared related to this, the warning is improved. Addresses #23816
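For readers who hit the improved message, a minimal sketch of what the attribute looks like on a custom model; the class and layer names below are invented for illustration only.

```python
# Hypothetical model class: MyModel / MyDecoderLayer are made-up names.
# The point is only where _no_split_modules is declared.
from transformers import PretrainedConfig, PreTrainedModel


class MyModel(PreTrainedModel):
    config_class = PretrainedConfig
    # Blocks listed here are kept whole on a single device when device_map="auto" is used.
    _no_split_modules = ["MyDecoderLayer"]
```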
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23861", "html_url": "https://github.com/huggingface/transformers/pull/23861", "diff_url": "https://github.com/huggingface/transformers/pull/23861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23861.patch", "merged_at": 1685459534000 }
https://api.github.com/repos/huggingface/transformers/issues/23860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23860/comments
https://api.github.com/repos/huggingface/transformers/issues/23860/events
https://github.com/huggingface/transformers/issues/23860
1,732,183,314
I_kwDOCUB6oc5nPwUS
23,860
Deepspeed unable to resume training on Peft
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, there, I got same error with deepspeed. training noramlly and resume got `Can't find a valid checkpoint`\r\n\r\nfirst I try resume_from_checkpoint with `out/lora-Vicuna-chat` (output_path) got `Can't find a valid checkpoint`\r\nthen I send `out/lora-Vicuna-chat/checkpoint-6000` I can not load the lora weights........\r\n\r\n```\r\n\"base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight\", \r\n\"base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.lora_A.default.weight\", \r\n\"base_model.model.model.layers.31.self_attn.o_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.rotary_emb.inv_freq\", \"base_model.model.model.layers.31.mlp.gate_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.gate_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.gate_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.mlp.down_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.down_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.mlp.up_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.up_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.up_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.input_layernorm.weight\", \r\n Unexpected key(s) in state_dict:\"base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight\", \"base_model.model.model.layers.31.self_attn.q_proj.lora_B.weight\", \"base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight\", \r\n\"base_model.model.model.layers.31.self_attn.k_proj.lora_B.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.lora_B.weight\", \r\n\"base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.lora_B.weight\", \"base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight\", \r\n\"base_model.model.model.layers.31.mlp.gate_proj.lora_B.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_A.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_B.weight\", \r\n```\r\n\r\nthe model with some suffix `default`, but samed model didn't have......\r\n\r\nI am confused so much", "cc @pacman100 ", "I am sorry but I found it might related about this:\r\n\r\n```\r\nmodel.state_dict = (\r\n lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())\r\n).__get__(model, type(model))\r\n\r\nif torch.__version__ >= \"2\" and sys.platform != \"win32\":\r\n model = torch.compile(model)\r\n\r\nprint(\"\\n If there's a warning about missing keys above, please disregard :)\")\r\n\r\ntrainer.train(resume_from_checkpoint=args.resume_from_checkpoint)\r\n```\r\n\r\nI am replace model_stateduct after create Trainer, digging into code, found that `get_peft_model_state_dict` will replace the peft model sate keynname with some {adapter_name} as suffix.\r\n\r\nDoes this line of code must before create trainer? There is really lack documentation mentationed about this.\r\n\r\nIf so, then why must need users to do this manually when resume or same? 
\r\n\r\nAnd when using PeftModel.from_pretrained, it actually can set_peft_model_statedict automatically.....\r\n\r\nThese behaviour really makes me very confused.", "@sgugger Sorry for pin again, but this problem obstacles me and make me very confused, please help me clarify, I made a clear code analysis to adress this problem: https://github.com/huggingface/peft/issues/746", "Hello, looking into this and https://github.com/huggingface/peft/issues/746", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,690
1,689
NONE
null
this was my main code: ```python parser = argparse.ArgumentParser() parser.add_argument("--wandb", action="store_true", default=False) parser.add_argument("--prompt_type", type=str, default="chat") parser.add_argument("--data_path", type=str, default="merge.json") parser.add_argument("--output_path", type=str, default="out/lora-Vicuna-chat") parser.add_argument("--model_path", type=str, default="decapoda-research/llama-7b-hf") parser.add_argument("--num_epoch", type=int, default=3) parser.add_argument("--micro_batch", type=int, default=4) parser.add_argument("--total_batch", type=int, default=128) parser.add_argument("--log_steps", type=int, default=100) parser.add_argument("--eval_steps", type=int, default=200) parser.add_argument("--save_steps", type=int, default=200) parser.add_argument("--warmup_ratio", type=float, default=0.05) parser.add_argument("--test_size", type=int, default=10) parser.add_argument("--resume_from_checkpoint", type=str, default=None) parser.add_argument("--lora_remote_checkpoint", type=str, default=None) parser.add_argument("--ignore_data_skip", type=bool, default=False) parser.add_argument("--int8_train", type=bool, default=False) parser.add_argument("--deepspeed", type=str, default=False) args = parser.parse_args() if not args.wandb: os.environ["WANDB_MODE"] = "disable" MICRO_BATCH_SIZE = args.micro_batch # this could actually be 5 but i like powers of 2 BATCH_SIZE = args.total_batch MAX_STEPS = None GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE EPOCHS = args.num_epoch LEARNING_RATE = 3e-4 # the Karpathy constant # CUTOFF_LEN = 2048 CUTOFF_LEN = 512 LORA_R = 8 LORA_ALPHA = 16 LORA_DROPOUT = 0.05 VAL_SET_SIZE = args.test_size # 2000 TARGET_MODULES = [ "q_proj", "v_proj", "k_proj", "o_proj", "down_proj", "gate_proj", "up_proj", ] DATA_PATH = args.data_path OUTPUT_DIR = args.output_path # "lora-Vicuna" device_map = "auto" world_size = int(os.environ.get("WORLD_SIZE", 1)) ddp = world_size != 1 if ddp: device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} GRADIENT_ACCUMULATION_STEPS = GRADIENT_ACCUMULATION_STEPS // world_size # we must make sure batch_size and gradient_accumulation_steps not changed for resuming training. if args.resume_from_checkpoint: s_ = utils.check_args_on_resume(args) print(f'Resume args check status: {s_}') # checkpoint = os.path.join(args.resume_from_checkpoint, 'pytorch_model.bin') logger = utils.set_file_logger(__name__, OUTPUT_DIR) # 1. load dataset logger.info(f">>> processing data from {DATA_PATH}") logger.info(f">>> using {args}") train_tokenizer = LlamaTokenizer.from_pretrained(args.model_path, add_eos_token=True) assert train_tokenizer.eos_token_id == 2, "Tokenizer eos is wrong!!!" # unk. we want this to be different from the eos token train_tokenizer.pad_token_id = 0 # cannot use eos in generation! 
# tokenizer.padding_side = "left" # Allow batched inference test_tokenizer = LlamaTokenizer.from_pretrained(args.model_path) if args.prompt_type == "instruct": PROMPT = prompt.instruct_prompt(train_tokenizer, CUTOFF_LEN) elif args.prompt_type == "chat": PROMPT = prompt.chat_prompt(train_tokenizer, CUTOFF_LEN) else: raise Exception("not support") data = load_dataset("json", data_files=DATA_PATH) start = random.randint(1, 100) examples = Dataset.from_dict(data["train"][start : start + 5]).map( PROMPT.preprocess_train ) for example in examples: logger.info( f'>>> using prompt {args.prompt_type}, prompt example:\n { train_tokenizer.decode(example["input_ids"]) }' ) logger.info( f'>>> tokenizer labels: { train_tokenizer.decode([ 0 if l==-100 else l for l in example["labels"]])}' ) logger.info( f'>>> tokenizer example: { example["input_ids"][:10] }...{ example["input_ids"][-10:]}' ) num_proc = os.cpu_count() if VAL_SET_SIZE > 0: train_val = data["train"].train_test_split( test_size=VAL_SET_SIZE, shuffle=True, seed=42 ) train_data = ( train_val["train"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) ) val_data = ( train_val["test"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) ) else: train_data = data["train"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) val_data = None now_max_steps = max((len(data["train"]) - VAL_SET_SIZE) // BATCH_SIZE * EPOCHS, EPOCHS) logger.info(f">>> load model from {args.model_path}") model = LlamaForCausalLM.from_pretrained( args.model_path, load_in_8bit=args.int8_train, device_map=device_map, torch_dtype=torch.float16, ) if args.int8_train: model = prepare_model_for_int8_training(model) config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=TARGET_MODULES, lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) class CustomCallback(TrainerCallback): def __init__(self, trainer) -> None: super().__init__() self.trainer = trainer self.generation_config = GenerationConfig( temperature=1.0, top_p=0.75, top_k=40, num_beams=2, bos_token_id=train_tokenizer.bos_token_id, eos_token_id=train_tokenizer.eos_token_id, pad_token_id=train_tokenizer.pad_token_id, max_new_tokens=1024, # max_length=max_new_tokens+input_sequence min_new_tokens=1, # min_length=min_new_tokens+input_sequence bad_words_ids=test_tokenizer( ["\n\nUser:", "\n\nAssistant:"], add_special_tokens=False ).input_ids, ) self.repetition_penalty = 1.3 self.logger = utils.set_file_logger( "transformers.trainer", trainer.args.output_dir ) def on_log(self, args, state, control, logs, **kwargs): logger.info(logs) model.print_trainable_parameters() print(f"peft config of model: {model.peft_config}") logger.info(f"model.modules_to_save: {model.modules_to_save}") old_state_dict = model.state_dict model.state_dict = ( lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict()) ).__get__(model, type(model)) if torch.__version__ >= "2" and sys.platform != "win32": # model = torch.compile(model) pass model.save_pretrained(args.output_path) # print(f"now FUCK model s: {model.state_dict().keys()}") # print(f"{torch.load(os.path.join(args.resume_from_checkpoint, 'pytorch_model.bin')).keys()}") trainer = transformers.Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, warmup_ratio=args.warmup_ratio, num_train_epochs=EPOCHS, # max_steps=MAX_STEPS, 
learning_rate=LEARNING_RATE, fp16=True, logging_steps=args.log_steps, logging_first_step=True, # convenient evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no", save_strategy="steps", save_total_limit=2, eval_steps=args.eval_steps if VAL_SET_SIZE > 0 else None, save_steps=args.save_steps, output_dir=OUTPUT_DIR, load_best_model_at_end=True if VAL_SET_SIZE > 0 else False, ddp_find_unused_parameters=False if ddp else None, report_to="wandb" if args.wandb else [], ignore_data_skip=args.ignore_data_skip, deepspeed=args.deepspeed, ), data_collator=PROMPT.data_collator(), ) trainer.add_callback(CustomCallback(trainer)) model.config.use_cache = False trainer.train(resume_from_checkpoint=args.resume_from_checkpoint) model.save_pretrained(OUTPUT_DIR) ``` the model training is OK, model save is OK Got output like this: ``` (base) ➜ checkpoint-1200 git:(main) ll total 115M -rw-r--r-- 1 root root 77M May 30 20:37 aa drwxr-xr-x 2 root root 268M May 30 17:30 global_step1200 -rw-r--r-- 1 root root 15 May 30 17:30 latest -rw-r--r-- 1 root root 39M May 30 17:30 pytorch_model.bin -rw-r--r-- 1 root root 16K May 30 17:30 rng_state_0.pth -rw-r--r-- 1 root root 16K May 30 17:30 rng_state_1.pth -rw-r--r-- 1 root root 3.1K May 30 17:30 trainer_state.json -rw-r--r-- 1 root root 5.0K May 30 17:30 training_args.bin -rwxr--r-- 1 root root 19K May 30 17:30 zero_to_fp32.py ``` but somehow I can't resume the checkpoint, From my limited knowledge, resume should send same as output path, and inside output path, we might have checkpiint-800 checkpint-1600 etc. So I just resume from output_path. Then it says ValueError: Can't find a valid checkpoint at out/lora-Vicuna-chat/ Why????? I try to send a path like `out/lora-Vicuna-chat/checkpoint-600`, but also failed ,so strange
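Independent of the DeepSpeed question, one way to avoid the "Can't find a valid checkpoint" error is to resolve the newest `checkpoint-*` folder explicitly before resuming; a sketch is below (the path is a placeholder for the run's `output_dir`, and `trainer` is assumed to be the Trainer built above).

```python
# Sketch: locate the latest checkpoint-XXX directory and pass it to trainer.train().
import os

from transformers.trainer_utils import get_last_checkpoint

output_dir = "out/lora-Vicuna-chat"  # placeholder: the run's output_dir
last_checkpoint = get_last_checkpoint(output_dir) if os.path.isdir(output_dir) else None
print("resuming from:", last_checkpoint)
# trainer.train(resume_from_checkpoint=last_checkpoint)
```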
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23859/comments
https://api.github.com/repos/huggingface/transformers/issues/23859/events
https://github.com/huggingface/transformers/issues/23859
1,731,984,924
I_kwDOCUB6oc5nO_4c
23,859
❗ Bug for compute_transition_scores in generation
{ "login": "Hannibal046", "id": 38466901, "node_id": "MDQ6VXNlcjM4NDY2OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hannibal046", "html_url": "https://github.com/Hannibal046", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "repos_url": "https://api.github.com/users/Hannibal046/repos", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@gante @ArthurZucker Hi, would you mind taking some time to check this?", "Sure! Can you share the full traceback of the error that you stumbled upon? ", "Thanks for help. The full traceback is here:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nCell In[1], line 19\r\n 9 inputs = tokenizer(batch,return_tensors=\"pt\")\r\n 10 outputs = model.generate(\r\n 11 **inputs, \r\n 12 forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang],\r\n (...)\r\n 17 num_beams = 5,\r\n 18 )\r\n---> 19 transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True)\r\n\r\nFile [/anaconda/envs/llmt/lib/python3.8/site-packages/transformers/generation/utils.py:1086](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2241313030227d.vscode-resource.vscode-cdn.net/anaconda/envs/llmt/lib/python3.8/site-packages/transformers/generation/utils.py:1086), in GenerationMixin.compute_transition_scores(self, sequences, scores, beam_indices, normalize_logits)\r\n 1084 # 7. Define which indices contributed to scores\r\n 1085 cut_idx = sequences.shape[-1] - max_beam_length\r\n-> 1086 indices = sequences[:, cut_idx:] + beam_sequence_indices\r\n 1088 # 8. Compute scores\r\n 1089 transition_scores = scores.gather(0, indices)\r\n\r\nRuntimeError: The size of tensor a (2) must match the size of tensor b (23) at non-singleton dimension 1\r\n```", "BTW, this RuntimeError doesn't always happen. For example, for input like this, this snippet works fine.\r\n```\r\nbatch = ['«Մենք հիմա ունենք 4 ամսական մկներ, որոնք, նախկինում շաքարային դիաբետ ունենալով, այժմ չունեն այն,- ավելացրեց նա։»']\r\n```", "Hey @Hannibal046 👋 \r\n\r\nAs stated in the [docs](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores), when `num_beams>1`, you need to pass the `beam_indices` argument to `compute_transition_scores()`. `beam_indices` is part of the output in generate with beam search.\r\n\r\nHere's a working snippet: \r\n```py\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nsrc_lang = 'hye_Armn'\r\ntrg_lang = 'eng_Latn'\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-600M\", src_lang='hye_Armn')\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-600M\")\r\nbatch = ['Հավելյալ 300-ը վագոնների թիվը դարձնում է 1,300, որոնց նպատակն է թեթևացնել գերծանրաբեռնվածությունը:']\r\n\r\ninputs = tokenizer(batch, return_tensors=\"pt\")\r\noutputs = model.generate(\r\n **inputs,\r\n forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang],\r\n max_length=100,\r\n return_dict_in_generate=True,\r\n output_scores=True,\r\n num_return_sequences = 5,\r\n num_beams = 5,\r\n)\r\ntransition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=True)\r\nprint(transition_scores)\r\n```", "@gante Thanks so much! Appreciate your work!", "@gante we can maybe raise an error if indices are not properly passed? ", "@ArthurZucker Sadly that is not possible without an API change :( We do get an exception when `num_beams` and `num_return_sequences` are used together in `generate`, but not when `num_beams` is used alone -- the output format is the same as a batched input, no way to detect whether it comes from beam search or not.\r\n\r\nE.g. 
this snippet runs (and it should throw an error)\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\ninputs = tokenizer([\"The quick brown\"], return_tensors=\"pt\")\r\ngen_out = model.generate(**inputs, num_beams=5, do_sample=False, return_dict_in_generate=True, output_scores=True)\r\ntransition_scores = model.compute_transition_scores(gen_out.sequences, gen_out.scores, normalize_logits=True)\r\n```\r\n\r\nThe solution would be e.g. to pass the entire outputs of generate into `compute_transition_scores`. But that's a bigger change that implies a deprecation cycle (that I'm not sure is worth going through 🤔 )" ]
1,685
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-1038-azure-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer src_lang = 'hye_Armn' trg_lang = 'eng_Latn' tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M",src_lang='hye_Armn') model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") batch = ['Հավելյալ 300-ը վագոնների թիվը դարձնում է 1,300, որոնց նպատակն է թեթևացնել գերծանրաբեռնվածությունը:'] inputs = tokenizer(batch,return_tensors="pt") outputs = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang], max_length=100, return_dict_in_generate=True, output_scores=True, num_return_sequences = 5, num_beams = 5, ) transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True) ``` ### Expected behavior transition_scores are computed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23859/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23858/comments
https://api.github.com/repos/huggingface/transformers/issues/23858/events
https://github.com/huggingface/transformers/issues/23858
1,731,876,788
I_kwDOCUB6oc5nOle0
23,858
Space CPU Basic and Nvidia T4 - small should be FREE FREE FREE
{ "login": "pure-rgb", "id": 45315076, "node_id": "MDQ6VXNlcjQ1MzE1MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/45315076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pure-rgb", "html_url": "https://github.com/pure-rgb", "followers_url": "https://api.github.com/users/pure-rgb/followers", "following_url": "https://api.github.com/users/pure-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/pure-rgb/gists{/gist_id}", "starred_url": "https://api.github.com/users/pure-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pure-rgb/subscriptions", "organizations_url": "https://api.github.com/users/pure-rgb/orgs", "repos_url": "https://api.github.com/users/pure-rgb/repos", "events_url": "https://api.github.com/users/pure-rgb/events{/privacy}", "received_events_url": "https://api.github.com/users/pure-rgb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
### Feature request

Look at this horrible chart, from [this page](https://huggingface.co/pricing#spaces). ![image](https://github.com/huggingface/transformers/assets/45315076/7b781ad5-f5f6-4469-a845-c408fb126bd7) The CPU basic and T4 small should be completely free.

### Motivation

- I have a code example (gradio web app) and I want to upload it to a huggingface space. The code is minimal but has an API-level issue when running on CPU, so I need a minimal GPU to run it. Right now I can't run it on a space because there is no free GPU.
- But I can easily run this app using google Colab, which provides a free T4. In summary, I git clone the huggingface space repo into google colab and run my code there.
- How is this good for huggingface's business?
- How is this good for end-user demand?

Please reconsider.

### Your contribution

Don't ask me for sponsor ;)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23858/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23857/comments
https://api.github.com/repos/huggingface/transformers/issues/23857/events
https://github.com/huggingface/transformers/pull/23857
1,731,858,443
PR_kwDOCUB6oc5RqhS2
23,857
Added time-series blogs to the models
{ "login": "elisim", "id": 17675462, "node_id": "MDQ6VXNlcjE3Njc1NDYy", "avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elisim", "html_url": "https://github.com/elisim", "followers_url": "https://api.github.com/users/elisim/followers", "following_url": "https://api.github.com/users/elisim/following{/other_user}", "gists_url": "https://api.github.com/users/elisim/gists{/gist_id}", "starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisim/subscriptions", "organizations_url": "https://api.github.com/users/elisim/orgs", "repos_url": "https://api.github.com/users/elisim/repos", "events_url": "https://api.github.com/users/elisim/events{/privacy}", "received_events_url": "https://api.github.com/users/elisim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
@kashif @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23857", "html_url": "https://github.com/huggingface/transformers/pull/23857", "diff_url": "https://github.com/huggingface/transformers/pull/23857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23857.patch", "merged_at": 1685723554000 }
https://api.github.com/repos/huggingface/transformers/issues/23856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23856/comments
https://api.github.com/repos/huggingface/transformers/issues/23856/events
https://github.com/huggingface/transformers/pull/23856
1,731,852,047
PR_kwDOCUB6oc5Rqf7J
23,856
Adds AutoProcessor.from_pretrained support for MCTCTProcessor
{ "login": "Ubadub", "id": 1286898, "node_id": "MDQ6VXNlcjEyODY4OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ubadub", "html_url": "https://github.com/Ubadub", "followers_url": "https://api.github.com/users/Ubadub/followers", "following_url": "https://api.github.com/users/Ubadub/following{/other_user}", "gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions", "organizations_url": "https://api.github.com/users/Ubadub/orgs", "repos_url": "https://api.github.com/users/Ubadub/repos", "events_url": "https://api.github.com/users/Ubadub/events{/privacy}", "received_events_url": "https://api.github.com/users/Ubadub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? Adds `MCTCTProcessor` to the mapping between model architectures and classes used by `AutoProcessor.from_pretrained`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23853 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - **See #23853** - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - **The docs should get autoupdated via the `replace_list_option_in_docstrings` decorator.** - [X] Did you write any new necessary tests? - I don't know that relevant tests can be written without expanding the suite of internal models (`"hf-internal-testing/tiny-random-MCTCTModel` doesn't work because it doesn't have a tokenizer attached). - I did confirm the existing `AutoProcessor` tests still pass. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
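With the mapping entry merged, `AutoProcessor.from_pretrained` resolves the processor class from the checkpoint's config. On a transformers version without the entry, a user could also register the pair manually, roughly as sketched below; this assumes `MCTCTConfig`/`MCTCTProcessor` are importable in the installed version, and the checkpoint id is taken from the M-CTC-T documentation.

```python
# Sketch: manual registration on a transformers version that lacks the mapping.
# Note: AutoProcessor.register raises a ValueError if the pair is already registered.
from transformers import AutoProcessor, MCTCTConfig, MCTCTProcessor

AutoProcessor.register(MCTCTConfig, MCTCTProcessor)
processor = AutoProcessor.from_pretrained("speechbrain/m-ctc-t-large")
print(type(processor).__name__)  # expected: MCTCTProcessor
```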
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23856", "html_url": "https://github.com/huggingface/transformers/pull/23856", "diff_url": "https://github.com/huggingface/transformers/pull/23856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23856.patch", "merged_at": 1685471779000 }
https://api.github.com/repos/huggingface/transformers/issues/23855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23855/comments
https://api.github.com/repos/huggingface/transformers/issues/23855/events
https://github.com/huggingface/transformers/pull/23855
1,731,774,278
PR_kwDOCUB6oc5RqPDR
23,855
[LlamaTokenizerFast] nit update `post_processor` on the fly
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failing test is unrelated to the changes", "this is not saving `add_eos_token` and `add_bos_token`.\r\n\r\n@sgugger @ArthurZucker Can you guys take a look at this?" ]
1,685
1,698
1,685
COLLABORATOR
null
# What does this PR do? This PR addresses #23833, where it appears that being able to change `add_eos_token` and `add_bos_token` should be made possible for easier use of the interface. The fix comes in three changes: - added an `_add_bos_token` attribute, as well as setters and getters for `add_bos_token` - added a `self.update_post_processor` that updates the post processor based on the current values of `add_eos_token` and `add_bos_token` - added a test to make sure this works properly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23855", "html_url": "https://github.com/huggingface/transformers/pull/23855", "diff_url": "https://github.com/huggingface/transformers/pull/23855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23855.patch", "merged_at": 1685458242000 }
https://api.github.com/repos/huggingface/transformers/issues/23854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23854/comments
https://api.github.com/repos/huggingface/transformers/issues/23854/events
https://github.com/huggingface/transformers/issues/23854
1,731,735,282
I_kwDOCUB6oc5nOC7y
23,854
:grey_question: Custom tool creation and pip requirements :grey_question:
{ "login": "adriens", "id": 5235127, "node_id": "MDQ6VXNlcjUyMzUxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adriens", "html_url": "https://github.com/adriens", "followers_url": "https://api.github.com/users/adriens/followers", "following_url": "https://api.github.com/users/adriens/following{/other_user}", "gists_url": "https://api.github.com/users/adriens/gists{/gist_id}", "starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adriens/subscriptions", "organizations_url": "https://api.github.com/users/adriens/orgs", "repos_url": "https://api.github.com/users/adriens/repos", "events_url": "https://api.github.com/users/adriens/events{/privacy}", "received_events_url": "https://api.github.com/users/adriens/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's not possible, they will need to install the packages required by your tool.", "Ok, thanks a lot for your answer @sgugger :pray: " ]
1,685
1,685
1,685
NONE
null
# :grey_question: About I'm currently working on [Creating a new tool](https://huggingface.co/docs/transformers/en/custom_tools#creating-a-new-tool), and this tool will rely on a custom `pypi` package. In the documentation, you show some classic imports but not custom ones. # :pray: Question If I create a custom tool, how can I make sure that the final user won't be burdened by my internal package? I.e., I would like the end user to only have to import my custom tool without having to install the `pypi` package, so here comes the (newbie) question: > "How do I package a custom tool that itself relies on a custom `pypi` package?" Thank you in advance for your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23854/timeline
completed
null
null
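The dependency therefore has to be installed by the end user, but a tool can at least fail gracefully when it is missing. A hedged sketch, where `my_internal_pkg` and the tool metadata are made-up placeholders:

```python
from transformers import Tool

class MyTool(Tool):
    name = "my_tool"
    description = "Runs a computation that needs my_internal_pkg (placeholder package name)."
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, text: str) -> str:
        try:
            # Deferred import: the tool class can be loaded and inspected without the package.
            import my_internal_pkg
        except ImportError as exc:
            raise ImportError("my_tool requires the extra dependency: pip install my_internal_pkg") from exc
        return my_internal_pkg.process(text)
```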
https://api.github.com/repos/huggingface/transformers/issues/23853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23853/comments
https://api.github.com/repos/huggingface/transformers/issues/23853/events
https://github.com/huggingface/transformers/issues/23853
1,731,533,386
I_kwDOCUB6oc5nNRpK
23,853
AutoProcessor.from_pretrained doesn't support MCTCT Models
{ "login": "Ubadub", "id": 1286898, "node_id": "MDQ6VXNlcjEyODY4OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ubadub", "html_url": "https://github.com/Ubadub", "followers_url": "https://api.github.com/users/Ubadub/followers", "following_url": "https://api.github.com/users/Ubadub/following{/other_user}", "gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions", "organizations_url": "https://api.github.com/users/Ubadub/orgs", "repos_url": "https://api.github.com/users/Ubadub/repos", "events_url": "https://api.github.com/users/Ubadub/events{/privacy}", "received_events_url": "https://api.github.com/users/Ubadub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi " ]
1,685
1,685
1,685
CONTRIBUTOR
null
### System Info Not actually relevant, but included for completeness: - `transformers` version: 4.29.1 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.1 (cpu) - Jax version: 0.4.9 - JaxLib version: 0.4.9 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoProcessor, MCTCTProcessor mctc_proc1 = AutoProcessor.from_pretrained("speechbrain/m-ctc-t-large") mctc_proc2 = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large") print(f"AutoProcessor: {mctc_proc1}") print(f"MCTCTProcessor: {mctc_proc2}") ``` The first line prints a `MCTCTProcessor` instance, containing a`MCTCTFeatureExtractor` feature extractor and `Wav2Vec2CTCTokenizer` tokenizer) while the second prints just an `Wav2Vec2CTCTokenizer` instance. ### Expected behavior `AutoProcessor.from_pretrained` should return an `MCTCTProcessor` instance when the provided model is an MCTCT model. The reason it does not right now is because [the code for `AutoProcessor`](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/processing_auto.py#LL42C1-L82C1) does not include a mapping entry for MCTCT. ```python PROCESSOR_MAPPING_NAMES = OrderedDict( [ ("align", "AlignProcessor"), ("altclip", "AltCLIPProcessor"), ("blip", "BlipProcessor"), ("blip-2", "Blip2Processor"), ("bridgetower", "BridgeTowerProcessor"), ("chinese_clip", "ChineseCLIPProcessor"), ("clap", "ClapProcessor"), ("clip", "CLIPProcessor"), ("clipseg", "CLIPSegProcessor"), ("flava", "FlavaProcessor"), ("git", "GitProcessor"), ("groupvit", "CLIPProcessor"), ("hubert", "Wav2Vec2Processor"), ("layoutlmv2", "LayoutLMv2Processor"), ("layoutlmv3", "LayoutLMv3Processor"), ("markuplm", "MarkupLMProcessor"), ("mgp-str", "MgpstrProcessor"), ("oneformer", "OneFormerProcessor"), ("owlvit", "OwlViTProcessor"), ("pix2struct", "Pix2StructProcessor"), ("sam", "SamProcessor"), ("sew", "Wav2Vec2Processor"), ("sew-d", "Wav2Vec2Processor"), ("speech_to_text", "Speech2TextProcessor"), ("speech_to_text_2", "Speech2Text2Processor"), ("speecht5", "SpeechT5Processor"), ("trocr", "TrOCRProcessor"), ("tvlt", "TvltProcessor"), ("unispeech", "Wav2Vec2Processor"), ("unispeech-sat", "Wav2Vec2Processor"), ("vilt", "ViltProcessor"), ("vision-text-dual-encoder", "VisionTextDualEncoderProcessor"), ("wav2vec2", "Wav2Vec2Processor"), ("wav2vec2-conformer", "Wav2Vec2Processor"), ("wavlm", "Wav2Vec2Processor"), ("whisper", "WhisperProcessor"), ("xclip", "XCLIPProcessor"), ] ) ``` An [MCTCTProcessor class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mctct/processing_mctct.py) exists whose `from_pretrained` function behaves appropriately. `AutoProcessor` should behave the same way, rather than falling back to a tokenizer. 
The fix seems simple enough, by adding the entry below to `PROCESSOR_MAPPING_NAMES` (but I am far from an expert): ```python ("mctct", "MCTCTProcessor"), ``` For comparison, the `AutoModel.from_pretrained` method does support MCTCT and thus behaves appropriately because [its mapping contains a line for MCTCT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py#L125).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23853/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23852/comments
https://api.github.com/repos/huggingface/transformers/issues/23852/events
https://github.com/huggingface/transformers/issues/23852
1,731,485,523
I_kwDOCUB6oc5nNF9T
23,852
RWKV can't stop correctly.
{ "login": "JaheimLee", "id": 18062264, "node_id": "MDQ6VXNlcjE4MDYyMjY0", "avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JaheimLee", "html_url": "https://github.com/JaheimLee", "followers_url": "https://api.github.com/users/JaheimLee/followers", "following_url": "https://api.github.com/users/JaheimLee/following{/other_user}", "gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}", "starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions", "organizations_url": "https://api.github.com/users/JaheimLee/orgs", "repos_url": "https://api.github.com/users/JaheimLee/repos", "events_url": "https://api.github.com/users/JaheimLee/events{/privacy}", "received_events_url": "https://api.github.com/users/JaheimLee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that '\\n\\n' should be the eos_token. @sgugger", "And according to [this](https://huggingface.co/BlinkDL/rwkv-4-world), '\\n\\n' will be a single token in the new world model.", "cc @younesbelkada and @ArthurZucker ", "hi @JaheimLee \r\nThe EOS token should be still the same across all RWKV models, I believe for chat models, you need to manually stop the generation whenever you encounter the `\\n\\n` token. See here: https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio/blob/main/app.py#L214 for reference. ", "Also another thing that might be required is to properly set the `congig/generation_config`'s `eos_token_id` as there is a logic to automatically stop generating when these are detected/. ", "> Also another thing that might be required is to properly set the `congig/generation_config`'s `eos_token_id` as there is a logic to automatically stop generating when these are detected/.\r\n\r\nBut `\\n\\n` is not a token now. How to set the `eos_token_id`? Maybe the rwkv tokenizer should be updated to the world model version first?", "I don't know if `\\n\\n` can be encoded as a single token, probably that is why in the official demo it manually looks for that string and stops generating if that string has been generated", "It can if we add it to the vocab with `add_special_token`. If you just set `tokenizer.add_special_token\"` it should work out of the box. Let me have a try", "Okay! Here is the fix: `model.config.eos_token_id = 187` (`\"\\n\"` and not `\"\\n\\n\"` worked) . The model.config has it set to `0`. With this here is the output I have: \r\n```python\r\n>>> model.config.eos_token_id = 187\r\n>>> output = model.generate(inputs[\"input_ids\"], max_new_tokens=256);print(tokenizer.decode(output[0]))\r\nBob: What's your name?\r\n\r\nAlice: My name is not important.\r\n```", "> Okay! Here is the fix: `model.config.eos_token_id = 187` (`\"\\n\"` and not `\"\\n\\n\"` worked) . The model.config has it set to `0`. With this here is the output I have:\r\n> \r\n> ```python\r\n> >>> model.config.eos_token_id = 187\r\n> >>> output = model.generate(inputs[\"input_ids\"], max_new_tokens=256);print(tokenizer.decode(output[0]))\r\n> Bob: What's your name?\r\n> \r\n> Alice: My name is not important.\r\n> ```\r\n\r\nBut it will hurt the output in which has to have `\\n`, like\r\n```\r\nquery = \"Bob: How to write a paper?.\\n\\nAlice:\"\r\n\r\nIn [10]: tokenizer.decode(outputs[0][inputs[\"input_ids\"].shape[-1]:])\r\nOut[10]: ' Writing a paper involves several steps, including planning, organizing, writing, editing, and proofreading. Here are some steps to help you write a paper:\\n'\r\n```", "Yes 😅 the issue is that the model does not predict the `\\n\\n` token but rather `\\n` `\\n`. \r\nI’ll see what I can do 😃", "> Yes 😅 the issue is that the model does not predict the `\\n\\n` token but rather `` \\n``\\n ``. 
I’ll see what I can do 😃\r\n\r\nI think the only way to fix it is to update the tokenizer to the world version mentioned above.", "You can also implement your own `StoppingCriteria`, like the following: \r\n\r\n```python\r\nfrom transformers import StoppingCriteria\r\nclass RwkvStoppingCriteria(StoppingCriteria):\r\n def __init__(self, eos_sequence = [187,187], eos_token_id = 537):\r\n self.eos_sequence = eos_sequence\r\n self.eos_token_id = eos_token_id\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n last_2_ids = input_ids[:,-2:].tolist()\r\n return self.eos_sequence in last_2_ids\r\noutput = model.generate(inputs[\"input_ids\"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()])\r\n```\r\nThis gave me: \r\n```python\r\nBob: What's your name?\r\n\r\nAlice: My name is not important.\r\n```\r\nand \r\n```python \r\n>>> output = model.generate(inputs[\"input_ids\"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()])\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:0 for open-end generation.\r\n\r\n>>> print(tokenizer.decode(output[0]))\r\nBob: How to write a paper?.\r\n\r\nAlice: Writing a paper involves several steps, including planning, organizing, writing, editing, and proofreading. Here are some steps to help you write a paper:\r\n1. Choose a topic: Choose a topic that you are interested in and that you can research thoroughly.\r\n2. Develop a thesis statement: A thesis statement\r\n```", "Two choices, either we add this to transformers, or we modify the generate function of RWKV to stop when two `/n` are generated. I am in favor of 1 as it is a much cleaner fix to a hack that should not exist. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,690
1,690
NONE
null
According to [here](https://huggingface.co/BlinkDL/rwkv-4-raven), the prompt should be `Bob: xxxxxxxxxxxxxxxxxx\n\nAlice:`.But when I run ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", torch_dtype=torch.float16).to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b") prompt = "Bob: What's your name?\n\nAlice:" inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=256) print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])) ``` The output will be ``` " I'm ChatGPT. My name is not important.\n\nBob: What's your favorite color?\n\nAlice: I don't have a favorite color. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n" ``` As you can see, it can't stop.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23852/timeline
completed
null
null
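A self-contained version of the stopping-criteria workaround discussed in the thread (token id 187 for "\n" is taken from the comments above and only applies to this tokenizer's vocabulary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopOnDoubleNewline(StoppingCriteria):
    def __init__(self, stop_sequence=(187, 187)):
        self.stop_sequence = list(stop_sequence)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Returning True halts generation; assumes batch size 1 as in the report above.
        return input_ids[0, -len(self.stop_sequence):].tolist() == self.stop_sequence

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b")
inputs = tokenizer("Bob: What's your name?\n\nAlice:", return_tensors="pt")
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=64,
    stopping_criteria=StoppingCriteriaList([StopOnDoubleNewline()]),
)
print(tokenizer.decode(output[0]))
```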
https://api.github.com/repos/huggingface/transformers/issues/23851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23851/comments
https://api.github.com/repos/huggingface/transformers/issues/23851/events
https://github.com/huggingface/transformers/issues/23851
1,731,433,030
I_kwDOCUB6oc5nM5JG
23,851
[Bug]? how does the tokenizer encode the special tokens?
{ "login": "vpegasus", "id": 22723154, "node_id": "MDQ6VXNlcjIyNzIzMTU0", "avatar_url": "https://avatars.githubusercontent.com/u/22723154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vpegasus", "html_url": "https://github.com/vpegasus", "followers_url": "https://api.github.com/users/vpegasus/followers", "following_url": "https://api.github.com/users/vpegasus/following{/other_user}", "gists_url": "https://api.github.com/users/vpegasus/gists{/gist_id}", "starred_url": "https://api.github.com/users/vpegasus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vpegasus/subscriptions", "organizations_url": "https://api.github.com/users/vpegasus/orgs", "repos_url": "https://api.github.com/users/vpegasus/repos", "events_url": "https://api.github.com/users/vpegasus/events{/privacy}", "received_events_url": "https://api.github.com/users/vpegasus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "#23818 ", "> #23818\r\n\r\n@jiangwy99 \r\nthanks very much, when set `use_fast=False`, this indeed encode </s> correctly, whether the space exists.\r\n\r\nHowever, \r\n```python\r\ntokenizer(['who are you', '你是谁'])\r\noutput:\r\n\r\noutputs:\r\n[\r\n[1, 1058, 526, 366], \r\n[1, 29871, 30919, 30392, 235, 179, 132]\r\n]\r\n```\r\nthe space ` ` in front Chinese characters still exists.\r\n\r\n", "> > #23818\r\n> \r\n> @jiangwy99 thanks very much, when set `use_fast=False`, this indeed encode correctly, whether the space exists.\r\n> \r\n> However,\r\n> \r\n> ```python\r\n> tokenizer(['who are you', '你是谁'])\r\n> output:\r\n> \r\n> outputs:\r\n> [\r\n> [1, 1058, 526, 366], \r\n> [1, 29871, 30919, 30392, 235, 179, 132]\r\n> ]\r\n> ```\r\n> \r\n> the space ` ` in front Chinese characters still exists.\r\n\r\nThat's quite a problem. Your analysis of the problems on the tokenizer is more comprehensive than mine, and I look forward to these issues being resolved.", "Hey, I basically answered in #23818, this is pretty much the same " ]
1,685
1,687
1,687
NONE
null
### System Info transformer version 4.28.1 ### Who can help? @ArthurZucker hi, maybe, the following issue should be asked here? [[Bug]? how does the tokenizer encode the special tokens? #1263](https://github.com/huggingface/tokenizers/issues/1263) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, all, I used the tokenzier to process data for llama model(already converted to hf formated) and set: ```python tokenizer = AutoTokenizer.from_pretrained(llama_model_id, model_max_length=1024, padding_side='right', trust_remote_code=True) tokenizer.add_special_tokens( { "eos_token": "</s>", "bos_token": "</s>", "unk_token": "</s>", }) tokenizer.pad_token = tokenizer.eos_token ``` when tokenizing a piece of text with an eos_token: ```python tokenizer(['ASSISTANT: Hello!</s>']) # there is no space between ! and </s>. ``` ``` output: {'input_ids': [[1, 319, 1799, 9047, 13566, 29901, 15043, 29991, 829, 29879, 29958]], 'token_type_ids': [[0, 0, 0, 0, 0, 0,0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ``` The `eos_token: </s>` is encoded to ` 829, 29879, 29958` which means `</s>` is regarded as `</`,`s` and `>`. ```python tokenizer(['ASSISTANT: Hello! </s>']) # there is a space between ! and </s>. ``` ``` output: {'input_ids': [[1, 319, 1799, 9047, 13566, 29901, 15043, 29991, 2]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1]]} ``` in this time, `</s>` is encoded correctly (token id is 2). As description above, does this mean we should add a space between text and `eos_token`? however, I find many popular projects like `Alpaca` concatenate text with `eos_token` without a space. I previously thought tokenizer encode text in a greedy style, the `eos_token` would be encoded correctly with or without a space. However, the experiments above seemed to not support my opinion. could anyone help me, if there is something misunderstood by me? thx. ---- After some other experiments, I found some weird thing: ```python tokenizer('我是谁') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132] ``` 1 is bos_token_id, 29871 is the token id of '' ```python tokenizer('我是谁</s>') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132, 829, 29879, 29958] tokenizer('who are you</s>') output: 'input_ids': [1, 1058, 526, 366, 829, 29879, 29958] # there is no 29871. ``` when add a space ` ` between `谁` and `</s>`. ```python tokenizer('我是谁 </s>') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132, 2] # the `</s>` is encoded correctly ``` when decode `[1, 29871, 30672, 30392, 235, 179, 132, 2] ` ``` tokenizer.decode([1, 29871, 30672, 30392, 235, 179, 132, 2]) output: '<s> 我是谁</s>' ``` the space ` ` is ignored! When manually add token id 29871: ``` tokenizer.decode([1, 29871, 30672, 30392, 235, 179, 132, 29871, 2]) output: '<s> 我是谁 </s>' ``` this time, there is a space ` ` between `谁` and `</s>`. Does these experiments above means encode, decode methods are not completely Reciprocal reversible operation? ### Expected behavior does above experiments show bugs? if not, how should I understand these? thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23851/timeline
completed
null
null
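A quick way to confirm the slow-tokenizer behaviour reported in the thread (the checkpoint name is illustrative; any LLaMA-style vocabulary where `</s>` is id 2 should behave the same way):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)

print(tok("ASSISTANT: Hello!</s>")["input_ids"])   # expected to end with eos id 2, not the "</", "s", ">" split
print(tok("ASSISTANT: Hello! </s>")["input_ids"])  # also ends with 2; only the surrounding space tokens differ
print(tok.eos_token_id)                            # 2
```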
https://api.github.com/repos/huggingface/transformers/issues/23850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23850/comments
https://api.github.com/repos/huggingface/transformers/issues/23850/events
https://github.com/huggingface/transformers/pull/23850
1,731,313,337
PR_kwDOCUB6oc5Ros_7
23,850
🌐 [i18n-KO] Translated `perplexity.mdx` to Korean
{ "login": "HanNayeoniee", "id": 33839093, "node_id": "MDQ6VXNlcjMzODM5MDkz", "avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanNayeoniee", "html_url": "https://github.com/HanNayeoniee", "followers_url": "https://api.github.com/users/HanNayeoniee/followers", "following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}", "gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions", "organizations_url": "https://api.github.com/users/HanNayeoniee/orgs", "repos_url": "https://api.github.com/users/HanNayeoniee/repos", "events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}", "received_events_url": "https://api.github.com/users/HanNayeoniee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "좋은 번역 감사합니다! 앞서 올려주신 수정 제안 이외에 추가 의견 없습니다!", "@sgugger, @ArthurZucker, @eunseojo May you please review this PR? " ]
1,685
1,690
1,688
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated `perplexity.mdx` file of the documentation to Korean. Added draft of `model_summary.mdx` file because it's referenced. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23850/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23850", "html_url": "https://github.com/huggingface/transformers/pull/23850", "diff_url": "https://github.com/huggingface/transformers/pull/23850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23850.patch", "merged_at": 1688367028000 }
https://api.github.com/repos/huggingface/transformers/issues/23849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23849/comments
https://api.github.com/repos/huggingface/transformers/issues/23849/events
https://github.com/huggingface/transformers/pull/23849
1,731,258,477
PR_kwDOCUB6oc5RohMa
23,849
[WIP] Add llava model
{ "login": "youssefadr", "id": 104783077, "node_id": "U_kgDOBj7c5Q", "avatar_url": "https://avatars.githubusercontent.com/u/104783077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/youssefadr", "html_url": "https://github.com/youssefadr", "followers_url": "https://api.github.com/users/youssefadr/followers", "following_url": "https://api.github.com/users/youssefadr/following{/other_user}", "gists_url": "https://api.github.com/users/youssefadr/gists{/gist_id}", "starred_url": "https://api.github.com/users/youssefadr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/youssefadr/subscriptions", "organizations_url": "https://api.github.com/users/youssefadr/orgs", "repos_url": "https://api.github.com/users/youssefadr/repos", "events_url": "https://api.github.com/users/youssefadr/events{/privacy}", "received_events_url": "https://api.github.com/users/youssefadr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23849). All of your documentation changes will be reflected on that endpoint.", "Hey! Thanks for wanting to contribute. I would suggest you to follow the [guide](https://huggingface.co/docs/transformers/model_sharing) on how to share a model like this one. Since it is basically patching two models up, should be easy to fit on the hub! 🤗 ", "Hey @ArthurZucker! Thank you for your message!\r\n\r\nI have looked up the guide you provided to share a model and according to my understanding, you are making a reference to uploading the model weights to the model hub and adding a model card, right? \r\n\r\nHowever, I am a little confused, since the model is already on the hub ([https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0)), but it cannot be ran using the current LLaMA implementation in transformers. \r\nI was thinking more to follow this [guide](https://huggingface.co/docs/transformers/add_new_model), and include in my PR new classes for Llava inheriting from PreTrainedConfig and PreTrainedModel and a LlavaForCausalLM class, as implemented here [https://github.com/haotian-liu/LLaVA/blob/main/llava/model/llava.py](https://github.com/haotian-liu/LLaVA/blob/main/llava/model/llava.py). \r\n\r\nWhat to do you think of it @ArthurZucker ? (@jprivera44 do not hesitate to participate in the convo since we will collaborate with each other on this PR)\r\n", "Hi @youssefadr, following up on your post, I am also following the same guide for HF. Although we might interpret the steps slightly differently. I'm not sure which steps you are on, but even though the original researchers included the model card, this should be used to get the initial weights from the LLaMA weights(I'm still waiting on Meta for these weights). Once the pre-loaded weights are in, the process of tracing the forward pass(in the original repo) to see what functions are needed for transfomers/LLaVA kicks off the whole process. Were you able to get the original LLaMA weights from Meta?", "Hey @youssefadr what I meant is that you should host the code on the hub, others will be able to run your code using `trust_remote_code = True`. This is easier to do, and more aligned with the way this model seems to work! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
CONTRIBUTOR
null
# What does this PR do? This PR adds the LlaVA model ([https://arxiv.org/abs/2304.08485](https://arxiv.org/abs/2304.08485)), an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22848 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23849/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23849", "html_url": "https://github.com/huggingface/transformers/pull/23849", "diff_url": "https://github.com/huggingface/transformers/pull/23849.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23849.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23848/comments
https://api.github.com/repos/huggingface/transformers/issues/23848/events
https://github.com/huggingface/transformers/issues/23848
1,731,193,603
I_kwDOCUB6oc5nL-sD
23,848
RWKV - Inference NF4 quantization broken, also Int8 quantization weirdness.
{ "login": "iantbutler01", "id": 6426407, "node_id": "MDQ6VXNlcjY0MjY0MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/6426407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iantbutler01", "html_url": "https://github.com/iantbutler01", "followers_url": "https://api.github.com/users/iantbutler01/followers", "following_url": "https://api.github.com/users/iantbutler01/following{/other_user}", "gists_url": "https://api.github.com/users/iantbutler01/gists{/gist_id}", "starred_url": "https://api.github.com/users/iantbutler01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iantbutler01/subscriptions", "organizations_url": "https://api.github.com/users/iantbutler01/orgs", "repos_url": "https://api.github.com/users/iantbutler01/repos", "events_url": "https://api.github.com/users/iantbutler01/events{/privacy}", "received_events_url": "https://api.github.com/users/iantbutler01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure quantization actually works for RWKV which has quite a few custom layers. cc @younesbelkada ", "Hmm, I was able to do a 4bit finetuning with qlora last week at the very least targeting key value and receptance in the attention and feed forward blocks, it just seems like inference time is broken\r\n\r\nI confirmed my tuned checkpoints worked fine for inference at full precision and actually it worked fine for just the forward call in 8bit in Eleuther's lm-evaluation-harness too now that I think of it, not sure for 4bit. Just seems to break when calling generate\r\n\r\n", "Hi @iantbutler01 \r\nThanks for the issue!\r\nThe 8bit support should be added in https://github.com/huggingface/transformers/pull/23468 \r\nFrom my understanding it seems you have managed to finetune RWKV in 4bit ? \r\n\r\n> Hmm, I was able to do a 4bit finetuning with qlora last week at the very least targeting key value and receptance in the attention and feed forward blocks\r\n\r\nCould you elaborate more on the error? ", "@younesbelkada \r\n\r\nIn regards to int8, I've been testing on the development branch, which includes the code you've merged there and it very much just produces `tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0',\r\n dtype=torch.float16)` for the logits during a `generate` call even with the base RWKV 14b model so I think something is still broken. You can reproduce this easily with the steps I've linked in the issue here. \r\n \r\nFor example, with \r\n\r\n```\r\nAndBytesConfig(\r\n load_in_8bit=True\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"RWKV/rwkv-raven-14b\",\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n quantization_config=bnb_config,\r\n context_length=1024,\r\n # rescale_every=0,\r\n).cuda()\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"RWKV/rwkv-raven-14b\")\r\n\r\npipeline = InstructionTextGenerationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n top_p=0.92,\r\n top_k=50,\r\n temperature=1.0,\r\n)\r\ninstruction = \"Write me the steps to make a peanut butter and jelly sandwich\"\r\nprompt = PROMPT_FOR_GENERATION_FORMAT.format(\r\n instruction=instruction,\r\n)\r\n\r\nclass IsBork(LogitsProcessor):\r\n def __call__(self, input_ids, scores):\r\n print(scores)\r\n return scores\r\n \r\nprompt = str(prompt)\r\ninputs = tokenizer(prompt, return_tensors=\"pt\")\r\n\r\ninput_ids, attention_mask = inputs[\"input_ids\"], inputs[\"attention_mask\"]\r\ninput_ids, attention_mask = input_ids.to(\"cuda\"), attention_mask.to(\"cuda\")\r\n\r\ngenerated_sequence = model.generate(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n logits_processor=LogitsProcessorList([IsBork()]),\r\n pad_token_id=tokenizer.pad_token_id,\r\n top_p=0.92,\r\n top_k=50,\r\n temperature=1.0,\r\n max_new_tokens=512\r\n)\r\n\r\nprint(generated_sequence)\r\n```\r\n\r\nThe call to generate raises an error,\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py\", line 171, in <module>\r\n gen = pipeline(prompt, max_new_tokens=512)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py\", line 1118, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py\", line 1125, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File 
\"/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py\", line 1024, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/instruct_pipeline.py\", line 112, in _forward\r\n generated_sequence = self.model.generate(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 1568, in generate\r\n return self.sample(\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 2651, in sample\r\n next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\r\nRuntimeError: probability tensor contains either `inf`, `nan` or element < 0 \r\n```\r\nAdding a logits processor that just prints out scores shows on the first token generated,\r\n\r\n`tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0',\r\n dtype=torch.float16)`\r\n \r\nIf I then set do_sample=False\r\n\r\n```\r\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\r\n\r\n### Instruction:\r\nWrite me the steps to make a peanut butter and jelly sandwich\r\n\r\n### Response:\r\n<|endoftext|>\r\n```\r\n\r\nIt only generates end of text, where as the full precision model generates correctly.\r\n", "In regards to 4bit rescaling during inference is broken for NF4 quantization with RWKV if you try to run inference, with a `generate` call with nf4 quantization:\r\n\r\nRuntimeError: result type Float can't be cast to the desired output type Byte\r\nwhich is failing in the else statement of that block your int8 PR touches.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py\", line 181, in <module>\r\n generated_sequence = model.generate(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 1518, in generate\r\n return self.greedy_search(\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 2335, in greedy_search\r\n outputs = self(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 781, in forward\r\n rwkv_outputs = self.rwkv(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 642, in forward\r\n self._rescale_layers()\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 713, in _rescale_layers\r\n 
block.attention.output.weight.div_(2 ** int(block_id // self.config.rescale_every))\r\n```\r\n\r\nAnd then if I turn rescaling off by setting `rescale_every=0`, it looks like theres a projection issue somewhere,\r\nRuntimeError: mat1 and mat2 shapes cannot be multiplied (43x5120 and 1x13107200)\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py\", line 181, in <module>\r\n generated_sequence = model.generate(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 1518, in generate\r\n return self.greedy_search(\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 2335, in greedy_search\r\n outputs = self(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 781, in forward\r\n rwkv_outputs = self.rwkv(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 667, in forward\r\n hidden_states, state, attentions = block(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 384, in forward\r\n attention, state = self.attention(self.ln1(hidden), state=state, use_cache=use_cache)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 308, in forward\r\n receptance, key, value, state = self.extract_key_value(hidden, state=state)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 300, in extract_key_value\r\n key = self.key(key)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File 
\"/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/nn/modules.py\", line 219, in forward\r\n out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 564, in matmul_4bit\r\n return MatMul4Bit.apply(A, B, out, bias, quant_state)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/autograd/function.py\", line 506, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 512, in forward\r\n output = torch.nn.functional.linear(A, F.dequantize_fp4(B, state).to(A.dtype).t(), bias)\r\nRuntimeError: mat1 and mat2 shapes cannot be multiplied (42x5120 and 1x13107200)\r\n```\r\n\r\nBut yeah I have this all reproducible in the script I've linked in the issue.", "I see, thanks for sharing more details with me\r\nSo there are 2 issues here:\r\n\r\n1- int8 RWKV seems to not work with you. From the snippet I am seeing, you are calling `.cuda()` on the 8bit model. This might lead to unexpected behavior because any `.to(xxx)` calls to the 8bit model will re-compute the quantization statistics. \r\nI have managed to reproduce your issue with the snippet below:\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig\r\n\r\nmodel_id = \"RWKV/rwkv-4-1b5-pile\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={\"\":0}).cuda()\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\n\r\ngeneration_config = GenerationConfig(max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)\r\nquestion = \"Hello my name is\"\r\ninputs = tokenizer(question, return_tensors=\"pt\").to(0)\r\noutput_int8 = model.generate((inputs[\"input_ids\"]), generation_config=generation_config)\r\nprint(tokenizer.decode(output_int8[0], skip_special_tokens=True))\r\n```\r\nand the model directly predicts EOS token. The fix is to replace `model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={\"\":0}).cuda()` by `model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={\"\":0})`. Could you confirm this fixes your issue?\r\n\r\n2- RWKV + 4bit seems to be not supported for now. 
I will dig into that and let you know as soon as I have a fix", "I just added the 4bit inference support for RWKV in #23910 - please try out the fixes stated above together with #23910 and let us know how it goes", "@younesbelkada \r\n\r\nOkay so 8bit is working fine now, thank you very much for the workaround!\r\n\r\n4bit loaded in with this configuration:\r\n\r\n```\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n)\r\n\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"RWKV/rwkv-raven-14b\",\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n quantization_config=bnb_config,\r\n context_length=1024,\r\n # rescale_every=0,\r\n device_map={\"\":0}\r\n)\r\n```\r\n\r\nIs still failing unfortunately, :(\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py\", line 182, in <module>\r\n generated_sequence = model.generate(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 1518, in generate\r\n return self.greedy_search(\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py\", line 2335, in greedy_search\r\n outputs = self(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 789, in forward\r\n rwkv_outputs = self.rwkv(\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 642, in forward\r\n self._rescale_layers()\r\n File \"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py\", line 714, in _rescale_layers\r\n block.attention.output.weight.quant_state[0].div_(\r\nRuntimeError: result type Float can't be cast to the desired output type Byte\r\n```", "I see, this is because you are using nested quantization `bnb_4bit_use_double_quant=True`. Can you try without that while I find a fix for this specific usecase? 🙏 ", "Yes sorry about that, I had always intended this to be with double quant, that was in my original repro code, but I should have been more explicit when communicating it to you 👍 \r\n\r\nI tried it without double quantization and it does work. ", "No problem and thanks for double checking, will get back once I fix the issue with nested quantization!", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I think It should not be closed @younesbelkada", "Correct, it is known that RWKV double-quant 4bit inference does not work yet, not sure if I can propose a fix anytime soon because of the rescale layers operation here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/rwkv/modeling_rwkv.py#L722" ]
1,685
1,694
1,694
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX 6000 Ada - Using distributed or parallel set-up in script?: Not for inference. - bitsandbytes 0.39. I'm using the `RWKV/rwkv-raven-14b` model. Rescaling is broken for NF4 quantization with RWKV `RuntimeError: result type Float can't be cast to the desired output type Byte` Looks like torch cannot do the conversion in _div And then if I turn rescaling off, it looks like theres a projection issue somewhere, `RuntimeError: mat1 and mat2 shapes cannot be multiplied (43x5120 and 1x13107200)` Additionally, with Int8 quantization enabled RWKV just outputs the endoftext token, I added a logits processor to output the scores and they're all NaN: ``` tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0', dtype=torch.float16) ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a repo with everything setup in generate.py to be able to quickly repro here: https://github.com/iantbutler01/rwkv-raven-qlora-4bit-instruct/blob/main/generate.py pip install -U git+https://github.com/huggingface/transformers.git pip install -U git+https://github.com/huggingface/peft.git pip install -U git+https://github.com/huggingface/accelerate.git pip install --upgrade bitsandbytes And then run `python generate.py` in a python 3.10+ environment. Uncomment 8bit or 4bit bnb config as needed. ### Expected behavior I would expect NF4 based quantization to work at all, and then for Int8 quantization for logits not to be NaN.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23848/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23848/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23846/comments
https://api.github.com/repos/huggingface/transformers/issues/23846/events
https://github.com/huggingface/transformers/issues/23846
1,731,139,517
I_kwDOCUB6oc5nLxe9
23,846
Add LaVIN model
{ "login": "tensorpro", "id": 23471886, "node_id": "MDQ6VXNlcjIzNDcxODg2", "avatar_url": "https://avatars.githubusercontent.com/u/23471886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tensorpro", "html_url": "https://github.com/tensorpro", "followers_url": "https://api.github.com/users/tensorpro/followers", "following_url": "https://api.github.com/users/tensorpro/following{/other_user}", "gists_url": "https://api.github.com/users/tensorpro/gists{/gist_id}", "starred_url": "https://api.github.com/users/tensorpro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tensorpro/subscriptions", "organizations_url": "https://api.github.com/users/tensorpro/orgs", "repos_url": "https://api.github.com/users/tensorpro/repos", "events_url": "https://api.github.com/users/tensorpro/events{/privacy}", "received_events_url": "https://api.github.com/users/tensorpro/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @amyeroberts, I don't think anyone is working on this anymore. If this adds any value to hf I'll start working on it." ]
1,685
1,688
null
NONE
null
### Model description LaVIN is a vision-language instruction-tuned model that is affordable to train (it was trained in a few hours on 8 A100 GPUs) with good performance on ScienceQA. I'd like to add LaVIN to HF transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models](https://arxiv.org/pdf/2305.15023.pdf) is by [Gen Luo](https://luogen1996.github.io/), Yiyi Zhou, [Tianhe Ren](https://rentainhe.github.io/), Shengxin Chen, [Xiaoshuai Sun](https://sites.google.com/view/xssun), and [Rongrong Ji](https://mac.xmu.edu.cn/rrji/). @luogen1996 has made the code and model weights available at [github.com/luogen1996/LaVIN](https://github.com/luogen1996/LaVIN). The weights for the following models are available at the following links: ### ScienceQA | Model | Base LLM | Time | Memory | #Params | Acc | Checkpoint | |-----------|----------:|----------:|-------:|--------:|-----:|-----------------:| | LaVIN-7B | LLaMA | 1.4 hours | 33.9G | 3.8M | 89.37 | [google drive](https://drive.google.com/file/d/10X2qCBYrLH1grZOHwHRMXLUoz-S6MSgV/view?usp=share_link) | | LaVIN-7B | Vicuna | 1.4 hours | 33.9G | 3.8M | 89.41 | [google drive](https://drive.google.com/file/d/1nuMxeiWlnJKxDybCshg8pVGSvLc5dZy8/view?usp=share_link) | | LaVIN-13B | LLaMA | 2 hours | 55.9G | 5.4M | 90.54 | [google drive](https://drive.google.com/file/d/1LkKUY54spZkkeXrR7BDmU-xmK9YadcKM/view?usp=share_link) | ### Multimodal ChatBot | Model | Base LLM | Time | Memory | #Params | Acc | Checkpoint | |-----------|----------:|---------:|-------:|--------:|----:|-----------------:| | LaVIN-13B | LLaMA | 75 hours | 55.9G | 5.4M | - | [google drive](https://drive.google.com/file/d/1rHQNSaiGzFHYGgsamtySPYnd5AW4OE9j/view?usp=share_link)|
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23846/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/23845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23845/comments
https://api.github.com/repos/huggingface/transformers/issues/23845/events
https://github.com/huggingface/transformers/issues/23845
1,731,010,774
I_kwDOCUB6oc5nLSDW
23,845
forced_decoder_ids in Whisper models significantly impacts performance, use decoder_input_ids instead
{ "login": "tonysimpson", "id": 140212, "node_id": "MDQ6VXNlcjE0MDIxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/140212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tonysimpson", "html_url": "https://github.com/tonysimpson", "followers_url": "https://api.github.com/users/tonysimpson/followers", "following_url": "https://api.github.com/users/tonysimpson/following{/other_user}", "gists_url": "https://api.github.com/users/tonysimpson/gists{/gist_id}", "starred_url": "https://api.github.com/users/tonysimpson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tonysimpson/subscriptions", "organizations_url": "https://api.github.com/users/tonysimpson/orgs", "repos_url": "https://api.github.com/users/tonysimpson/repos", "events_url": "https://api.github.com/users/tonysimpson/events{/privacy}", "received_events_url": "https://api.github.com/users/tonysimpson/received_events", "type": "User", "site_admin": false }
[ { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" } ]
open
false
null
[]
[ "Hey! Thanks for taking the time to open this PR. \r\nTotally get the speedup and the latency induced by the use of `foced_decoder_ids` rather than `decoder_input_ids`. \r\nThe addition of the `prompt_ids` was mostly handled by @hollance, which will be able to have a better look at this. \r\nI don't think that there was a release yet, which means this can still be changeable (if its not impossible to update) \r\n", "IIRC we decided for the time being to keep using `forced_decoder_ids` for the prompts, even though it's slower indeed. Would be nice to improve this.", "What might a path to improvement look like? A PR to make sure passing in a custom decoder_input_ids works correctly might be a good start? Happy to do that. I know it doesn't work for PT as the <|startoftranscript|> token can get added by GenerationMixin in the wrong place, I haven't tried TF or flax.", "I don't understand this part of the generation process well enough yet to say anything useful about it. You'd think that we could start generation by passing in the entire `forced_decoder_ids` as the `decoder_input_ids` as the first step, rather than doing it one token at a time. The `ForceTokensLogitsProcessor` also plays a part in this.\r\n\r\n@Narsil can probably enlighten us 😄 ", "@hollance Yes we could absolutely convert `forced_decoder_ids` to `decoder_input_ids` in `.generate(...)`, and I think we can do it in a way that doesn't break anyones code. I can put a draft PR together for the PT code probably sometime tomorrow. ", "Hi, not sure if I can enlighten.\r\n\r\nIn general, I'm not sure why `forced_decoder_ids` is useful for, since if you know what ids you should get, there's no need to do inference.\r\n\r\nIf it was added, the general caution is that it must have been useful for some reason at some point, but in this specific use case I don't really understand.", "@Narsil For Whisper, we want to start generation not with a single \"BOS\" token (here, `<|startoftranscript|>`) but with several tokens. In the case of prompting, this could be a fairly long sequence of tokens. For example `<|startofprev|> here is the prompt <|startoftranscript|><|en|><|notimestamps|>`. The prompt text is used to prime the model with more context. Right now, we use `forced_decoder_ids` to feed in this sequence of \"starting tokens\", which means they get processed one-by-one in the generation loop. It's more efficient to allow the first step of generation to process this entire sequence at once.\r\n", "Yes, I know. 
I don't *think* it's necessary but I just usually give the benefit of the doubt when something was coded intentionally.", "Hello every one, what if we simply specify `decoder_input_ids` as an argument to generate call?\r\n```\r\n generated_ids = self.model.generate(\r\n inputs=input_features,\r\n decoder_input_ids=torch.tensor(\r\n [decoder_ids], dtype=torch.long\r\n ),\r\n ).cpu()\r\n```\r\n\r\nAs I understood it will be used [here](https://github.com/huggingface/transformers/blob/1689aea73346816b936b84932e12b774974e61a6/src/transformers/generation/utils.py#L661)\r\n\r\n", "Hii, I'm trying to run the ONNX model, when i'm exporting the onnx model using optimum-cli_, i'm getting 4 onnx model decoder_model,decoder_model_merged,decoder_with_past_model and encoder_model.\r\n\r\nCan anyone please help me how to predict using these 4 models?\r\nThe encoder model is giving 1 output that is encoder_hidden_state(1,1500,384) but on the other hand normal decoder_model is taking 2 input-> one is encoder_hidden_state and another one is decoder_input_ids, i've tried with multiple decoder_ids but still i'm not getting correct output.\r\n\r\nCan Anyone please suggest what is the correct decoder_input_ids that i need to give to the model?\r\nThanks in Advance. " ]
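To make the conversion discussed in these comments concrete, here is a minimal, hypothetical sketch of turning `forced_decoder_ids` (position/token pairs) into a `decoder_input_ids` prefix that can be consumed in a single forward pass. The helper name and its placement are assumptions for illustration only; this is not the actual transformers implementation.

```python
# Hypothetical sketch only: transformers does not ship this helper.
import torch

def forced_ids_to_decoder_input_ids(forced_decoder_ids, decoder_start_token_id, batch_size, device):
    # forced_decoder_ids is a list of (position, token_id) pairs; positions start at 1
    # because position 0 is reserved for the decoder start token.
    prefix = [decoder_start_token_id] + [token_id for _, token_id in sorted(forced_decoder_ids)]
    # Feeding the whole prefix at once fills the KV cache in a single forward pass,
    # instead of forcing the tokens one by one inside the generation loop.
    return torch.tensor([prefix] * batch_size, dtype=torch.long, device=device)
```

Used this way, generation would start sampling at the first position after the prefix, which matches the forward-pass counts the issue author measured.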
1,685
1,700
null
NONE
null
### Feature request @ArthurZucker probably one for you based on commit logs. Using `forced_decoder_ids` to provide "prompt" and or "prefix" to the whisper model is very inefficient as a forward pass and sampling is done for each token in the `forced_decoder_ids` but the result is already known. Instead the model parameter `decoder_input_ids` could be used which only uses one forward pass to initialise the kv cache with all the input tokens and immediately is sampling useful next tokens. Openai's whisper limits prompt to half the context length (448 // 2 - 1 = 223) , so if you want to use transformers whisper to behave like openai's whisper and you expect 20 words + EOS in your input feature then forward pass counts are: - transformers: 244 - openai-whisper: 21 I'm raising this as a feature request rather than a bug or PR as I think `forced_decoder_ids` is already pretty well embedded in the code and the community so I assume it can't just be ripped out and a discussion is probably required before a PR. Here's some code that demonstrates the issue in IPython: ```python from transformers import ( WhisperForConditionalGeneration, WhisperTokenizerFast, WhisperFeatureExtractor, ) from datasets import load_dataset import torch feature_extractor = WhisperFeatureExtractor() tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny.en", language="english") # Patch WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation because the one on GenerationMixin doesn't handle whisper properly. def prepare_decoder_input_ids_for_generation_patch(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device): if 'decoder_input_ids' not in model_kwargs: return torch.ones((batch_size, 1), dtype=torch.long, device=device) * decoder_start_token_id, model_kwargs else: return model_kwargs.pop('decoder_input_ids'), model_kwargs WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation = prepare_decoder_input_ids_for_generation_patch model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") audio = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[3]["audio"]["array"] input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features # A custom logits processor to show how many times the forward pass and sample are run def logits_processor_count_factory(): count = 0 def logits_processor_count(input_ids, scores): nonlocal count count += 1 print(count) return scores return logits_processor_count PREV_TOKEN = 50360 # <|startofprev|> prompt_tokens = [PREV_TOKEN, 1770, 13, 2264, 346, 353, 318, 262, 46329, 286, 262, 3504, 6097, 11, 290, 356, 389, 9675, 284, 7062, 465, 21443, 13, 5414, 318, 1770, 13, 2264, 346, 353, 338, 5642, 1342, 3499, 621, 465, 2300, 13, 679, 4952, 514, 326, 379, 428, 43856, 1622, 286, 262, 614, 11, 351, 6786, 290, 32595, 12023, 28236, 878, 514, 11, 985, 2915, 7428, 422, 6600, 290, 663, 2482, 3051, 749, 14704, 284, 262, 2000, 13] # note prompt_ids is prefixed to forced_decoder_ids inside generate # counts to 106 forced_decoder_ids_output = model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens), logits_processor=[logits_processor_count_factory()])[0] print(tokenizer.decode(forced_decoder_ids_output, decode_with_timestamps=False)) SOT_TOKEN = 50257 # <|startoftranscript|> NO_TIMESTAMPS_TOKEN = 50362 # <|notimestamps|> decoder_input_ids = torch.LongTensor([prompt_tokens + 
[SOT_TOKEN, NO_TIMESTAMPS_TOKEN]]) # counts to 31 decoder_input_ids_output = model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids, logits_processor=[logits_processor_count_factory()])[0] print(tokenizer.decode(decoder_input_ids_output, decode_with_timestamps=False)) ``` You can get performance for both in IPython by doing: ```python %timeit model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens))[0] %timeit model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids)[0] ``` On CPU for me, using decoder_input_ids is 2x faster with this input. ### Motivation I want to be able to use the transformers implementation of whisper in a production system where cost and processing time will be critical; due to the way we are using whisper, this issue impacts performance a lot more than the 2x I quoted above, it's more like 5x in our use case. Obviously we can code around it, but if it's possible to change transformers and avoid custom code I'd prefer that. ### Your contribution I'd be able to create a PR, but without knowing more about how the maintainers would like to handle backward compatibility etc. I don't think it's the right place to start. I'd be very happy to be involved in a discussion, offer opinions or testing, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23845/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/23844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23844/comments
https://api.github.com/repos/huggingface/transformers/issues/23844/events
https://github.com/huggingface/transformers/pull/23844
1,730,868,586
PR_kwDOCUB6oc5RnM_O
23,844
🌐 [i18n-KO] Translated `tasks_explained.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "특별히 수정할 만한 곳을 찾지 못했습니다. 고생하셨습니다!", "> 무척 길고 알찬 문서였네요! 고생 많으셨습니다. 몇 가지 수정 의견을 아래와 같이 제안 드립니다 😄\r\n\r\n꼼꼼한 리뷰 감사합니다! 리뷰 주신 사항 반영하여 커밋하였습니다 👍 ", "May you please review this PR? 😄 \r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,685
1,686
1,685
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `tasks_explained.mdx` file of the documentation to Korean 😄 ~~*Reference documents I added: `generation_strategies.mdx`, `task_summary.mdx`~~ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- Team PseudoLab, may you please review this PR? --> @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23844/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23844/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23844", "html_url": "https://github.com/huggingface/transformers/pull/23844", "diff_url": "https://github.com/huggingface/transformers/pull/23844.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23844.patch", "merged_at": 1685980924000 }
https://api.github.com/repos/huggingface/transformers/issues/23843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23843/comments
https://api.github.com/repos/huggingface/transformers/issues/23843/events
https://github.com/huggingface/transformers/issues/23843
1,730,854,047
I_kwDOCUB6oc5nKryf
23,843
Error in Falcon-40B 8bit-quantized when calling generate
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting this. I would suggest you to open the issue on the model's repository, as the code you are using is not entirely on transformers. Cache might not be properly handled", "Hi @avacaondata , I was able to successfully run your code on my setup (2 TITAN RTX 24GB) with the model in 8-bit and in 4-bit. Let me know if you are still have the error. Also make sure that you have the lastest version of bitsandbytes and accelerate. Thanks for the report =) ", "Yes I have tried with the last version of bitsandbytes and transformers and it works now, the issue is solved. Thank you very much :) @SunMarc " ]
1,685
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younes ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Import modules and load the model: ```python from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer model_path="tiiuae/falcon-40b" config = AutoConfig.from_pretrained(model_path, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( model_path, config=config, trust_remote_code=True, load_in_8bit=True, device_map="auto") model.eval() model.config.eos_token_id = 0 model.config.forced_eos_token_id = 0 model.config.pad_token_id = 0 ``` 2. Tokenize a text: ```python text = "Hola qué tal estás Íñigo? ¿Qué vas a hacer hoy?" inpts = tokenizer(text, return_tensors="pt").to("cuda") ``` 3. Try to generate text: ```python out = model.generate(**{k: v for k, v in inpts.items() if "token_type" not in k}) ``` You will receive the following error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[13], line 1 ----> 1 out = model.generate(**{k: v for k, v in inpts.items() if "token_type" not in k}) File ~/miniconda3/envs/int4/lib/python3.9/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File ~/miniconda3/envs/int4/lib/python3.9/site-packages/transformers/generation/utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1512 raise ValueError( 1513 "num_return_sequences has to be 1 when doing greedy search, " 1514 f"but is {generation_config.num_return_sequences}." 1515 ) 1517 # 11. run greedy search -> 1518 return self.greedy_search( 1519 input_ids, 1520 logits_processor=logits_processor, 1521 stopping_criteria=stopping_criteria, 1522 pad_token_id=generation_config.pad_token_id, 1523 eos_token_id=generation_config.eos_token_id, 1524 output_scores=generation_config.output_scores, 1525 return_dict_in_generate=generation_config.return_dict_in_generate, ... 291 ) 293 x = attn_output.view(batch_size, self.num_heads, q_length, self.head_dim) 294 x = x.permute(0, 2, 1, 3) RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead. ``` ### Expected behavior It is expected that the falcon-40b model is able to generate also with int8, otherwise we cannot perform inference even on a 80GB A-100. Also, other models have no problem with inference in 8bit.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23843/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23842/comments
https://api.github.com/repos/huggingface/transformers/issues/23842/events
https://github.com/huggingface/transformers/pull/23842
1,730,844,289
PR_kwDOCUB6oc5RnHoS
23,842
TF SAM shape flexibility fixes
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
MEMBER
null
This PR makes some small changes to use dynamic instead of static shapes for SAM, which fixes issues when compiling and fine-tuning. cc @sayakpaul, fixes #23826
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23842/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23842", "html_url": "https://github.com/huggingface/transformers/pull/23842", "diff_url": "https://github.com/huggingface/transformers/pull/23842.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23842.patch", "merged_at": 1685448525000 }
https://api.github.com/repos/huggingface/transformers/issues/23840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23840/comments
https://api.github.com/repos/huggingface/transformers/issues/23840/events
https://github.com/huggingface/transformers/issues/23840
1,730,776,734
I_kwDOCUB6oc5nKY6e
23,840
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "rezz90", "id": 134093487, "node_id": "U_kgDOB_4arw", "avatar_url": "https://avatars.githubusercontent.com/u/134093487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rezz90", "html_url": "https://github.com/rezz90", "followers_url": "https://api.github.com/users/rezz90/followers", "following_url": "https://api.github.com/users/rezz90/following{/other_user}", "gists_url": "https://api.github.com/users/rezz90/gists{/gist_id}", "starred_url": "https://api.github.com/users/rezz90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rezz90/subscriptions", "organizations_url": "https://api.github.com/users/rezz90/orgs", "repos_url": "https://api.github.com/users/rezz90/repos", "events_url": "https://api.github.com/users/rezz90/events{/privacy}", "received_events_url": "https://api.github.com/users/rezz90/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23840/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23839/comments
https://api.github.com/repos/huggingface/transformers/issues/23839/events
https://github.com/huggingface/transformers/issues/23839
1,730,288,863
I_kwDOCUB6oc5nIhzf
23,839
4bit Blip2 compatibility
{ "login": "betterftr", "id": 84087448, "node_id": "MDQ6VXNlcjg0MDg3NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/84087448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/betterftr", "html_url": "https://github.com/betterftr", "followers_url": "https://api.github.com/users/betterftr/followers", "following_url": "https://api.github.com/users/betterftr/following{/other_user}", "gists_url": "https://api.github.com/users/betterftr/gists{/gist_id}", "starred_url": "https://api.github.com/users/betterftr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/betterftr/subscriptions", "organizations_url": "https://api.github.com/users/betterftr/orgs", "repos_url": "https://api.github.com/users/betterftr/repos", "events_url": "https://api.github.com/users/betterftr/events{/privacy}", "received_events_url": "https://api.github.com/users/betterftr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @betterftr \r\nThanks for the issue, indeed there seems to be a bug, that should be fixed in https://github.com/huggingface/transformers/pull/23895", "cc @SunMarc if you have time to look into this ? ", "Hey @betterftr , i'm able to run the following script with this env. Let me know if this works on your side. \r\n- `transformers` version: 4.34.0.dev0 (main branch)\r\n- `accelerate` version: 0.23\r\n- `bitsandbytes` version: 0.41.1\r\n``` py \r\nimport torch\r\nfrom transformers import Blip2ForConditionalGeneration, Blip2Processor, BitsAndBytesConfig\r\nfrom PIL import Image\r\nimport requests\r\n\r\nnf4_config = BitsAndBytesConfig(\r\nload_in_4bit=True,\r\nbnb_4bit_quant_type=\"nf4\",\r\nbnb_4bit_use_double_quant=True,\r\nbnb_4bit_compute_dtype=torch.float16\r\n)\r\n\r\nprocessor = Blip2Processor.from_pretrained(\"Salesforce/blip2-opt-6.7b-coco\")\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-6.7b-coco\", device_map='auto', quantization_config=nf4_config)\r\n\r\ndef prepare_img():\r\n url = \"https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg\"\r\n image = Image.open(requests.get(url, stream=True).raw)\r\n return image\r\n\r\nimage = prepare_img()\r\ninputs = processor(images=[image, image], return_tensors=\"pt\").to(dtype=torch.float16)\r\n\r\npredictions = model.generate(**inputs, num_beams=2)\r\nprint(processor.batch_decode(predictions, skip_special_tokens=True)[0].strip())\r\n# print -> a woman sitting on the beach with her dog\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@SunMarc I am getting similar error while using llma2 7b model, and I am using the latest version of transformers\r\n\r\n\r\nhere is the code\r\n```\r\nfrom transformers import AutoTokenizer, set_seed, BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.float16,\r\n )\r\nmodel_name = 'llm-models/Llama-2-7b-hf'\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name, \r\n quantization_config=bnb_config,\r\n device_map=\"cuda\",\r\n trust_remote_code=True,\r\n)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, device_map='cuda')\r\ndef generate_text(prompt):\r\n # Tokenize the prompt\r\n inputs = tokenizer.encode(prompt, return_tensors='pt')\r\n \r\n print(f'inputs is {inputs} on {inputs.device}')\r\n \r\n inputs = inputs.to('cuda:0')\r\n \r\n print(f'inputs is {inputs} on {inputs.device}')\r\n \r\n # Generate a response\r\n outputs = model.generate(inputs)\r\n \r\n # Decode the response\r\n response = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n \r\n return response\r\n\r\nprompt = 'User1: Hey, I need a new laptop. Which one should I buy?'\r\nresponse = generate_text(prompt)\r\nprint(response)\r\n```\r\n**package info**\r\ntransformers==4.38.1\r\naccelerate==0.21.0\r\nbitsandbytes==0.42.0\r\n\r\nI also tried 4.34, it doesn't work either. 
Besides that, I check this [PR](https://github.com/huggingface/transformers/pull/23895), it doesn't look like that it is in any of the release branch nor the maser branch\r\n\r\nhere is the error I get\r\n\r\nFP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\nCell In[18], line 2\r\n 1 prompt = 'User1: Hey, I need a new laptop. Which one should I buy?'\r\n----> 2 response = generate_text(prompt)\r\n 3 print(response)\r\n\r\nCell In[17], line 13, in generate_text(prompt)\r\n 10 print(f'inputs is {inputs} on {inputs.device}')\r\n 12 # Generate a response\r\n---> 13 outputs = model.generate(inputs)\r\n 15 # Decode the response\r\n 16 response = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/generation/utils.py:1345, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1337 logger.warning(\r\n 1338 \"A decoder-only architecture is being used, but right-padding was detected! For correct \"\r\n 1339 \"generation results, please set `padding_side='left'` when initializing the tokenizer.\"\r\n 1340 )\r\n 1342 if self.config.is_encoder_decoder and \"encoder_outputs\" not in model_kwargs:\r\n 1343 # if model is encoder decoder encoder_outputs are created\r\n 1344 # and added to `model_kwargs`\r\n-> 1345 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(\r\n 1346 inputs_tensor, model_kwargs, model_input_name\r\n 1347 )\r\n 1349 # 5. 
Prepare `input_ids` which will be used for auto-regressive generation\r\n 1350 if self.config.is_encoder_decoder:\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/generation/utils.py:644, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)\r\n 642 encoder_kwargs[\"return_dict\"] = True\r\n 643 encoder_kwargs[model_input_name] = inputs_tensor\r\n--> 644 model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(**encoder_kwargs)\r\n 646 return model_kwargs\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:1094, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1081 layer_outputs = checkpoint(\r\n 1082 create_custom_forward(layer_module),\r\n 1083 hidden_states,\r\n (...)\r\n 1091 None, # past_key_value is always None with gradient checkpointing\r\n 1092 )\r\n 1093 else:\r\n-> 1094 layer_outputs = layer_module(\r\n 1095 hidden_states,\r\n 1096 attention_mask=extended_attention_mask,\r\n 1097 position_bias=position_bias,\r\n 1098 encoder_hidden_states=encoder_hidden_states,\r\n 1099 encoder_attention_mask=encoder_extended_attention_mask,\r\n 1100 encoder_decoder_position_bias=encoder_decoder_position_bias,\r\n 1101 layer_head_mask=layer_head_mask,\r\n 1102 cross_attn_layer_head_mask=cross_attn_layer_head_mask,\r\n 1103 past_key_value=past_key_value,\r\n 1104 use_cache=use_cache,\r\n 1105 output_attentions=output_attentions,\r\n 1106 )\r\n 1108 # layer_outputs is a tuple with:\r\n 1109 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)\r\n 1110 if use_cache is False:\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in 
Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:694, in T5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict)\r\n 691 else:\r\n 692 self_attn_past_key_value, cross_attn_past_key_value = None, None\r\n--> 694 self_attention_outputs = self.layer[0](\r\n 695 hidden_states,\r\n 696 attention_mask=attention_mask,\r\n 697 position_bias=position_bias,\r\n 698 layer_head_mask=layer_head_mask,\r\n 699 past_key_value=self_attn_past_key_value,\r\n 700 use_cache=use_cache,\r\n 701 output_attentions=output_attentions,\r\n 702 )\r\n 703 hidden_states, present_key_value_state = self_attention_outputs[:2]\r\n 704 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:601, in T5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions)\r\n 590 def forward(\r\n 591 self,\r\n 592 hidden_states,\r\n (...)\r\n 598 output_attentions=False,\r\n 599 ):\r\n 600 normed_hidden_states = self.layer_norm(hidden_states)\r\n--> 601 attention_output = self.SelfAttention(\r\n 602 normed_hidden_states,\r\n 603 
mask=attention_mask,\r\n 604 position_bias=position_bias,\r\n 605 layer_head_mask=layer_head_mask,\r\n 606 past_key_value=past_key_value,\r\n 607 use_cache=use_cache,\r\n 608 output_attentions=output_attentions,\r\n 609 )\r\n 610 hidden_states = hidden_states + self.dropout(attention_output[0])\r\n 611 outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:520, in T5Attention.forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions)\r\n 517 return hidden_states\r\n 519 # get query states\r\n--> 520 query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)\r\n 522 # get key/value states\r\n 523 key_states = project(\r\n 524 hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None\r\n 525 )\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:256, in 
Linear4bit.forward(self, x)\r\n 253 x = x.to(self.compute_dtype)\r\n 255 bias = None if self.bias is None else self.bias.to(self.compute_dtype)\r\n--> 256 out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)\r\n 258 out = out.to(inp_dtype)\r\n 260 return out\r\n\r\nFile /opt/conda/envs/domino-ray/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:566, in matmul_4bit(A, B, quant_state, out, bias)\r\n 565 def matmul_4bit(A: tensor, B: tensor, quant_state: F.QuantState, out: tensor = None, bias=None):\r\n--> 566 assert quant_state is not None\r\n 567 if A.numel() == A.shape[-1] and A.requires_grad == False:\r\n 568 if A.shape[-1] % quant_state.blocksize != 0:\r\n\r\nAssertionError: \r\n" ]
1,685
1,699
1,699
NONE
null
### System Info I am getting an error after loading Blip2 in 4bit, cant inference, cant train. Can anyone help? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `import torch from transformers import Blip2ForConditionalGeneration, AutoProcessor, Blip2Processor, AutoModelForCausalLM, BitsAndBytesConfig from peft import prepare_model_for_kbit_training #processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-6.7b-coco") #model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', load_in_8bit=True) nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-6.7b-coco") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', quantization_config=nf4_config)` Then when I want to train with PEFT or just do a single image captioning with the loaded model I get: `FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[10], line 6 3 pixel_values = inputs.pixel_values 5 #generated_ids = model.generate(pixel_values=pixel_values, min_length=50, max_new_tokens=50, length_penalty=1.4, top_k=150, top_p=0.95, repetition_penalty=2.1, num_beams=5, temperature=0.75) ----> 6 generated_ids = model.generate(pixel_values=pixel_values, max_length=50) 7 generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] 8 print(generated_caption) File H:\CONDA\envs\blip\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File H:\CONDA\envs\blip\lib\site-packages\transformers\models\blip_2\modeling_blip_2.py:1854, in Blip2ForConditionalGeneration.generate(self, pixel_values, input_ids, attention_mask, **generate_kwargs) 1851 inputs_embeds = self.get_input_embeddings()(input_ids) 1852 inputs_embeds = torch.cat([language_model_inputs, inputs_embeds.to(language_model_inputs.device)], dim=1) -> 1854 outputs = self.language_model.generate( 1855 inputs_embeds=inputs_embeds, 1856 attention_mask=attention_mask, 1857 **generate_kwargs, 1858 ) 1860 return outputs File H:\CONDA\envs\blip\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File H:\CONDA\envs\blip\lib\site-packages\transformers\generation\utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1512 raise ValueError( 1513 "num_return_sequences has to be 1 when doing greedy search, " 1514 f"but is {generation_config.num_return_sequences}." 1515 ) 1517 # 11. 
run greedy search -> 1518 return self.greedy_search( 1519 input_ids, 1520 logits_processor=logits_processor, 1521 stopping_criteria=stopping_criteria, 1522 pad_token_id=generation_config.pad_token_id, 1523 eos_token_id=generation_config.eos_token_id, 1524 output_scores=generation_config.output_scores, 1525 return_dict_in_generate=generation_config.return_dict_in_generate, 1526 synced_gpus=synced_gpus, 1527 streamer=streamer, 1528 **model_kwargs, 1529 ) 1531 elif is_contrastive_search_gen_mode: 1532 if generation_config.num_return_sequences > 1: File H:\CONDA\envs\blip\lib\site-packages\transformers\generation\utils.py:2335, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2332 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2334 # forward pass to get next token -> 2335 outputs = self( 2336 **model_inputs, 2337 return_dict=True, 2338 output_attentions=output_attentions, 2339 output_hidden_states=output_hidden_states, 2340 ) 2342 if synced_gpus and this_peer_finished: 2343 continue # don't waste resources running the code we don't need File H:\CONDA\envs\blip\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File H:\CONDA\envs\blip\lib\site-packages\accelerate\hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File H:\CONDA\envs\blip\lib\site-packages\transformers\models\opt\modeling_opt.py:957, in OPTForCausalLM.forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 944 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) 945 outputs = self.model.decoder( 946 input_ids=input_ids, 947 attention_mask=attention_mask, (...) 954 return_dict=return_dict, 955 ) --> 957 logits = self.lm_head(outputs[0]).contiguous() 959 loss = None 960 if labels is not None: 961 # move labels to correct device to enable model parallelism File H:\CONDA\envs\blip\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File H:\CONDA\envs\blip\lib\site-packages\accelerate\hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File H:\CONDA\envs\blip\lib\site-packages\bitsandbytes\nn\modules.py:219, in Linear4bit.forward(self, x) 216 x = x.to(self.compute_dtype) 218 bias = None if self.bias is None else self.bias.to(self.compute_dtype) --> 219 out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state) 221 out = out.to(inp_dtype) 223 return out AttributeError: 'Parameter' object has no attribute 'quant_state'` ### Expected behavior 8 bit works fine
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23838/comments
https://api.github.com/repos/huggingface/transformers/issues/23838/events
https://github.com/huggingface/transformers/issues/23838
1,730,117,757
I_kwDOCUB6oc5nH4B9
23,838
Add EMD loss
{ "login": "wesboyt", "id": 30701972, "node_id": "MDQ6VXNlcjMwNzAxOTcy", "avatar_url": "https://avatars.githubusercontent.com/u/30701972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesboyt", "html_url": "https://github.com/wesboyt", "followers_url": "https://api.github.com/users/wesboyt/followers", "following_url": "https://api.github.com/users/wesboyt/following{/other_user}", "gists_url": "https://api.github.com/users/wesboyt/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesboyt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesboyt/subscriptions", "organizations_url": "https://api.github.com/users/wesboyt/orgs", "repos_url": "https://api.github.com/users/wesboyt/repos", "events_url": "https://api.github.com/users/wesboyt/events{/privacy}", "received_events_url": "https://api.github.com/users/wesboyt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### Feature request Could we import this file for 1D EMD? It's like KL divergence but allows us to better represent ordinal/numeric classes. https://github.com/TakaraResearch/Pytorch-1D-Wasserstein-Statistical-Loss/blob/master/pytorch_stats_loss.py is the best option I've seen online. ### Motivation I am currently using it locally for ordinal discrete density function approximation. ### Your contribution I'm not totally sure what's necessary to incorporate it into the currently available options throughout the codebase, but it shouldn't be hard to import it.
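For context, a minimal sketch (not taken from the linked repository) of what a 1D EMD loss over ordinal classes could look like, assuming unit spacing between classes; the function name and the batch-mean reduction are illustrative choices:

```python
import torch

def emd_loss_1d(pred_probs: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """1D Wasserstein-1 distance between probability vectors over ordered classes.

    Both inputs have shape (batch, num_classes); for a unit-spaced ordinal support,
    W1 reduces to the L1 distance between the two cumulative distributions.
    """
    pred_cdf = torch.cumsum(pred_probs, dim=-1)
    target_cdf = torch.cumsum(target_probs, dim=-1)
    return (pred_cdf - target_cdf).abs().sum(dim=-1).mean()

# Toy usage: 5 ordinal classes, batch of 2
logits = torch.randn(2, 5, requires_grad=True)
pred = torch.softmax(logits, dim=-1)
target = torch.tensor([[0.0, 0.0, 1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0, 0.0]])
loss = emd_loss_1d(pred, target)
loss.backward()
```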
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23838/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/23838/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23837/comments
https://api.github.com/repos/huggingface/transformers/issues/23837/events
https://github.com/huggingface/transformers/pull/23837
1,730,022,369
PR_kwDOCUB6oc5RkS-d
23,837
Fix floating point precision issue for RoPE
{ "login": "butsugiri", "id": 6701836, "node_id": "MDQ6VXNlcjY3MDE4MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/6701836?v=4", "gravatar_id": "", "url": "https://api.github.com/users/butsugiri", "html_url": "https://github.com/butsugiri", "followers_url": "https://api.github.com/users/butsugiri/followers", "following_url": "https://api.github.com/users/butsugiri/following{/other_user}", "gists_url": "https://api.github.com/users/butsugiri/gists{/gist_id}", "starred_url": "https://api.github.com/users/butsugiri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/butsugiri/subscriptions", "organizations_url": "https://api.github.com/users/butsugiri/orgs", "repos_url": "https://api.github.com/users/butsugiri/repos", "events_url": "https://api.github.com/users/butsugiri/events{/privacy}", "received_events_url": "https://api.github.com/users/butsugiri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23837). All of your documentation changes will be reflected on that endpoint.", "Thank you for the message.\r\nWhile I appreciate that we have to keep the compatibility with existing models on the hub, my understanding is that all existing models converted from NeoX all have this precision issue.\r\nI would like to explore alternative solutions to address this issue rather than simply closing the pull request. Is there any other approach we can consider to fix the problem?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey! If you want to fix the problem without having to close the PR you should be aiming for a full backward compatibility, add tests to make sure that you are fixing the issue in place, and that previous behaviour is not broken. " ]
1,685
1,688
1,688
NONE
null
## What does this PR do? This PR fixes the issue of floating point precision in `RotaryEmbedding`. The purpose of this PR is to fix inconsistency between GPT-Neo-X and HF Transformers, which is causing a model performance degradation. ## Issue In the current implementation of `RotaryEmbedding`, `inv_freq` is first initialized by float32. This value is then used for initializing `cos_cached` and `sin_cached` by float32. As a result, `cos_cached` and `sin_cached` remain float32 even if the model (including inv_freq) uses float16; this is because these two variables are not the target of dtype conversion of `half()` method Note that there is also a recomputation logic for these two variables, but it is very unlikely to occur https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L268 However, this implementation seems inconsistent to the one in the [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox) library. In their implementation, `cos_cached` and `sin_cached` are almost always recomputed in the [forward method](https://github.com/EleutherAI/gpt-neox/blob/23ad392fdfe0f0a22c986013013209a03b7c28a1/megatron/model/positional_embeddings.py#L51-L63). Thus, dtype of `cos_cached` and `sin_cached` are always consistent to the dtype of inv_freq. This inconsistency between two libraries (HF Transformers and GPT-Neo-X) causes the performance degradation of the model converted from gpt-neox. For example, the perplexity score of the language model on Wikitext corpus is as follows: - gpt-neo-x w/o conversion: 520.7840 - gpt-neo-x w/ conversion to HF format: 520.9911 - gpt-neo-x w/ conversion to HF format and this PR: 520.7840 (Sorry that the perplexity value is really bad. I am reporting the performance of model trained on toy data for debugging purpose) ## Solution I basically followed the previous PR https://github.com/huggingface/transformers/pull/22888 and made a similar fix. ## Possible Side Effect In the original code, `cos_cashed` and `sin_cashed` are initialized in the model consturctor. However, I had to move the initialization code to forward method. Otherwise the library gave me the following error: "cos_vml_cpu" not implemented for 'Half'. As a result, `torch.jit.trace` might be no longer available. Since I am not sure what jit.trace is, I don't have any workaround for this. ## Similar Issues - https://github.com/huggingface/transformers/pull/22888 - https://github.com/EleutherAI/gpt-neox/issues/873 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? - I would really appreciate it if the reviewers could point out the missing tests. ## Who can review? @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23837/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23837", "html_url": "https://github.com/huggingface/transformers/pull/23837", "diff_url": "https://github.com/huggingface/transformers/pull/23837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23837.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23836/comments
https://api.github.com/repos/huggingface/transformers/issues/23836/events
https://github.com/huggingface/transformers/issues/23836
1,729,933,207
I_kwDOCUB6oc5nHK-X
23,836
loading dataset
{ "login": "Deemo-cqs", "id": 64957826, "node_id": "MDQ6VXNlcjY0OTU3ODI2", "avatar_url": "https://avatars.githubusercontent.com/u/64957826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Deemo-cqs", "html_url": "https://github.com/Deemo-cqs", "followers_url": "https://api.github.com/users/Deemo-cqs/followers", "following_url": "https://api.github.com/users/Deemo-cqs/following{/other_user}", "gists_url": "https://api.github.com/users/Deemo-cqs/gists{/gist_id}", "starred_url": "https://api.github.com/users/Deemo-cqs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Deemo-cqs/subscriptions", "organizations_url": "https://api.github.com/users/Deemo-cqs/orgs", "repos_url": "https://api.github.com/users/Deemo-cqs/repos", "events_url": "https://api.github.com/users/Deemo-cqs/events{/privacy}", "received_events_url": "https://api.github.com/users/Deemo-cqs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep GitHub issues for bugs and feature requests only." ]
1,685
1,685
1,685
NONE
null
Is the dataset loaded into memory all at once or in batches? Why can a model with the same parameters be trained on a small dataset, while a large dataset fills up the memory?
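As a hedged aside (the maintainers redirect such questions to the forums), one common way to avoid materializing a large corpus fully in memory is the `datasets` streaming mode; the dataset name below is only an example:

```python
from datasets import load_dataset

# Stream examples lazily instead of loading the whole dataset into memory.
streamed = load_dataset("wikitext", "wikitext-103-raw-v1", split="train", streaming=True)
for i, example in enumerate(streamed):
    print(example["text"][:80])
    if i >= 2:
        break
```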
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23836/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23835/comments
https://api.github.com/repos/huggingface/transformers/issues/23835/events
https://github.com/huggingface/transformers/issues/23835
1,729,550,843
I_kwDOCUB6oc5nFtn7
23,835
Sliding window for finetuning
{ "login": "DanaTurkif", "id": 52151359, "node_id": "MDQ6VXNlcjUyMTUxMzU5", "avatar_url": "https://avatars.githubusercontent.com/u/52151359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DanaTurkif", "html_url": "https://github.com/DanaTurkif", "followers_url": "https://api.github.com/users/DanaTurkif/followers", "following_url": "https://api.github.com/users/DanaTurkif/following{/other_user}", "gists_url": "https://api.github.com/users/DanaTurkif/gists{/gist_id}", "starred_url": "https://api.github.com/users/DanaTurkif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanaTurkif/subscriptions", "organizations_url": "https://api.github.com/users/DanaTurkif/orgs", "repos_url": "https://api.github.com/users/DanaTurkif/repos", "events_url": "https://api.github.com/users/DanaTurkif/events{/privacy}", "received_events_url": "https://api.github.com/users/DanaTurkif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to ask such questions. The feature is implemented via `stride` and `return_overflowing_tokens` in tokenizers as you note.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### Feature request The sliding window feature is important since many models have a limited input size (BERT: 512 tokens). As far as I know, the feature is available when using the pipelines for inference, but not when fine-tuning. ### Motivation I'm trying to fine-tune BERT models since they reached state of the art in many NLP tasks, especially NER, and most of my documents are far larger than 512 tokens, so truncating them will ruin the context. ### Your contribution I've been trying to implement the sliding window manually by using the available features such as stride and return_overflowing_tokens.
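A minimal sketch of that manual sliding-window approach, assuming a fast BERT-style tokenizer; the window and stride sizes are arbitrary:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
long_text = "word " * 2000  # stand-in for a document far longer than 512 tokens

encodings = tokenizer(
    long_text,
    max_length=512,
    stride=128,                      # overlap between consecutive windows
    truncation=True,
    return_overflowing_tokens=True,  # one entry per overlapping window
    return_offsets_mapping=True,     # useful for realigning NER labels per window
)
print(len(encodings["input_ids"]), "windows of up to 512 tokens")
```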
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23835/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23834/comments
https://api.github.com/repos/huggingface/transformers/issues/23834/events
https://github.com/huggingface/transformers/issues/23834
1,729,536,037
I_kwDOCUB6oc5nFqAl
23,834
Parameter: encoder_no_repeat_ngram_size or something that makes model not repeat input tokens in the output.
{ "login": "Oxi84", "id": 25420033, "node_id": "MDQ6VXNlcjI1NDIwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Oxi84", "html_url": "https://github.com/Oxi84", "followers_url": "https://api.github.com/users/Oxi84/followers", "following_url": "https://api.github.com/users/Oxi84/following{/other_user}", "gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}", "starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions", "organizations_url": "https://api.github.com/users/Oxi84/orgs", "repos_url": "https://api.github.com/users/Oxi84/repos", "events_url": "https://api.github.com/users/Oxi84/events{/privacy}", "received_events_url": "https://api.github.com/users/Oxi84/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hey @Oxi84 👋 \r\n\r\nWithout a stand-alone short script to reproduce the issue (as well as the desired output), it is hard for me to help :)\r\n\r\nNevertheless, I suspect the keyword argument you want to use is `no_repeat_ngram_size`, and not `encoder_no_repeat_ngram_size`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### Feature request Could you please add more explanation in the docs about what encoder_no_repeat_ngram_size practically does? The docs say it makes sure that an n-gram of the specified length appearing in the encoder input ids does not repeat in the decoder input ids, but I have no idea how this parameter changes the decoder output ids. I use it with T5. ### Motivation When I set encoder_no_repeat_ngram_size=4, it mostly does not even repeat 2- and 3-grams in the output. ### Your contribution with torch.no_grad(): beam_outputs = model1a.generate( input_ids=input_ids, attention_mask=attention_masks, encoder_no_repeat_ngram_size=4, do_sample=False, num_return_sequences=4, num_beams=4, max_length=128 )
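Building on the maintainer's reply above, a hedged restatement of the two related arguments; the values are arbitrary, `model1a`, `input_ids` and `attention_masks` are the variables from the snippet above, and `torch` is assumed imported:

```python
with torch.no_grad():
    beam_outputs = model1a.generate(
        input_ids=input_ids,
        attention_mask=attention_masks,
        no_repeat_ngram_size=3,          # n-grams may not repeat within the generated output itself
        encoder_no_repeat_ngram_size=4,  # n-grams from the encoder input may not be generated
        do_sample=False,
        num_return_sequences=4,
        num_beams=4,
        max_length=128,
    )
```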
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23834/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23833/comments
https://api.github.com/repos/huggingface/transformers/issues/23833/events
https://github.com/huggingface/transformers/issues/23833
1,729,442,526
I_kwDOCUB6oc5nFTLe
23,833
[llama] AutoTokenizer does not add `eos_token` at the end
{ "login": "csyourui", "id": 23717487, "node_id": "MDQ6VXNlcjIzNzE3NDg3", "avatar_url": "https://avatars.githubusercontent.com/u/23717487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/csyourui", "html_url": "https://github.com/csyourui", "followers_url": "https://api.github.com/users/csyourui/followers", "following_url": "https://api.github.com/users/csyourui/following{/other_user}", "gists_url": "https://api.github.com/users/csyourui/gists{/gist_id}", "starred_url": "https://api.github.com/users/csyourui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csyourui/subscriptions", "organizations_url": "https://api.github.com/users/csyourui/orgs", "repos_url": "https://api.github.com/users/csyourui/repos", "events_url": "https://api.github.com/users/csyourui/events{/privacy}", "received_events_url": "https://api.github.com/users/csyourui/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hi,\r\n\r\nNote that it doesn't make sense to pass `use_fast` to the slow (Python-based) `LlamaTokenizer`. It only makes sense to pass use_fast to the `AutoTokenizer` class, which can either load the fast (Rust-based) `LlamaTokenizerFast` class or the slow (Python-based) `LlamaTokenizer`.\r\n\r\nIn the code snippet above, `auto_tokenizer` will be an instance of `LlamaTokenizerFast` and `llama_tokenizer` will be an instance of `LlamaTokenizer`:\r\n```\r\n>>> type(auto_tokenizer)\r\n<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>\r\n>>> type(llama_tokenizer)\r\n<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>\r\n```\r\n\r\nPinging @ArthurZucker regarding the `eos_token` issue", "> Hi,\r\n> \r\n> Note that it doesn't make sense to pass `use_fast` to the slow (Python-based) `LlamaTokenizer`. It only makes sense to pass use_fast to the `AutoTokenizer` class, which can either load the fast (Rust-based) `LlamaTokenizerFast` class or the slow (Python-based) `LlamaTokenizer`.\r\n> \r\n> In the code snippet above, `auto_tokenizer` will be an instance of `LlamaTokenizerFast` and `llama_tokenizer` will be an instance of `LlamaTokenizer`:\r\n> \r\n> ```\r\n> >>> type(auto_tokenizer)\r\n> <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>\r\n> >>> type(llama_tokenizer)\r\n> <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>\r\n> ```\r\n> \r\n> Pinging @ArthurZucker regarding the `eos_token` issue\r\n\r\nThank you so much for explaining this ~~~", "Hey! \r\nThanks for reporting. The quickest fix I can give you is to initialise the fast tokenizer from the slow one, using the correct arguments. \r\n```python \r\nfast = LlamaTokenizerFast.from_pretrained(\"huggyllama/llama-7b\", add_eos_token=True, from_slow=True)\r\n```\r\nThis will produce the expected outputs: \r\n```python \r\n>>> fast.encode(\"auto_tokenizer\", add_special_tokens = True)\r\n[1, 4469, 29918, 6979, 3950, 2]\r\n```\r\nThe reason behind this is that the `post_processor` is responsible of adding the `eos` and `bos` tokens. The processor is initialised when the slow tokenizer is converted to the fast version, and changing the argument on the fly will not result in a change of the processor. \r\n\r\nI'll open a PR to make sure that changing the eos and bos update the processor. Thanks for reporting. ", "For transformers v4.35.0, `LlamaTokenizerFast` still cannot encode `</s>` properly. I wonder if there are plans to fix this issue?", "Hello, this seems to work fine for me: \r\n```python \r\n>>> from transformers import AutoTokenizer \r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\") \r\n>>> tokenizer.encode(\"</s>\", add_special_tokens = False)) \r\n>>> tokenizer.encode(\"Hey</s>sir\", add_special_tokens = False)\r\n>>> tokenizer.encode(\"Hey</s>sir\", add_special_tokens = False)\r\n [18637, 2, 8889]\r\n>>> tokenizer.tokenize(\"Hey</s>\", add_special_tokens = False)\r\n['▁Hey', '</s>']\r\n```\r\nFor such an important model we try to fix this as soon as possible as it can impact training for example, would mind sharing a reproducer ? 
🤗 ", "@ArthurZucker \r\n\r\nI don't have access to `\"meta-llama/Llama-2-7b-hf\"`, but the following two llama / llama2 model gives me the same results.\r\n\r\nTransformers is installed via `pip install .` on commit `b8f1cde` and `tokenizer==0.14.1`\r\n\r\n```python\r\nimport transformers\r\nprint(transformers.__version__) # 4.35.0.dev0\r\nfrom transformers import AutoTokenizer \r\ns = 'huggyllama/llama-7b'\r\ns = \"NousResearch/Llama-2-7b-hf\"\r\ntokenizer = AutoTokenizer.from_pretrained(s) \r\nprint(tokenizer.encode(\"</s>\", add_special_tokens = False)) # [2]\r\nprint(tokenizer.tokenize(\"</s>\", add_special_tokens = False)) # ['▁</s>']\r\nprint(tokenizer.encode(\"Hey</s>sir\", add_special_tokens = False)) # [18637, 829, 29879, 29958, 29879, 381]\r\nprint(tokenizer.tokenize(\"Hey</s>sir\", add_special_tokens = False)) # ['▁Hey', '</', 's', '>', 's', 'ir']\r\n```", "That's expected if they did not update the `tokenizer.json` file to the correct normalisation. I would recommend you to open an issue on the hub as I don't maintain them 🤗 \r\n", "Thanks for the information. Just wondering what is the correct normalisation? I tried setting `normalized=False` for the special token `</s>` and that does not help\r\n", "`normalization=False` should be used. The way to set it on an already initialized tokenizer is the following:\r\n- Simple way:\r\n```python\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"NousResearch/Llama-2-7b-hf\", from_slow=True)\r\n>>> print(tokenizer.tokenize(\"Hey</s>sir\", add_special_tokens = False))\r\n['▁Hey', '</s>', '▁sir']\r\n```\r\n- After init: \r\n```python\r\n>>> from transformers import AddedToken\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"NousResearch/Llama-2-7b-hf\") \r\n>>> tokenizer.add_tokens(AddedToken(\"</s>\", normalized=False, special=True), special_tokens=True)\r\n\r\n>>> tokenizer.save_pretrained(\"/tmp/tokenizer-llama\")\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"/tmp/tokenizer-llama\") \r\n>>> print(tokenizer.tokenize(\"Hey</s>sir\", add_special_tokens = False))\r\n['▁Hey', '</s>', '▁sir']\r\n```\r\nThat is because fast tokenizers are supposed to be fixed after initialization. I'm planning on supporting the update without having to save/load the tokenizer but this was never possible before either. \r\n\r\n\r\n", "It works! Thanks!" ]
1,685
1,699
1,687
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code: ```python from transformers import AutoTokenizer, LlamaTokenizer auto_tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", add_eos_token=True, use_fast=True) llama_tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", add_eos_token=True, use_fast=True) print(auto_tokenizer.decode(auto_tokenizer.encode("auto_tokenizer", add_special_tokens = True))) print(llama_tokenizer.decode(llama_tokenizer.encode("llama_tokenizer", add_special_tokens = True))) ``` results: ```shell <s> auto_tokenizer <s> llama_tokenizer</s> ``` ### Expected behavior add eos token like: ```shell <s> auto_tokenizer</s> <s> llama_tokenizer</s> ```
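For reference, the workaround given in the comments above, condensed into a single runnable check:

```python
from transformers import LlamaTokenizerFast

# Rebuild the fast tokenizer from the slow one so the post-processor picks up add_eos_token=True.
fast = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", add_eos_token=True, from_slow=True)
ids = fast.encode("auto_tokenizer", add_special_tokens=True)
print(ids)               # [1, 4469, 29918, 6979, 3950, 2] per the discussion above
print(fast.decode(ids))  # expected: "<s> auto_tokenizer</s>"
```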
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23833/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23832/comments
https://api.github.com/repos/huggingface/transformers/issues/23832/events
https://github.com/huggingface/transformers/issues/23832
1,729,395,757
I_kwDOCUB6oc5nFHwt
23,832
In ViTForMaskedImageModeling, reconstructed_pixel_values has a different shape from the input when model.config.patch_size is not 16. This further triggers a loss error when patch_size is not 16 and bool_masked_pos is not None.
{ "login": "yrqUni", "id": 35153598, "node_id": "MDQ6VXNlcjM1MTUzNTk4", "avatar_url": "https://avatars.githubusercontent.com/u/35153598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yrqUni", "html_url": "https://github.com/yrqUni", "followers_url": "https://api.github.com/users/yrqUni/followers", "following_url": "https://api.github.com/users/yrqUni/following{/other_user}", "gists_url": "https://api.github.com/users/yrqUni/gists{/gist_id}", "starred_url": "https://api.github.com/users/yrqUni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yrqUni/subscriptions", "organizations_url": "https://api.github.com/users/yrqUni/orgs", "repos_url": "https://api.github.com/users/yrqUni/repos", "events_url": "https://api.github.com/users/yrqUni/events{/privacy}", "received_events_url": "https://api.github.com/users/yrqUni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "Hi @yrqUni, thanks for reporting this issue. \r\n\r\nDigging into this, the error is arising because the decoder head for the model is parametreized by `config.encoder_stride`, which controls the size of the upscaled image. When we update the patch size, in order to calculate the loss, the encoder stride needs to be updated to ensure the reconstructed image has the same resolution as the input. \r\n\r\nI've opened a PR to raise a warning if the loss calculation isn't possible with the configuration settings. " ]
1,685
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use ViTForMaskedImageModeling; set model.config.patch_size to anything other than 16; get an error. ### Expected behavior In ViTForMaskedImageModeling, reconstructed_pixel_values has a shape different from the input when model.config.patch_size is not 16. This further triggers a loss error when patch_size is not 16 and bool_masked_pos is not None.
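A short sketch of the configuration constraint described in the maintainer's reply above; patch size 8 is an arbitrary example:

```python
from transformers import ViTConfig, ViTForMaskedImageModeling

config = ViTConfig(
    image_size=224,
    patch_size=8,      # non-default patch size
    encoder_stride=8,  # must match patch_size so the decoder upscales back to 224x224
)
model = ViTForMaskedImageModeling(config)
```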
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23832/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23831/comments
https://api.github.com/repos/huggingface/transformers/issues/23831/events
https://github.com/huggingface/transformers/issues/23831
1,729,343,166
I_kwDOCUB6oc5nE66-
23,831
IndexError when training with GLUE dataset using pretrained from scratch ELECTRA.
{ "login": "saiefulEZO", "id": 71864271, "node_id": "MDQ6VXNlcjcxODY0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/71864271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saiefulEZO", "html_url": "https://github.com/saiefulEZO", "followers_url": "https://api.github.com/users/saiefulEZO/followers", "following_url": "https://api.github.com/users/saiefulEZO/following{/other_user}", "gists_url": "https://api.github.com/users/saiefulEZO/gists{/gist_id}", "starred_url": "https://api.github.com/users/saiefulEZO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saiefulEZO/subscriptions", "organizations_url": "https://api.github.com/users/saiefulEZO/orgs", "repos_url": "https://api.github.com/users/saiefulEZO/repos", "events_url": "https://api.github.com/users/saiefulEZO/events{/privacy}", "received_events_url": "https://api.github.com/users/saiefulEZO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem seems to be that you are using token IDs that are not accepted by your model. Are you sure your tokenzier length and model embedding size match?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
Hi guys, I have a problem in which i'm not sure how to solve. Short story is I pretrained ELECTRA from scratch, now I wanted to train and test with GLUE. I converted ELECTRA tf checkpoint to pytorch using `transformers/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py` then I run the GLUE test with this `transformers/examples/pytorch/text-classification/run_glue.py`. At 1st I ran with electra_small model, its working fine. However when I ran it with my model that I have pretrained from scratch it produced this error. python /Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py\ --model_name_or_path "/Users/nlplabo/Desktop/electra_pos" \ --task_name $TASK_NAME \ --ignore_mismatched_sizes true \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir "/Users/nlplabo/Desktop/electra_pos/result/testing"\ 05/28/2023 18:03:09 - WARNING - __main__ - Process rank: 0, device: cpu, n_gpu: 0distributed training: True, 16-bits training: False Downloading and preparing dataset glue/cola to /Users/nlplabo/.cache/huggingface/datasets/glue/cola/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad... Dataset glue downloaded and prepared to /Users/nlplabo/.cache/huggingface/datasets/glue/cola/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████| 3/3 [00:00<00:00, 2262.71it/s] [WARNING|modeling_utils.py:3175] 2023-05-28 18:03:13,062 >> Some weights of the model checkpoint at /Users/nlplabo/Desktop/retrying/electra_pos were not used when initializing ElectraForSequenceClassification: ['discriminator_predictions.dense.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.bias', 'discriminator_predictions.dense_prediction.weight'] - This IS expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3187] 2023-05-28 18:03:13,062 >> Some weights of ElectraForSequenceClassification were not initialized from the model checkpoint at /Users/nlplabo/Desktop/retrying/electra_pos and are newly initialized: ['classifier.out_proj.bias', 'classifier.dense.weight', 'classifier.out_proj.weight', 'classifier.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. 
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( 0%| | 1/804 [00:01<16:47, 1.25s/it]Traceback (most recent call last): File "/Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py", line 622, in <module> main() File "/Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py", line 530, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 1940, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 2735, in training_step loss = self.compute_loss(model, inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 2767, in compute_loss outputs = model(**inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 1004, in forward discriminator_hidden_states = self.electra( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 908, in forward hidden_states = self.embeddings( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 201, in forward inputs_embeds = self.word_embeddings(input_ids) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self 0%| | 1/804 [00:01<17:31, 1.31s/it] I tried to search for solutions out there, but I couldn't get it to work. Last choice is to pretrained again from scratch, but that would take too much time as my pc was not that strong. So I hope to make this model works. Thank you.
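A hedged sketch of the vocabulary/embedding check suggested in the reply above; the local path is the one from this report, and `resize_token_embeddings` is one possible remedy, not necessarily the fix the reporter needs:

```python
from transformers import AutoTokenizer, ElectraForSequenceClassification

model_path = "/Users/nlplabo/Desktop/electra_pos"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = ElectraForSequenceClassification.from_pretrained(model_path)

# "index out of range in self" usually means a token ID >= the embedding matrix size.
print(len(tokenizer), model.get_input_embeddings().num_embeddings)

# One possible remedy if the two numbers differ (alternatively, fix vocab_size in the converted config):
model.resize_token_embeddings(len(tokenizer))
```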
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23831/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23827/comments
https://api.github.com/repos/huggingface/transformers/issues/23827/events
https://github.com/huggingface/transformers/pull/23827
1,729,230,148
PR_kwDOCUB6oc5RhowI
23,827
Add saving to cpu for the state dict for fsdp
{ "login": "tokestermw", "id": 4722119, "node_id": "MDQ6VXNlcjQ3MjIxMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4722119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tokestermw", "html_url": "https://github.com/tokestermw", "followers_url": "https://api.github.com/users/tokestermw/followers", "following_url": "https://api.github.com/users/tokestermw/following{/other_user}", "gists_url": "https://api.github.com/users/tokestermw/gists{/gist_id}", "starred_url": "https://api.github.com/users/tokestermw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tokestermw/subscriptions", "organizations_url": "https://api.github.com/users/tokestermw/orgs", "repos_url": "https://api.github.com/users/tokestermw/repos", "events_url": "https://api.github.com/users/tokestermw/events{/privacy}", "received_events_url": "https://api.github.com/users/tokestermw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23827). All of your documentation changes will be reflected on that endpoint." ]
1,685
1,685
1,685
CONTRIBUTOR
null
(oops sorry didn't mean to PR onto main branch) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23827/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23827", "html_url": "https://github.com/huggingface/transformers/pull/23827", "diff_url": "https://github.com/huggingface/transformers/pull/23827.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23827.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23826/comments
https://api.github.com/repos/huggingface/transformers/issues/23826/events
https://github.com/huggingface/transformers/issues/23826
1,729,169,885
I_kwDOCUB6oc5nEQnd
23,826
[TensorFlow SAM] Internal operations in the prompt encoder fail during fine-tuning
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Investigating now - when I was porting the model I got the feeling it didn't even really support fine-tuning! Is there a PyTorch notebook where fine-tuning works @sayakpaul?", "Update: Some bits of the code were still using static shapes incorrectly! I've fixed it and I think your code sample should work now (there is a label shape issue, but I think that's not the model code's fault)", "> Investigating now - when I was porting the model I got the feeling it didn't even really support fine-tuning! Is there a PyTorch notebook where fine-tuning works @sayakpaul?\r\n\r\nHere you go: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb", "Thanks @Rocketknight1!\r\n\r\nSo, I had to transpose the predicted mask to have channels_last memory layout to make the loss computation work with Sparse Categorical Crossentropy:\r\n\r\n```python\r\nfrom tensorflow import keras\r\n\r\nclass SAMFineTuner(keras.Model):\r\n def __init__(self, sam, **kwargs):\r\n super().__init__(**kwargs)\r\n self.sam = sam\r\n\r\n def train_step(self, inputs):\r\n with tf.GradientTape() as tape:\r\n # Forward pass.\r\n outputs = self.sam(\r\n pixel_values=inputs[\"pixel_values\"],\r\n input_boxes=inputs[\"input_boxes\"],\r\n multimask_output=False\r\n )\r\n\r\n # Compute loss.\r\n predicted_masks = tf.squeeze(outputs.pred_masks, 1)\r\n predicted_masks = tf.transpose(predicted_masks, [0, 2, 3, 1])\r\n ground_truth_masks = tf.cast(inputs[\"ground_truth_mask\"], tf.float32)\r\n loss = self.compiled_loss(tf.expand_dims(ground_truth_masks, 1), predicted_masks)\r\n\r\n # Optimize the model.\r\n trainable_vars = self.sam.trainable_variables\r\n grads = tape.gradient(loss, trainable_vars)\r\n self.optimizer.apply_gradients(zip(grads, trainable_vars))\r\n\r\n # Reporting.\r\n return {m.name: m.result() for m in self.metrics}\r\n```\r\n\r\nBut we'll not be using SCCE anyway. \r\n\r\nClosing the issue, feel free to merge the PR :) ", "Dear @sayakpaul, @Rocketknight1 and @merveenoyan,\r\n\r\nMany thanks for your precious work.\r\n\r\nI am following https://keras.io/examples/vision/sam/ to finetune HF [TFSamModel](https://huggingface.co/docs/transformers/main/model_doc/sam#transformers.TFSamModel). The training correctly works and my custom loss decreases in the tested epochs, but when I infer the test images, the predictions before or after the finetuning are the same. \r\nTherefore, I compare the `sam.get_weights()` before and after the finetune, and indeed they are equal. I expected differences, am I right?\r\n\r\nI also tried to unident from `with tf.GradientTape() as tape:` \r\n```\r\n# calculate loss over predicted and ground truth masks\r\nloss = dice_loss(tf.expand_dims(ground_truth_masks, 1), predicted_masks)\r\n# update trainable variables\r\ntrainable_vars = sam.trainable_variables\r\ngrads = tape.gradient(loss, trainable_vars)\r\noptimizer.apply_gradients(zip(grads, trainable_vars))\r\nreturn loss\r\n```\r\nas described in [Customizing what happens in fit()] (https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit#a_first_simple_example), but the predictions are the same and the weights too.\r\n\r\nMy second approach was to override [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) [train_step](https://github.com/keras-team/keras/blob/v2.14.0/keras/engine/training.py#L1128) as TFSamModel subclass it. 
\r\n\r\nThe loss correctly decreases and the callbacks work but when the model weights before or after the finetune are the same.\r\n```\r\nEpoch 1/5\r\n17/Unknown - 39s 441ms/step - loss: 0.9899 - mean_io_u_3: 0.4663\r\nalpha updated to 0.9900000095367432\r\n17/17 [==============================] - 40s 505ms/step - loss: 0.9899 - mean_io_u_3: 0.4663\r\n\r\nEpoch 2/5\r\n17/17 [==============================] - ETA: 0s - loss: 0.9799 - mean_io_u_3: 0.4663\r\nalpha updated to 0.9800000190734863\r\n\r\n17/17 [==============================] - 8s 442ms/step - loss: 0.9799 - mean_io_u_3: 0.4663\r\nEpoch 3/5\r\n17/17 [==============================] - ETA: 0s - loss: 0.9698 - mean_io_u_3: 0.4663\r\nalpha updated to 0.9700000286102295\r\n\r\n17/17 [==============================] - 7s 441ms/step - loss: 0.9698 - mean_io_u_3: 0.4663\r\n\r\nEpoch 4/5\r\nalpha updated to 0.9600000381469727\r\n17/17 [==============================] - ETA: 0s - loss: 0.9599 - mean_io_u_3: 0.4663\r\n17/17 [==============================] - 8s 441ms/step - loss: 0.9599 - mean_io_u_3: 0.4663\r\n\r\nEpoch 5/5\r\nalpha updated to 0.9500000476837158\r\n17/17 [==============================] - ETA: 0s - loss: 0.9499 - mean_io_u_3: 0.4663\r\n17/17 [==============================] - 8s 442ms/step - loss: 0.9499 - mean_io_u_3: 0.4663\r\n*****\r\n**\r\n```\r\n\r\nDo you have any ideas?\r\n\r\nThank you.\r\n\r\nD", "Leaving that one to @sayakpaul and @merveenoyan since they wrote that example, but please ping me if you think there are any issues in the underlying model!" ]
1,685
1,699
1,685
MEMBER
null
@merveenoyan and I are trying to create a fine-tuning notebook for the TensorFlow variant of [SAM](https://huggingface.co/docs/transformers/main/model_doc/sam). After compiling the model, when trying to run the actual fine-tuning, it leads to: ``` TypeError: in user code: File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1284, in train_function * return step_function(self, iterator) File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1268, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1249, in run_step ** outputs = model.train_step(data) File "<ipython-input-10-3c1490d3fea1>", line 12, in train_step outputs = self.sam( File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/tmp/__autograph_generated_filel5x0ne4l.py", line 37, in tf__run_call_with_unpacked_inputs retval_ = ag__.converted_call(ag__.ld(func), (ag__.ld(self),), dict(**ag__.ld(unpacked_inputs)), fscope) File "/tmp/__autograph_generated_filem4r4bspv.py", line 195, in tf__call (sparse_embeddings, dense_embeddings) = ag__.converted_call(ag__.ld(self).prompt_encoder, (), dict(batch_size=ag__.converted_call(ag__.ld(shape_list), (ag__.ld(image_embeddings),), None, fscope)[0], input_points=ag__.ld(input_points), input_labels=ag__.ld(input_labels), input_boxes=ag__.ld(input_boxes), input_masks=ag__.ld(input_masks)), fscope) File "/tmp/__autograph_generated_file4_u_db6c.py", line 90, in tf__call ag__.if_stmt(ag__.ld(input_boxes) is not None, if_body_3, else_body_3, get_state_3, set_state_3, ('batch_size', 'sparse_embeddings'), 2) File "/tmp/__autograph_generated_file4_u_db6c.py", line 68, in if_body_3 box_embeddings = ag__.converted_call(ag__.ld(self)._embed_boxes, (ag__.ld(input_boxes),), None, fscope) File "/tmp/__autograph_generated_filehsxk3fhx.py", line 14, in tf___embed_boxes coords = ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(boxes), (ag__.ld(batch_size), ag__.ld(nb_boxes), 2, 2)), None, fscope) TypeError: Exception encountered when calling layer 'tf_sam_model' (type TFSamModel). in user code: File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py", line 1356, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 1433, in call * sparse_embeddings, dense_embeddings = self.prompt_encoder( File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler ** raise e.with_traceback(filtered_tb) from None File "/tmp/__autograph_generated_file4_u_db6c.py", line 90, in tf__call ag__.if_stmt(ag__.ld(input_boxes) is not None, if_body_3, else_body_3, get_state_3, set_state_3, ('batch_size', 'sparse_embeddings'), 2) File "/tmp/__autograph_generated_file4_u_db6c.py", line 68, in if_body_3 box_embeddings = ag__.converted_call(ag__.ld(self)._embed_boxes, (ag__.ld(input_boxes),), None, fscope) File "/tmp/__autograph_generated_filehsxk3fhx.py", line 14, in tf___embed_boxes coords = ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(boxes), (ag__.ld(batch_size), ag__.ld(nb_boxes), 2, 2)), None, fscope) TypeError: Exception encountered when calling layer 'prompt_encoder' (type TFSamPromptEncoder). 
in user code: File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 767, in call * box_embeddings = self._embed_boxes(input_boxes) File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 726, in _embed_boxes * coords = tf.reshape(boxes, (batch_size, nb_boxes, 2, 2)) TypeError: Failed to convert elements of (None, None, 2, 2) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes. ``` From the accompanying [Colab Notebook](https://colab.research.google.com/gist/sayakpaul/de59527f657d0461f46d9cb8c4a3884f/scratchpad.ipynb), one can check that there's nothing apparently off in the dataset we're passing to the trainer for fine-tuning: ```python for sample in train_ds.take(2): for k in sample: print(k, sample[k].shape, isinstance(sample[k], tf.Tensor)) ``` Leads to: ```bash pixel_values (2, 3, 1024, 1024) True original_sizes (2, 2) True reshaped_input_sizes (2, 2) True input_boxes (2, 1, 4) True ground_truth_mask (2, 256, 256) True pixel_values (2, 3, 1024, 1024) True original_sizes (2, 2) True reshaped_input_sizes (2, 2) True input_boxes (2, 1, 4) True ground_truth_mask (2, 256, 256) True ``` Anything we're missing out on? Cc: @Rocketknight1
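The `TypeError: Failed to convert elements of (None, None, 2, 2) to Tensor` in the trace is the usual symptom of building a reshape target from static dimensions inside a traced `tf.function`, where the batch dimension is `None`. The snippet below is only a generic TensorFlow illustration of that failure mode and of the dynamic-shape alternative; it is not the actual fix inside `modeling_tf_sam.py`.

```python
import tensorflow as tf

spec = [tf.TensorSpec(shape=(None, None, 4), dtype=tf.float32)]

@tf.function(input_signature=spec)
def embed_boxes_static(boxes):
    batch, nb = boxes.shape[0], boxes.shape[1]   # both None while tracing
    return tf.reshape(boxes, (batch, nb, 2, 2))  # raises: cannot convert None to a tensor

@tf.function(input_signature=spec)
def embed_boxes_dynamic(boxes):
    shape = tf.shape(boxes)                      # dynamic shape, concrete at run time
    return tf.reshape(boxes, (shape[0], shape[1], 2, 2))

boxes = tf.random.uniform((2, 1, 4))
print(embed_boxes_dynamic(boxes).shape)          # (2, 1, 2, 2); calling the static version fails when traced
```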
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23826/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23825/comments
https://api.github.com/repos/huggingface/transformers/issues/23825/events
https://github.com/huggingface/transformers/issues/23825
1,729,109,484
I_kwDOCUB6oc5nEB3s
23,825
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "rezz90", "id": 134093487, "node_id": "U_kgDOB_4arw", "avatar_url": "https://avatars.githubusercontent.com/u/134093487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rezz90", "html_url": "https://github.com/rezz90", "followers_url": "https://api.github.com/users/rezz90/followers", "following_url": "https://api.github.com/users/rezz90/following{/other_user}", "gists_url": "https://api.github.com/users/rezz90/gists{/gist_id}", "starred_url": "https://api.github.com/users/rezz90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rezz90/subscriptions", "organizations_url": "https://api.github.com/users/rezz90/orgs", "repos_url": "https://api.github.com/users/rezz90/repos", "events_url": "https://api.github.com/users/rezz90/events{/privacy}", "received_events_url": "https://api.github.com/users/rezz90/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23825/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23824/comments
https://api.github.com/repos/huggingface/transformers/issues/23824/events
https://github.com/huggingface/transformers/issues/23824
1,729,108,506
I_kwDOCUB6oc5nEBoa
23,824
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "rezz90", "id": 134093487, "node_id": "U_kgDOB_4arw", "avatar_url": "https://avatars.githubusercontent.com/u/134093487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rezz90", "html_url": "https://github.com/rezz90", "followers_url": "https://api.github.com/users/rezz90/followers", "following_url": "https://api.github.com/users/rezz90/following{/other_user}", "gists_url": "https://api.github.com/users/rezz90/gists{/gist_id}", "starred_url": "https://api.github.com/users/rezz90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rezz90/subscriptions", "organizations_url": "https://api.github.com/users/rezz90/orgs", "repos_url": "https://api.github.com/users/rezz90/repos", "events_url": "https://api.github.com/users/rezz90/events{/privacy}", "received_events_url": "https://api.github.com/users/rezz90/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23823/comments
https://api.github.com/repos/huggingface/transformers/issues/23823/events
https://github.com/huggingface/transformers/pull/23823
1,729,081,025
PR_kwDOCUB6oc5RhHrj
23,823
🌐 [i18n-KO] Translated `pad_truncation.mdx` to Korean
{ "login": "sim-so", "id": 96299403, "node_id": "U_kgDOBb1piw", "avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sim-so", "html_url": "https://github.com/sim-so", "followers_url": "https://api.github.com/users/sim-so/followers", "following_url": "https://api.github.com/users/sim-so/following{/other_user}", "gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}", "starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sim-so/subscriptions", "organizations_url": "https://api.github.com/users/sim-so/orgs", "repos_url": "https://api.github.com/users/sim-so/repos", "events_url": "https://api.github.com/users/sim-so/events{/privacy}", "received_events_url": "https://api.github.com/users/sim-so/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23823). All of your documentation changes will be reflected on that endpoint.", "연휴에 수고 많으셨습니다!\r\n위에서 댓글 남겨주셔서 더 수정할 부분은 없어 보입니다! ", "Could you review this PR? 😃 \r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated the `pad_truncation.mdx` file of the documentation to Korean. Thank you in advance for your review! ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23823/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23823", "html_url": "https://github.com/huggingface/transformers/pull/23823", "diff_url": "https://github.com/huggingface/transformers/pull/23823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23823.patch", "merged_at": 1685521440000 }
https://api.github.com/repos/huggingface/transformers/issues/23822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23822/comments
https://api.github.com/repos/huggingface/transformers/issues/23822/events
https://github.com/huggingface/transformers/issues/23822
1,729,074,427
I_kwDOCUB6oc5nD5T7
23,822
index out of range in self torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
{ "login": "siddhsql", "id": 127623723, "node_id": "U_kgDOB5tiKw", "avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddhsql", "html_url": "https://github.com/siddhsql", "followers_url": "https://api.github.com/users/siddhsql/followers", "following_url": "https://api.github.com/users/siddhsql/following{/other_user}", "gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions", "organizations_url": "https://api.github.com/users/siddhsql/orgs", "repos_url": "https://api.github.com/users/siddhsql/repos", "events_url": "https://api.github.com/users/siddhsql/events{/privacy}", "received_events_url": "https://api.github.com/users/siddhsql/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I cannot reproduce:\r\n```py\r\nfrom transformers import pipeline\r\nimport pandas as pd\r\n\r\n# prepare table + question\r\ndata = {\"Actors\": [\"Brad Pitt\", \"Leonardo Di Caprio\", \"George Clooney\"], \"Number of movies\": [\"87\", \"53\", \"69\"]}\r\ntable = pd.DataFrame.from_dict(data)\r\nquestion = \"how many movies does Leonardo Di Caprio have?\"\r\n\r\n# pipeline model\r\n# Note: you must to install torch-scatter first.\r\ntqa = pipeline(task=\"table-question-answering\", model=\"google/tapas-large-finetuned-wtq\")\r\n\r\n# result\r\n\r\nprint(tqa(table=table, query=question)['cells'][0])\r\n```\r\nworks without issue for me.", "try with more than 64 rows\n\nOn Tue, May 30, 2023 at 7:04 AM Sylvain Gugger ***@***.***>\nwrote:\n\n> I cannot reproduce:\n>\n> from transformers import pipelineimport pandas as pd\n> # prepare table + questiondata = {\"Actors\": [\"Brad Pitt\", \"Leonardo Di Caprio\", \"George Clooney\"], \"Number of movies\": [\"87\", \"53\", \"69\"]}table = pd.DataFrame.from_dict(data)question = \"how many movies does Leonardo Di Caprio have?\"\n> # pipeline model# Note: you must to install torch-scatter first.tqa = pipeline(task=\"table-question-answering\", model=\"google/tapas-large-finetuned-wtq\")\n> # result\n> print(tqa(table=table, query=question)['cells'][0])\n>\n> works without issue for me.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/23822#issuecomment-1568496021>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/A6NWEKZF4W7ZMAGNN5HKKF3XIX5AJANCNFSM6AAAAAAYRQMPSE>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info ``` - `transformers` version: 4.29.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.10.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the code [at](https://huggingface.co/tasks/table-question-answering): ``` from transformers import pipeline import pandas as pd # prepare table + question data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) question = "how many movies does Leonardo Di Caprio have?" # pipeline model # Note: you must install torch-scatter first. tqa = pipeline(task="table-question-answering", model="google/tapas-large-finetuned-wtq") # result print(tqa(table=table, query=question)['cells'][0]) ``` # Observed Behavior ``` Exception has occurred: IndexError (note: full exception trace is shown but execution is paused at: _run_module_as_main) index out of range in self File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 326, in forward embeddings += getattr(self, name)(token_type_ids[:, :, i]) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 965, in forward embedding_output = self.embeddings( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 1217, in forward outputs = self.tapas( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 142, in batch_inference return self.model(**inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 390, in _forward outputs = self.batch_inference(**model_inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1025, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) 
File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1100, in __call__ outputs = list(final_iterator) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 350, in __call__ results = super().__call__(pipeline_inputs, **kwargs) File "/llm/tapas-poc/sample1.py", line 12, in <module> preds = table_qa(bkgs_df_str,queries) File "/usr/local/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None, IndexError: index out of range in self ``` ### Expected behavior there should be no error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23822/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23821/comments
https://api.github.com/repos/huggingface/transformers/issues/23821/events
https://github.com/huggingface/transformers/pull/23821
1,729,073,701
PR_kwDOCUB6oc5RhGC5
23,821
T5 models
{ "login": "peter-sk", "id": 6168908, "node_id": "MDQ6VXNlcjYxNjg5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peter-sk", "html_url": "https://github.com/peter-sk", "followers_url": "https://api.github.com/users/peter-sk/followers", "following_url": "https://api.github.com/users/peter-sk/following{/other_user}", "gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions", "organizations_url": "https://api.github.com/users/peter-sk/orgs", "repos_url": "https://api.github.com/users/peter-sk/repos", "events_url": "https://api.github.com/users/peter-sk/events{/privacy}", "received_events_url": "https://api.github.com/users/peter-sk/received_events", "type": "User", "site_admin": false }
[ { "id": 5724035499, "node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub", "name": "Model on the Hub", "color": "9CA0E9", "default": false, "description": "" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23821). All of your documentation changes will be reflected on that endpoint.", "Hello. Could you update the PR name and description to better reflect the content of the PR? \r\nMoreover, could you explain the motivation behind adding this to transformers? Only adding it for the encoder seems like specific usecase on your par that can be overcome buy having your own version of the code. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ArthurZucker\r\nSorry for the late reply. I got caught up in exam corrections, conference travels, and other pre-holiday stuff. The use case for just using the encoder part is the following:\r\n\r\nf you want to evaluate or use T5 for Natural Language Understanding (NLU) tasks such as Named Entity Recognition or Extractive Question Answering, there are two possible avenues:\r\n1) Fine tune the model with in-context learning using few-shots prompting, i.e., train T5 to hopefully decode the desired result to a query, typically prefixing the prompt with \"task: \". This is how the model is already trained for some tasks.\r\n2) Use either the encoder, the decoder, or both stacks with different heads and fine-tune the weights of these heads.\r\n\r\nWhile avenue 1) is the bread and butter of seq2seq language models, avenue 2) has some distinct advantages, including but not limited to:\r\na) No need to post process unexpected answers. Seq2seq models that have been fine-tuned with few-shots prompting are prone to produce spurious/non-sensical/unexpected output sequences on some inputs. If the desired output for example is a sequence of word classes, we might expect that a prompt of \"word classes: I like my fish raw.\" would return the desired output \"pronoun[personal] verb[present] pronoun[possessive] noun[singular] adjective[descriptive]\". But the output might be \"pronouns are words that are used instead of nouns\" or \"dishwasher[soap] melon[hat]\".\r\nb) Integration of the NLU model into larger models. A Grammatical Error Correction (GEC) model a la Grammarly's GECToR might directly use the outputs of an NER model for word classes as parts of its inputs. Having to analyze/parse/sanitize the output sequence puts a stopper to jointly train/fine-tune such models. (This is by the way not a hypothetical use case.)\r\n\r\nNow, why would one just want to use the encoder part and not both the encoder and the decoder part? First, for NLU tasks, the encoder should by all means be the part that represents the \"understanding\" part of the seq2seq model. Second, adding the decoder additionally should not hurt in principle, but it is unlikely to improve the performance significantly, blows up the model size, and provides some interesting implementation challenges.\r\n\r\nRegarding the implementation challenges, I have implemented such encoder+decoder for NLU tasks models, but getting them to clear the tests of the transformers library is not trivial, as these models use the concatenation of the encoder's and decoder's output as input to the head (i.e., classification layer etc). 
An alternative is to just use the decoder outputs, but this feels less meaningful for NLU tasks.\r\n\r\nHow do we move forward? Would it help if I provided some benchmarks of using (i) encoder part only, (ii) decoder part only, (iii) encoder+decoder part concatenated, and (iv) encoder+decoder but only decoder part as input for the head?\r\n\r\nCheers,\r\nPeter", "Oh, well. I just saw that there is now a T5ForQuestionAnswering model. I will also review that.", "Hey! I think the best idea is to put your modifications on the hub! This would prevent you from having to go through all the hassle of passing the CI, and since it is your specific usage, it makes more sense. I invite you to follow [this tutorial](https://huggingface.co/docs/transformers/custom_models). Hope this will fit your usage! 🤗 ", "I can see that this pull request has been closed and is not updating.", "Hi Arthur,\r\n\r\nLet’s do that for now. And if it turns out that a particular model is wildly successful, I’ll make a pull request for it.\r\n\r\nCheers,\r\nPeter\r\n\r\nFrom: Arthur ***@***.***>\r\nDate: Friday, 21 July 2023 at 10.49\r\nTo: huggingface/transformers ***@***.***>\r\nCc: Peter Schneider-Kamp ***@***.***>, Author ***@***.***>\r\nSubject: Re: [huggingface/transformers] T5 models (PR #23821)\r\n\r\nHey! I think the best idea is to put your modifications on the hub! This would prevent you from having to go through all the hassle of passing the CI, and since it is your specific usage, it makes more sense. I invite you to follow this tutorial <https://huggingface.co/docs/transformers/custom_models> . Hope this will fit your usage! 🤗\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/23821#issuecomment-1645234058>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ABPCCTHXFIYBHNQQQYQAW23XRI67ZANCNFSM6AAAAAAYRQKCUI>.\r\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\r\n", "Perfect! 👍🏻 🤗 " ]
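For readers who want to experiment with option (i) above without waiting for dedicated classes, here is a rough sketch of an encoder-only T5 token classifier. It relies only on the existing `T5EncoderModel`; the checkpoint name, label count, and head are illustrative choices, not part of this PR.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, T5EncoderModel

class T5EncoderForTokenClassification(nn.Module):
    def __init__(self, name="t5-small", num_labels=9):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(name)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_labels)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5EncoderForTokenClassification()
batch = tokenizer(["I like my fish raw."], return_tensors="pt")
logits = model(**batch)
print(logits.shape)
```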
1,685
1,689
1,688
CONTRIBUTOR
null
# What does this PR do? Add models for queston answering, sequence classification, and token classification with T5 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23821/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23821", "html_url": "https://github.com/huggingface/transformers/pull/23821", "diff_url": "https://github.com/huggingface/transformers/pull/23821.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23821.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23820/comments
https://api.github.com/repos/huggingface/transformers/issues/23820/events
https://github.com/huggingface/transformers/pull/23820
1,729,060,474
PR_kwDOCUB6oc5RhDEr
23,820
Implement Lion (EvoLved Sign Optimizer)
{ "login": "dannyadkins", "id": 20865714, "node_id": "MDQ6VXNlcjIwODY1NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/20865714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dannyadkins", "html_url": "https://github.com/dannyadkins", "followers_url": "https://api.github.com/users/dannyadkins/followers", "following_url": "https://api.github.com/users/dannyadkins/following{/other_user}", "gists_url": "https://api.github.com/users/dannyadkins/gists{/gist_id}", "starred_url": "https://api.github.com/users/dannyadkins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dannyadkins/subscriptions", "organizations_url": "https://api.github.com/users/dannyadkins/orgs", "repos_url": "https://api.github.com/users/dannyadkins/repos", "events_url": "https://api.github.com/users/dannyadkins/events{/privacy}", "received_events_url": "https://api.github.com/users/dannyadkins/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23820). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR. Transformers is a library of models, not optimizers. All optimizers implemented inside the library are deprecated and we won't accept new ones. You can already use Lion via bitsandbytes for instance.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
# What does this PR do? Lion is a new optimizer from Google Brain that has seen early results improving on language modeling tasks: https://arxiv.org/abs/2302.06675 This PR implements Lion (unfused) as a drop-in replacement for Adam. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23820", "html_url": "https://github.com/huggingface/transformers/pull/23820", "diff_url": "https://github.com/huggingface/transformers/pull/23820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23820.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23819/comments
https://api.github.com/repos/huggingface/transformers/issues/23819/events
https://github.com/huggingface/transformers/issues/23819
1,729,053,307
I_kwDOCUB6oc5nD0J7
23,819
AttributeError: EagerTensor object has no attribute 'size'
{ "login": "siddhsql", "id": 127623723, "node_id": "U_kgDOB5tiKw", "avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddhsql", "html_url": "https://github.com/siddhsql", "followers_url": "https://api.github.com/users/siddhsql/followers", "following_url": "https://api.github.com/users/siddhsql/following{/other_user}", "gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions", "organizations_url": "https://api.github.com/users/siddhsql/orgs", "repos_url": "https://api.github.com/users/siddhsql/repos", "events_url": "https://api.github.com/users/siddhsql/events{/privacy}", "received_events_url": "https://api.github.com/users/siddhsql/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You are using TensorFlow inputs with a PyTorch model, this cannot work.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "can you tell me how to solve this problem" ]
1,685
1,703
1,688
NONE
null
### System Info ``` - `transformers` version: 4.29.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.10.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run the code [at](https://huggingface.co/docs/transformers/main/en/model_doc/tapas#transformers.TFTapasForQuestionAnswering): ``` from transformers import AutoTokenizer, TapasForQuestionAnswering import pandas as pd tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq") model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq") data = { "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Age": ["56", "45", "59"], "Number of movies": ["87", "53", "69"], } table = pd.DataFrame.from_dict(data) queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"] inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf") outputs = model(**inputs) logits = outputs.logits logits_aggregation = outputs.logits_aggregation ``` # Observed Result ``` % python sample2.py 2023-05-27 16:48:53.829758: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. Traceback (most recent call last): File "/llm/tapas-poc/sample2.py", line 16, in <module> outputs = model(**inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 1217, in forward outputs = self.tapas( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 928, in forward input_shape = input_ids.size() File "/llm/tapas-poc/.env/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 437, in __getattr__ raise AttributeError( AttributeError: EagerTensor object has no attribute 'size'. If you are looking for numpy-related methods, please run the following: from tensorflow.python.ops.numpy_ops import np_config np_config.enable_numpy_behavior() ``` ### Expected behavior there should be no error
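As the reply above notes, the snippet mixes `return_tensors="tf"` with the PyTorch `TapasForQuestionAnswering` class. Below is a sketch of the two consistent pairings, keeping the tensor framework and the model class aligned; it mirrors the example from the issue and is illustrative only.

```python
import pandas as pd
from transformers import AutoTokenizer, TapasForQuestionAnswering, TFTapasForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
table = pd.DataFrame.from_dict({"Actors": ["Brad Pitt"], "Age": ["56"]})
queries = ["How old is Brad Pitt?"]

# PyTorch pairing: PyTorch tensors into a PyTorch model.
pt_model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
pt_inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
pt_outputs = pt_model(**pt_inputs)

# TensorFlow pairing: TF tensors into the TF model class.
# (TF weights exist for this checkpoint; otherwise pass from_pt=True.)
tf_model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
tf_inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
tf_outputs = tf_model(**tf_inputs)
```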
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23819/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23818/comments
https://api.github.com/repos/huggingface/transformers/issues/23818/events
https://github.com/huggingface/transformers/issues/23818
1,728,955,795
I_kwDOCUB6oc5nDcWT
23,818
LLaMATokenizerFast works abnormally
{ "login": "jiangwangyi", "id": 39762734, "node_id": "MDQ6VXNlcjM5NzYyNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiangwangyi", "html_url": "https://github.com/jiangwangyi", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also have 2 questions related to `LlamaTokenizerFast`:\r\n\r\nFirst, loading a fast tokenizer from a saved slow one takes very long:\r\n\r\n```\r\nfrom transformers import LlamaTokenizer, LlamaTokenizerFast\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(\"huggyllama/llama-7b\")\r\ntokenizer.save_pretrained(\".\")\r\n\r\n# the following line takes > 1 min\r\nfast_tokenizer = LlamaTokenizerFast.from_pretrained(\".\")\r\n```\r\nThis is not the case for other tokenizers like `BertTokenizerFast`.\r\n\r\nSecond, for a new model I'm working on (#23460) I wonder how to get the same behaviour between slow and fast tokenizers for the following:\r\n```\r\nfrom transformers import LlamaTokenizer, LlamaTokenizerFast\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(\"huggyllama/llama-7b\", truncation_side=\"left\")\r\ntokenizer.add_special_tokens({\"pad_token\": \"[PAD]\"})\r\ntokenizer.add_special_tokens({\"bos_token\": \"</s>\"})\r\ntokenizer.add_special_tokens({\"eos_token\": \"</s>\"})\r\ntokenizer.add_special_tokens({\"unk_token\": \"</s>\"})\r\n\r\nfast_tokenizer = LlamaTokenizerFast.from_pretrained(\"huggyllama/llama-7b\", truncation_side=\"left\")\r\nfast_tokenizer.add_special_tokens({\"pad_token\": \"[PAD]\"})\r\nfast_tokenizer.add_special_tokens({\"bos_token\": \"</s>\"})\r\nfast_tokenizer.add_special_tokens({\"eos_token\": \"</s>\"})\r\nfast_tokenizer.add_special_tokens({\"unk_token\": \"</s>\"})\r\n\r\nprompt = \"What is unusual about this image?\"\r\n\r\nencoding = tokenizer(prompt, return_tensors=\"pt\")\r\n\r\nfast_encoding = fast_tokenizer(prompt, return_tensors=\"pt\")\r\n\r\nfor k,v in encoding.items():\r\n assert torch.allclose(fast_encoding[k], v)\r\n```\r\n=> this assertion fails since the input_ids differ:\r\n```\r\ntensor([[ 2, 1724, 338, 22910, 1048, 445, 1967, 29973]])\r\ntensor([[ 1, 1724, 338, 22910, 1048, 445, 1967, 29973]])\r\n```\r\n\r\n", "cc'ing @ArthurZucker and @Narsil here", "Hey! Thanks for opening this issue. \r\n - `return_token_type_ids` should be set to `None` by default but is updated with `\"token_type_ids\" in self.model_input_names`. This is specific to the fast tokenizer, and is a known difference. I am not sure why this was added only in the fast tokenizer but it's more than 2yo! \r\n - The BPE models splits on ` ` (spaces), before encoding the tokens. When converting the models from slow to fast the special tokens were added to the `BPE` vocabulary, with a score of `0`. We probably forgot to add them to the list of `additional_special_tokens`, which is why they are not properly split. ( quick fix: `t1.additional_special_tokens = [\"</s>, ... ]`) \r\n - @NielsRogge when you load a slow from a fast, it takes a long time because you need to convert the BPE sentenpiece model, which is very long. Nothing we can do about that. \r\n - About your second question, the best thing would be to open a new issue. Seems like it might be another slow/fast discrepency but you are not completely doing this the way the API is designed! (check that each call to add a token actively adds it!) ", "> Hey! Thanks for opening this issue.\r\n> \r\n> * `return_token_type_ids` should be set to `None` by default but is updated with `\"token_type_ids\" in self.model_input_names`. This is specific to the fast tokenizer, and is a known difference. I am not sure why this was added only in the fast tokenizer but it's more than 2yo!\r\n> * The BPE models splits on ` ` (spaces), before encoding the tokens. 
When converting the models from slow to fast the special tokens were added to the `BPE` vocabulary, with a score of `0`. We probably forgot to add them to the list of `additional_special_tokens`, which is why they are not properly split. ( quick fix: `t1.additional_special_tokens = [\"</s>, ... ]`)\r\n> * @NielsRogge when you load a slow from a fast, it takes a long time because you need to convert the BPE sentenpiece model, which is very long. Nothing we can do about that.\r\n> * About your second question, the best thing would be to open a new issue. Seems like it might be another slow/fast discrepency but you are not completely doing this the way the API is designed! (check that each call to add a token actively adds it!)\r\n\r\nIn the `tokenizer_config.json` of `huggyllama/llama-7b`, `</s>` is quite a special token (`eos_token`). Adding `</s>` to `t1.additional_special_tokens` does not fix the problem.", "Indeed, sorry for the confusion. I added a different token `<//s>` with `add_special_token` which worked as expected ( meaning whether there was a space or not, the output was properly encode) which is why the issue most probably lies with the handling of the special tokens ( maybe we should not have added them to the voab? I'll check). I'll dig into this! ", "@ArthurZucker How is the progress now?", "I am still working on this, top priority! My PR did not fix it yet, so I am opening a new on just for llama and will see for the other ones.", "> I am still working on this, top priority! My PR did not fix it yet, so I am opening a new on just for llama and will see for the other ones.\r\n\r\nThanks for working on this! I appreciate the update and look forward to getting the issue resolved.", "Update: in order to fix this, the `tokenizer.json` should be modified: the special tokens should not be normalized (so set `normalized = False`. There is a more profound issue, since the slow tokenizer is not bother by that and handles this differently. ", "@ArthurZucker\r\nMy transformer version is `4.30.1`. I do not change the `tokenizer_config.json`, instead I replace the default special tokens by `add_special_tokens` like\r\n```python\r\n>>> from transformers import AutoTokenizer\r\n>>> lt = AutoTokenizer.from_pretrained(\"huggyllama/llama-7b\")\r\n>>> lt\r\nLlamaTokenizerFast(name_or_path='huggyllama/llama-7b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False)\r\n>>> lt.add_special_tokens({\"bos_token\": \"<s>\", \"eos_token\": \"</s>\", \"unk_token\": \"<unk>\"})\r\n>>> lt\r\nLlamaTokenizerFast(name_or_path='huggyllama/llama-7b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False)\r\n>>> lt(\"ok</s>\")\r\n>>> {'input_ids': [1, 3431, 829, 29879, 29958], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}\r\n```\r\nIt seems that the problem still exists?", "Hey, as mentioned in #23889, as well as in #24042 the `tokenizer.json` has to be modified. 
I did not have time to open pr on all models yet, but you still have `normalized = True` on the special tokens, which is why they are split. \r\n", "> Hey, as mentioned in #23889, as well as in #24042 the `tokenizer.json` has to be modified. I did not have time to open pr on all models yet, but you still have `normalized = True` on the special tokens, which is why they are split.\r\n\r\nAs shown in your example in [#23889](https://github.com/huggingface/transformers/issues/23889#issuecomment-1584090357), if I do not modify the `tokenizer.json`, reseting the `bos_token` and `eos_token` when initializing the fast tokenizer or using the `add_special_tokens` method do not work (the `normalized=True` attribute still exists), even if the `special_tokens_dict` attribute has been changed to `{\"bos_token\": \"<s>\", \"eos_token\": \"</s>\"}`. Is that true?", "Yes. Basically, you have to correctly add the tokens when converting, ortherwise the underlying regex is not properly updated. We are thinking of adding a `update_tokens` feature, which would allow to modify a token that is already part of the vocab.\r\nSee the following problem: \r\n```python \r\n\r\nIn [2]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<//s>\", normalized = False)})\r\nOut[2]: 1\r\n\r\nIn [3]: lt.encode(\"Another tests<//s>\")\r\nOut[3]: [1, 7280, 6987, 32000]\r\n\r\nIn [4]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<//s>\", normalized = True)})\r\nOut[4]: 0\r\n\r\nIn [5]: lt.encode(\"Another tests<//s>\")\r\nOut[5]: [1, 7280, 6987, 32000]\r\n\r\nIn [6]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<///s>\", normalized = True)})\r\nOut[6]: 1\r\n\r\nIn [7]: lt.encode(\"Another tests<///s>\")\r\nOut[7]: [1, 7280, 6987, 29966, 6658, 29879, 29958]\r\n```", "> Yes. Basically, you have to correctly add the tokens when converting, ortherwise the underlying regex is not properly updated. We are thinking of adding a `update_tokens` feature, which would allow to modify a token that is already part of the vocab. See the following problem:\r\n> \r\n> ```python\r\n> In [2]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<//s>\", normalized = False)})\r\n> Out[2]: 1\r\n> \r\n> In [3]: lt.encode(\"Another tests<//s>\")\r\n> Out[3]: [1, 7280, 6987, 32000]\r\n> \r\n> In [4]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<//s>\", normalized = True)})\r\n> Out[4]: 0\r\n> \r\n> In [5]: lt.encode(\"Another tests<//s>\")\r\n> Out[5]: [1, 7280, 6987, 32000]\r\n> \r\n> In [6]: lt.add_special_tokens({\"eos_token\": AddedToken(\"<///s>\", normalized = True)})\r\n> Out[6]: 1\r\n> \r\n> In [7]: lt.encode(\"Another tests<///s>\")\r\n> Out[7]: [1, 7280, 6987, 29966, 6658, 29879, 29958]\r\n> ```\r\n\r\nThank you for your kind guidance!" ]
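Tying the thread together, a small sketch of the workaround discussed above: a special token added with `normalized=False` is matched as a single id by the fast tokenizer, whereas the pre-existing `</s>` keeps the flags baked into `tokenizer.json` until that file is regenerated.

```python
from transformers import AutoTokenizer
from tokenizers import AddedToken

lt = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True)

# A token added with normalized=False is encoded as one id (32000, the next free id).
lt.add_special_tokens({"eos_token": AddedToken("<//s>", normalized=False)})
print(lt.encode("I love you.<//s>"))

# The original "</s>" is still split into pieces until tokenizer.json itself
# marks it with normalized=False.
print(lt.encode("I love you.</s>"))
```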
1,685
1,686
1,686
CONTRIBUTOR
null
### System Info platform==Ubuntu18.04 python==3.10 transformers==4.29.2 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `</s>` is the special token of LLaMATokenizer(Fast), it is expected that `</s>` can be recognized as a single token when encoding the text. However, it can be shown that the two tokenizers behave differently: ```python >>> t1 = transformers.AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True) >>> t2 = transformers.AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False) >>> text = "I love you.</s>" >>> t1(text) >>> {'input_ids': [1, 306, 5360, 366, 21106, 29879, 29958], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} >>> t2(text) >>> {'input_ids': [1, 306, 5360, 366, 29889, 2], 'attention_mask': [1, 1, 1, 1, 1, 1]} ``` also, LLaMATokenizerFast returns `token_type_ids` but LLaMATokenizer does not. ### Expected behavior LLaMATokenizerFast to be consistent with LLaMATokenzier.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23818/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23818/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23817/comments
https://api.github.com/repos/huggingface/transformers/issues/23817/events
https://github.com/huggingface/transformers/pull/23817
1,728,847,415
PR_kwDOCUB6oc5RgU2i
23,817
🌐 [i18n-KO] Translated `document_question_answering.mdx` to Korean
{ "login": "jungnerd", "id": 46880056, "node_id": "MDQ6VXNlcjQ2ODgwMDU2", "avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungnerd", "html_url": "https://github.com/jungnerd", "followers_url": "https://api.github.com/users/jungnerd/followers", "following_url": "https://api.github.com/users/jungnerd/following{/other_user}", "gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions", "organizations_url": "https://api.github.com/users/jungnerd/orgs", "repos_url": "https://api.github.com/users/jungnerd/repos", "events_url": "https://api.github.com/users/jungnerd/events{/privacy}", "received_events_url": "https://api.github.com/users/jungnerd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing in favor of #24588" ]
1,685
1,690
1,688
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `document_question_answering.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of #20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [ ] Check for missing / redundant translations (번역 누락/중복 검사) - [ ] Grammar Check (맞춤법 검사) - [ ] Review or Add new terms to glossary (용어 확인 및 추가) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23817/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23817", "html_url": "https://github.com/huggingface/transformers/pull/23817", "diff_url": "https://github.com/huggingface/transformers/pull/23817.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23817.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23816/comments
https://api.github.com/repos/huggingface/transformers/issues/23816/events
https://github.com/huggingface/transformers/issues/23816
1,728,828,477
I_kwDOCUB6oc5nC9Q9
23,816
`MPTForCausalLM` does not support `device_map='auto'` yet.
{ "login": "harikc456", "id": 21287383, "node_id": "MDQ6VXNlcjIxMjg3Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/21287383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harikc456", "html_url": "https://github.com/harikc456", "followers_url": "https://api.github.com/users/harikc456/followers", "following_url": "https://api.github.com/users/harikc456/following{/other_user}", "gists_url": "https://api.github.com/users/harikc456/gists{/gist_id}", "starred_url": "https://api.github.com/users/harikc456/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harikc456/subscriptions", "organizations_url": "https://api.github.com/users/harikc456/orgs", "repos_url": "https://api.github.com/users/harikc456/repos", "events_url": "https://api.github.com/users/harikc456/events{/privacy}", "received_events_url": "https://api.github.com/users/harikc456/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! This is a duplicate of #23784 . Seems like we should give a more informative message. ", "This makes me think that we should add the ability to pass no split modules directly when calling from_pretrained for super users. There are more and more models that uses code on the Hub feature and this should make life much easier for these users (sometimes it takes a lot of time for the authors to approve / merge these PRs) wdyt @ArthurZucker @sgugger ?", "Hi @harikc456 \r\n\r\nYou can check: https://github.com/huggingface/transformers/pull/23896#issuecomment-1570036714 \r\n\r\nTo illustrate what should be done, [I made a PR on the Hub directly,](https://huggingface.co/mosaicml/mpt-7b/discussions/45) you can load the mpt-7b model as follows (until the authors will merge my PR):\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_name = 'mosaicml/mpt-7b'\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-neox-20b\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name, \r\n load_in_8bit=True,\r\n device_map=\"auto\",\r\n trust_remote_code=True,\r\n revision=\"pr/45\"\r\n)\r\n\r\nprompt = \"What is the boiling point of Nitrogen?\"\r\n\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(0)\r\nout = model.generate(input_ids)\r\nprint(tokenizer.decode(out[0], skip_special_tokens=True))\r\n```", "> you can load the mpt-7b model as follows (until the authors will merge my PR):\r\n\r\nTo clarify, the code will continue working after the PR is merged, but you will also be able to do the same thing without `revision=\"pr/45\"`." ]
1,685
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import AutoModelForCausalLM, AutoConfig from transformers import BitsAndBytesConfig model_name = 'mosaicml/mpt-7b-instruct' nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model_nf4 = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, offload_folder="offload", offload_state_dict = True, quantization_config=nf4_config) ``` Error ``` ValueError: MPTForCausalLM does not support `device_map='auto'` yet. ``` I saw a similar issue #22188 for `XGLMForCausalLM`. I couldn't find `MPTForCausalLM` in the repository. If the MPT model is not supported currently, is there any hack that I can use to get `accelerate` support? Thanks. ### Expected behavior Model gets loaded without any errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23816/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23815/comments
https://api.github.com/repos/huggingface/transformers/issues/23815/events
https://github.com/huggingface/transformers/issues/23815
1,728,779,544
I_kwDOCUB6oc5nCxUY
23,815
RuntimeError
{ "login": "lazytensor", "id": 130575977, "node_id": "U_kgDOB8huaQ", "avatar_url": "https://avatars.githubusercontent.com/u/130575977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lazytensor", "html_url": "https://github.com/lazytensor", "followers_url": "https://api.github.com/users/lazytensor/followers", "following_url": "https://api.github.com/users/lazytensor/following{/other_user}", "gists_url": "https://api.github.com/users/lazytensor/gists{/gist_id}", "starred_url": "https://api.github.com/users/lazytensor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazytensor/subscriptions", "organizations_url": "https://api.github.com/users/lazytensor/orgs", "repos_url": "https://api.github.com/users/lazytensor/repos", "events_url": "https://api.github.com/users/lazytensor/events{/privacy}", "received_events_url": "https://api.github.com/users/lazytensor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you let us know your version of Accelerate?", "same problem", "This worked at my Kaggle Notebook \r\n\r\nhttps://stackoverflow.com/questions/76363436/cannot-import-name-partialstate-from-accelerate-when-using-huggingface-pipel", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info Facing an error when doing `from transformers import Trainer`: `RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)` Here is the environment: - `transformers` version: 4.29.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.10 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.7 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Solution tried: `pip install -U accelerate`, but the error is still not resolved. ### Who can help? @pacman100 @ArthurZucker @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `from transformers import Trainer` ### Expected behavior Expecting no error
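Since the traceback points at the `accelerate` build the interpreter is actually importing, a quick hedged sanity check is to confirm the version and import path in the same process — on Kaggle/Colab the kernel usually needs a restart after `pip install -U accelerate` before the new build is picked up. The exact minimum accelerate version that ships `PartialState` is not asserted here.

```python
import accelerate

# Make sure this prints the upgraded version and the install path you expect,
# not a stale copy that was imported before the upgrade.
print(accelerate.__version__, accelerate.__file__)

try:
    from accelerate import PartialState  # only present in recent accelerate releases
except ImportError:
    print("PartialState missing -> restart the runtime after `pip install -U accelerate`")
else:
    from transformers import Trainer  # should now import without the RuntimeError
```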
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23815/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23814/comments
https://api.github.com/repos/huggingface/transformers/issues/23814/events
https://github.com/huggingface/transformers/issues/23814
1,728,766,718
I_kwDOCUB6oc5nCuL-
23,814
Adding GPTNeoX (Tensorflow version)
{ "login": "shivance", "id": 51750587, "node_id": "MDQ6VXNlcjUxNzUwNTg3", "avatar_url": "https://avatars.githubusercontent.com/u/51750587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shivance", "html_url": "https://github.com/shivance", "followers_url": "https://api.github.com/users/shivance/followers", "following_url": "https://api.github.com/users/shivance/following{/other_user}", "gists_url": "https://api.github.com/users/shivance/gists{/gist_id}", "starred_url": "https://api.github.com/users/shivance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivance/subscriptions", "organizations_url": "https://api.github.com/users/shivance/orgs", "repos_url": "https://api.github.com/users/shivance/repos", "events_url": "https://api.github.com/users/shivance/events{/privacy}", "received_events_url": "https://api.github.com/users/shivance/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "cc: @sgugger @NielsRogge ", "How would you run this model in Tensorflow though? I don't think it offers the same flexibility as PyTorch wrt to distributing layers on several devices.\r\n\r\ncc @Rocketknight1 ", "I believe a TF implementation with distribution of layers across devices is possible using [DTensor](https://www.tensorflow.org/guide/dtensor_overview). We've been in communication with the TF team about DTensor, but we haven't implemented a TF model with it yet. This could be a good model to try it with!", "Note that implementing our first DTensor model will probably be quite challenging @shivance - we'll support you if you try it, but you should expect that the PR will need changes to some of our tests or internal functions to support DTensor, and there'll be several rounds of iteration before it's ready.", "@sgugger @Rocketknight1 Check this out\r\nhttps://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/esm/modeling_tf_esm.py#L102", " I've seen the use of dtensor at KerasNLP deeply and I think I can start working on it if you and @sgugger give me an initial roadmap!", "Hi @shivance, the first task would be to make a port of GPT-NeoX to TF, and then we can start adding `DTensor` layout code to it. You can look at this [ongoing LLAMA PR](https://github.com/huggingface/transformers/pull/24375) to see what you need to do. The biggest thing you have to do is to make a conversion of the existing `modeling_gpt_neox.py` file to `modeling_tf_gpt_neox.py`, then import the classes from it and run some tests to see that it gives (approximately) the same outputs. Once that's done, we can talk about next steps!", "Hey @Rocketknight1 !\r\nI've worked on same as my Google Summer of Code project, check Tf version of GPT Neo X [here](https://github.com/keras-team/keras-nlp/pull/1056), I think what's needed is to do it in Huggingface Design." ]
1,685
1,689
null
NONE
null
### Model description Hugging Face has the GPT-NeoX model by EleutherAI. It's a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. However, Hugging Face currently has only a PyTorch implementation of the model. I would like to contribute its corresponding TensorFlow implementation. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23814/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/23813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23813/comments
https://api.github.com/repos/huggingface/transformers/issues/23813/events
https://github.com/huggingface/transformers/pull/23813
1,728,745,138
PR_kwDOCUB6oc5Rf_An
23,813
[MMS] Scaling Speech Technology to 1,000+ Languages | Add attention adapter to Wav2Vec2
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @patrickvonplaten - Thanks for working on this and I reviewed the options provided. I believe the second one would work best from a developer standpoint. IMO it ensures that all the adapter weights are in one repository and it all works the way it should, should someone want to use a different language with the base model.\r\n\r\nI am not a big fan of option 1 because it would make it difficult for a model to run in a resource-constrained environment.\r\n\r\nI am a bit conflicted with option 3, primarily because it involves the end-user having the same experience with Wav2Vec2 without worrying about the specific language adapter layers and so on. Although having 1000+ repos for the same sounds a bit wasteful IMO.\r\n\r\nQuestion: How would this work for fine-tuning, I am assuming if someone fine-tunes the Wav2Vec2-MMS on a language \"X\" then they'll push their adapter weights to a new repo and pull from that. So that'd mean that purely from a UX perspective, we should allow for the `load_adapter` function to be able to pull from a separate repository too right?", "I think 2 is probably the better solution, and I would also make it possible to set the lang in the `from_pretrained` call:\r\n```py\r\nfrom transformers import Wav2Vec2ForCTC, AutoProcessor\r\n\r\nckpt = \"./mms-1b-all/\"\r\n\r\nprocessor = AutoProcessor.from_pretrained(ckpt)\r\nmodel = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang=\"esp\")\r\n\r\nprocessor.set_lang(\"esp\")\r\n\r\nmodel.to(\"cuda\")\r\n\r\n### Stuff\r\n# want to change the language:\r\nmodel.load_adapter(\"fra\") \r\n```", "+1 on the composite solution proposed by @sgugger. Regarding fine-tuning @Vaibhavs10, users will save both the fine-tuned base weights and adapter layer weights to the same repo (this is different to PEFT where we only save the adapter weights, since here the base weights are also trainable. The way to view the adapter layer is as a extra small feed-forward network on top of the transformer block, so a regular layer of weights rather than a parameter efficient one), so probably we can assume we're loading the base weights and adapter weights from the same repo.", "Agreed, with all above - 2 would be my choice:\r\n\r\n* 1 doesn't feel very user friendly. I'd expect most people would only use a consistent subset so downloading everything is slow and wasteful. \r\n* 2 feels the most intuitive with the current API and flexible. Seconding @Vaibhavs10's questions about finetuning, pushing to the hub and loading finetuned weights. If we load model weights from `mms-1b-fl102` and want our own finetuned adapter weights, how do I specify when loading and how is this information saved? How would we differentiate weights such that when I call `model.push_to_hub` the adapter weights are uploaded separately from the rest of the model (pattern matching?) Should the adapter weights be tied to a specific version of the 'base model' weights?\r\n* 3 Probably simplest to do - but seems like a waste with many repeated weights. 
", "I'll leave more in-detail functionality for fine-tuning adapter weights for a future PR, but in short we can already do the following:\r\n\r\n```py\r\nfrom transformers import Wav2Vec2ForCTC\r\n\r\nckpt = \"patrickvonplaten/mms-1b\"\r\nmodel = Wav2Vec2ForCTC.from_pretrained(ckpt, num_attn_adapters=1, vocab_size=277)\r\n\r\nadapter_keys = set(model._adapters.keys())\r\nfor name, param in model.named_parameters():\r\n if name not in adapter_keys:\r\n param.requires_grad = False\r\n```\r\n\r\nSo once we add adapter fine-tuning to the wav2vec2 fine-tuning script, we could also add a simple \"freeze_all_but_adapter()\" function or something.", "The code is now finished. I still need to upload the adapters for the smaller checkpoints, transfer them to Facebook and write some nice docs.\r\n\r\n**All modeling files except Wav2Vec2 are changed due to the #Copied from mechanism**. I think this is better than removing the copy-from mechanism, but happy to change.", "Could I get a final review here @sgugger @amyeroberts ? Once approved, I'll move the facebook checkpoints, add some more examples to the docs & fix the doc test.\r\n\r\nThe final question here for me is whether I should:\r\na) Keep # Copied from at the expense of adding currently not used code to Hubert etc...\r\nb) Remove # Copied fromr\r\nc) Adapt the config of Hubert etc.. as well so that one could fine-tune Hubert with this adapter training going forward.\r\n\r\nCurrently I have a) implemented as it's the safest option IMO. Happy to hear your opinion though.", "'Wav2Vec2Processor' object has no attribute 'set_lang'" ]
1,685
1,685
1,685
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the MMS models fine-tuned on speech recognition. See official announcement here: https://about.fb.com/news/2023/05/ai-massively-multilingual-speech-technology/ See more details here: https://github.com/facebookresearch/fairseq/blob/main/examples/mms/README.md#asr Fixes #23811 and #23665 For now checkpoints are uploaded here: ## Pretrained-only - https://huggingface.co/patrickvonplaten/mms-300m - https://huggingface.co/patrickvonplaten/mms-1b ## ASR fine-tuned - https://huggingface.co/patrickvonplaten/mms-1b-fl102 - https://huggingface.co/patrickvonplaten/mms-1b-l1107 - https://huggingface.co/patrickvonplaten/mms-1b-all The fine-tuned checkpoints are based on **Adapter** layers as can be seen in this PR. The ASR fine-tuned weights consist of two parts: - The non-adapter weights which are exactly the same as the base model weights - Language specific fine-tuned adapter layer weights. This means we have 1000+ adapter weights for `mms-1b-all` If one wants to use a specific language, specific adapter weights need to be loaded into `mms-1b-all`. By default `mms-1b-all` et. al load the English adapter layer weights as is currently done in https://huggingface.co/patrickvonplaten/mms-1b-all The following works with this PR: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor import soundfile as sf import torch ckpt = "./mms-1b-fl102/" ckpt = "./mms-1b-l1107" ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # get audio.flac from https://huggingface.co/datasets/patrickvonplaten/audios/blob/main/audio.flac audio, sr = sf.read("./audio.flac") inputs = processor(audio, sampling_rate=sr, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.argmax(-1))[0] print(f"Transcription: {transcription}") ``` Now, the question what API to we want to build for allow the user to easily switch between languages for the fine-tuned weights. **Note**: - To switch from one language to another, both the tokenizer's vocab and the model's adapter layers need to be switched out - The tokenizer can always easily hold all langs dicts in RAM because each lang has around 150 entries so we have 150,000 entries which is not too much for RAM - **However**, things are a bit more tricky for the model. The base model requires 3.1 GB in FP32 RAM and each adapter weights are around 9MB in size. This means loading all adapter layers into RAM would cost ~9GB which is quite a bit. How should we design this model? We need to have some kind of switching between languages function anyways. 
I see the following APIs that could work. ### 1.) By default, we download **all** adapter layers and load all of them in RAM, but we provide functionality to remove all languages but one from RAM: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires at least 10GB of CPU RAM target_lang = "esp" processor.set_lang("esp") adapter_id = processor.lang_to_id["esp"] model.set_adapter_weights(adapter_id) # throw away all but one set of adapter weights => 3.1GB of CPU RAM model.to("cuda") ``` A problem with this, though, is that it's not trivial to switch between languages, because one needs to load the whole model again and then set the language again. Also we would have to add a `set_adapter_weights` function to Wav2Vec2, which is not ideal. ### 2.) By default we only load the adapter weights of one language (e.g. English) and then load more adapter layers upon request: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires only 3GB of CPU RAM target_lang = "esp" processor.set_lang("esp") model.load_adapter("esp") # This will load a file called "adapter.esp.bin" from: https://huggingface.co/patrickvonplaten/mms-1b-all , cache it and replace the adapter model.to("cuda") ``` I think this is quite user-friendly and intuitive, and this way we also never require more than 3.1 GB of RAM. It does, however, require adding a pretty specific `load_adapter` function to Wav2Vec2 (I think that's fine though). ### 3.) We just upload 1000+ repos, one for each language. This way we don't need any "set" or "load" function and we just treat each set of adapter weights as its own model: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all-esp/" # repo names then become lang specific processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires only 3GB of CPU RAM model.to("cuda") ``` The big disadvantage is that it's pretty wasteful, since an adapter layer is just 0.3% of all the model's weights. => Overall, I'm leaning towards API **2.)** because it's the most user-friendly and intuitive. It'd just require adding a somewhat specific "load_adapter" function to Wav2Vec2, but I think that's totally fine. Thoughts @sanchit-gandhi @Vaibhavs10 @sgugger @LysandreJik @amyeroberts ?
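For option 2.), a hedged sketch of what `load_adapter` could do under the hood: pull a per-language weight file from the same repo and swap it into the already-loaded model. The `adapter.<lang>.bin` file name follows the comment in the snippet above, but the exact file layout and key names are assumptions for illustration, not the merged implementation.

```python
import torch
from huggingface_hub import hf_hub_download

def load_adapter(model, repo_id: str, target_lang: str):
    """Swap only the adapter weights of `model` for the ones of `target_lang` (sketch)."""
    adapter_file = hf_hub_download(repo_id, f"adapter.{target_lang}.bin")  # assumed file name
    adapter_state = torch.load(adapter_file, map_location="cpu")
    # strict=False: every non-adapter key is reported as missing and left untouched;
    # only the ~9MB of language-specific adapter weights get replaced.
    model.load_state_dict(adapter_state, strict=False)
    return model
```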
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23813/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23813", "html_url": "https://github.com/huggingface/transformers/pull/23813", "diff_url": "https://github.com/huggingface/transformers/pull/23813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23813.patch", "merged_at": 1685698225000 }
https://api.github.com/repos/huggingface/transformers/issues/23812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23812/comments
https://api.github.com/repos/huggingface/transformers/issues/23812/events
https://github.com/huggingface/transformers/pull/23812
1,728,647,129
PR_kwDOCUB6oc5RfqB-
23,812
Add support for HYBRID_SHARD and _HYBRID_SHARD_ZERO2 in the trainer
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Requesting for review.", "@pacman100 Gentle remainder.", "Hello @raghavanone, after PR #23158, the FSDP logic is being handled by Accelerate. So, these changes should reflect in Accelerate if they aren't being supported by FSDP-XLA ", "> Hello @raghavanone, after PR #23158, the FSDP logic is being handled by Accelerate. So, these changes should reflect in Accelerate if they aren't being supported by FSDP-XLA\r\n\r\nSure, I can make the changes there too, any pointers on things to do in accelerate will of great help.", "Hello @raghavanone, the above PR adds this functionality to the Accelerate which now powers Trainer—also made you the co-author of it. Thank you for all the effort and helping us keep up-to-date with the PyTorch FSDP.", "> Hello @raghavanone, after PR #23158, the FSDP logic is being handled by Accelerate. So, these changes should reflect in Accelerate if they aren't being supported by FSDP-XLA\r\n\r\nHi @pacman100, I have a question. Does this PR #23158 only work for accelerating launch? I am trying to use hybrid shared by using torchrun with trainer args \"torchrun --fsdp hybrid shared ...\". It seems that I need to merge #23812. Is there a better solution without modifying the transformers code?" ]
1,685
1,691
1,686
CONTRIBUTOR
null
#21156
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23812/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23812", "html_url": "https://github.com/huggingface/transformers/pull/23812", "diff_url": "https://github.com/huggingface/transformers/pull/23812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23812.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23811/comments
https://api.github.com/repos/huggingface/transformers/issues/23811/events
https://github.com/huggingface/transformers/issues/23811
1,728,544,164
I_kwDOCUB6oc5nB32k
23,811
Metas MMS speech recognition
{ "login": "LYPinASR", "id": 112866899, "node_id": "U_kgDOBro2Uw", "avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LYPinASR", "html_url": "https://github.com/LYPinASR", "followers_url": "https://api.github.com/users/LYPinASR/followers", "following_url": "https://api.github.com/users/LYPinASR/following{/other_user}", "gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}", "starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions", "organizations_url": "https://api.github.com/users/LYPinASR/orgs", "repos_url": "https://api.github.com/users/LYPinASR/repos", "events_url": "https://api.github.com/users/LYPinASR/events{/privacy}", "received_events_url": "https://api.github.com/users/LYPinASR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of #23665", "@NielsRogge \r\nHi, Can you tell me how far along we are and about how long it will be ready for us to use? Thank you!\r\n\r\n", "PR merged.\r\n\r\nAlso see:\r\n- https://huggingface.co/docs/transformers/main/en/model_doc/mms\r\n- https://github.com/huggingface/transformers/pull/23813\r\n- https://huggingface.co/facebook/mms-1b-all", "> PR merged.\r\n> \r\n> Also see:\r\n> \r\n> * https://huggingface.co/docs/transformers/main/en/model_doc/mms\r\n> * [[MMS] Scaling Speech Technology to 1,000+ Languages | Add attention adapter to Wav2Vec2 #23813](https://github.com/huggingface/transformers/pull/23813)\r\n> * https://huggingface.co/facebook/mms-1b-all\r\n\r\nCan I use my own dataset instead of the dataset \"mozilla_foundation_common voice_6.1\", which you have shown in tutorial [https://huggingface.co/blog/mms_adapters](url) ? If so then , how. Thanks", "Sure, you just need to load your own dataset, maybe this helps: https://huggingface.co/docs/datasets/v2.13.1/en/audio_load", "> Sure, you just need to load your own dataset, maybe this helps: https://huggingface.co/docs/datasets/v2.13.1/en/audio_load\r\nThank you for your kind reply, although I have gone through the suggested tutorial but it didn't help, and ...\r\nI have just uploaded some demo dataset here u can check it: [https://huggingface.co/datasets/rashmi035/MKB_Hindi_2023](url) ,As you can see the audio is visible in the dataset viewer but the corresponding ngram is not visible, can you help me with this?" ]
1,685
1,690
1,685
NONE
null
### Feature request We request a simpler and more convenient inference process for a speech recognition model based on MMS, just like wav2vec 2.0 in Transformers. ### Motivation We aim to encapsulate the various subroutines called by Facebook’s official model into a direct speech recognition model that is as easy to use as other transformer-based models like wav2vec 2.0. But we also know that the Hugging Face team has been among the industry leaders in this area of work. ### Your contribution We recognize that it may not be feasible for us to directly assist the Hugging Face technical team in this task. We believe that such an effort would be forward-looking given the popularity of MMS in current speech recognition research. The resulting model would be ideal for quickly transcribing our meeting notes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23811/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23810/comments
https://api.github.com/repos/huggingface/transformers/issues/23810/events
https://github.com/huggingface/transformers/issues/23810
1,728,515,314
I_kwDOCUB6oc5nBwzy
23,810
How to convert flax model to pytorch?
{ "login": "GuodongFan", "id": 11190486, "node_id": "MDQ6VXNlcjExMTkwNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/11190486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuodongFan", "html_url": "https://github.com/GuodongFan", "followers_url": "https://api.github.com/users/GuodongFan/followers", "following_url": "https://api.github.com/users/GuodongFan/following{/other_user}", "gists_url": "https://api.github.com/users/GuodongFan/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuodongFan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuodongFan/subscriptions", "organizations_url": "https://api.github.com/users/GuodongFan/orgs", "repos_url": "https://api.github.com/users/GuodongFan/repos", "events_url": "https://api.github.com/users/GuodongFan/events{/privacy}", "received_events_url": "https://api.github.com/users/GuodongFan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "how to convert a .msgpack (flax/jax format) model weights to .bin (huggingface format) or .pth (pytorch format)", "@jzssz please refrain from asking the same question on so many issues 😅 I answered https://github.com/huggingface/transformers/issues/26813#issuecomment-1835703292 " ]
1,685
1,701
1,688
NONE
null
### Feature request run_t5_mlm_flax.py only generates a `flax_model.msgpack`, but I want to obtain a `pytorch_model.bin`. ### Motivation Convert the msgpack checkpoint to a PyTorch bin. ### Your contribution None
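Not a new feature so much as an existing cross-framework path: `from_pretrained(..., from_flax=True)` can read the `flax_model.msgpack` written by `run_t5_mlm_flax.py`, and `save_pretrained` then writes `pytorch_model.bin`. A minimal sketch, assuming the Flax run's output directory is `./t5-mlm-output` and that both `flax` and `torch` are installed:

```python
from transformers import T5ForConditionalGeneration

output_dir = "./t5-mlm-output"  # assumed: contains config.json and flax_model.msgpack

# Load the Flax weights into the PyTorch model class, then re-save in PyTorch format.
model = T5ForConditionalGeneration.from_pretrained(output_dir, from_flax=True)
model.save_pretrained(output_dir)  # writes pytorch_model.bin next to the msgpack
```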
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23810/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23809/comments
https://api.github.com/repos/huggingface/transformers/issues/23809/events
https://github.com/huggingface/transformers/issues/23809
1,728,475,548
I_kwDOCUB6oc5nBnGc
23,809
object of type 'IterableDataset' has no len()
{ "login": "johnchienbronci", "id": 27708347, "node_id": "MDQ6VXNlcjI3NzA4MzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnchienbronci", "html_url": "https://github.com/johnchienbronci", "followers_url": "https://api.github.com/users/johnchienbronci/followers", "following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}", "gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions", "organizations_url": "https://api.github.com/users/johnchienbronci/orgs", "repos_url": "https://api.github.com/users/johnchienbronci/repos", "events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}", "received_events_url": "https://api.github.com/users/johnchienbronci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @johnchienbronci - could you verify that all the columns `raw_column_names[split] + [\"target_text\"]` are in `raw_datasets[split]` for each split prior to calling the `.map` method? Failing that, could you paste the results of:\r\n```\r\ntransformers-cli env\r\n```\r\n\r\nAnd also provide a reproducible code-snippet so that I can run the script locally on my end and reproduce the error result? Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
**run script**: run_speech_recognition_ctc_streaming.py (Multi GPU CTC with Dataset Streaming) ``` for split, dataset in raw_datasets.items(): vectorized_datasets[split] = ( dataset.map(prepare_dataset) .remove_columns(raw_column_names[split] + ["target_text"]) .with_format("torch") ) if split == "train": vectorized_datasets[split] = vectorized_datasets[split].shuffle( buffer_size=data_args.shuffle_buffer_size, seed=training_args.seed, ) .... trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=vectorized_datasets["train"] if training_args.do_train else None, eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, tokenizer=processor, callbacks=[ShuffleCallback()], ) train_result = trainer.train(resume_from_checkpoint=checkpoint) ``` **Error**: ``` Traceback (most recent call last): File "/usr/local/bin/wav2vec2/run_speech_recognition_ctc_streaming.py", line 702, in <module> main() File "/usr/local/bin/wav2vec2/run_speech_recognition_ctc_streaming.py", line 656, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.10/dist-packages/transformers-4.30.0.dev0-py3.10.egg/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/usr/local/lib/python3.10/dist-packages/transformers-4.30.0.dev0-py3.10.egg/transformers/trainer.py", line 1909, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 633, in __next__ data = self._next_data() File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 677, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 981, in __iter__ for key, example in ex_iterable: File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 647, in __iter__ for x in self.ex_iterable: File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 512, in __iter__ if self.remove_columns: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataset.py", line 276, in __len__ total += len(d) # type: ignore[arg-type] TypeError: object of type 'IterableDataset' has no len() ``` I'm try fix but error change to "_IterDataPipeSerializationWrapper' object has no attribute 'set_epoch" ``` trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=IterableWrapper(vectorized_datasets["train"]) if training_args.do_train else None, eval_dataset=IterableWrapper(vectorized_datasets["eval"]) if training_args.do_eval else None, tokenizer=processor, callbacks=[ShuffleCallback()], ) ``` Any thoughts on this?
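Following the maintainer's request earlier in this record to verify that `raw_column_names[split] + ["target_text"]` actually exist before the `.map`, a hedged way to inspect a streaming split (an `IterableDataset` generally has no usable `column_names` or `len()`) is to pull one example. The snippet is a diagnostic sketch meant to be dropped into the script right before the `.map` call and only uses the script's own variables:

```python
# Diagnostic sketch: check which columns each streaming split really has.
for split, dataset in raw_datasets.items():
    first = next(iter(dataset))                  # streaming datasets are iterated, not indexed
    present = set(first.keys())
    wanted = set(raw_column_names[split] + ["target_text"])
    print(split, "missing columns:", wanted - present or "none")
```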
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23809/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23808/comments
https://api.github.com/repos/huggingface/transformers/issues/23808/events
https://github.com/huggingface/transformers/issues/23808
1,728,447,545
I_kwDOCUB6oc5nBgQ5
23,808
Why don't we set use_cache=False in default when training?
{ "login": "Coldog2333", "id": 29707234, "node_id": "MDQ6VXNlcjI5NzA3MjM0", "avatar_url": "https://avatars.githubusercontent.com/u/29707234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Coldog2333", "html_url": "https://github.com/Coldog2333", "followers_url": "https://api.github.com/users/Coldog2333/followers", "following_url": "https://api.github.com/users/Coldog2333/following{/other_user}", "gists_url": "https://api.github.com/users/Coldog2333/gists{/gist_id}", "starred_url": "https://api.github.com/users/Coldog2333/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Coldog2333/subscriptions", "organizations_url": "https://api.github.com/users/Coldog2333/orgs", "repos_url": "https://api.github.com/users/Coldog2333/repos", "events_url": "https://api.github.com/users/Coldog2333/events{/privacy}", "received_events_url": "https://api.github.com/users/Coldog2333/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We can't change the models as it would be a breaking change, sadly." ]
1,685
1,685
1,685
NONE
null
### Feature request Let's take GPT-2 as an example; in the current implementation (modeling_gpt2.py: Line 856~861): ``` if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." ) use_cache = False ``` Why don't we set it like this: ``` if self.training: # As long as the model is being trained, we set use_cache=False. if use_cache: logger.warning_once( "`use_cache=True` makes no sense when training. Setting `use_cache=False`..." ) use_cache = False ``` Because when training, `use_cache=True` makes no sense (at least for decoder-only auto-regressive models), and if you use gradient_checkpointing, it should be under training instead of inference. ### Motivation Hello contributors, I realize that we set `use_cache=True` by default for almost all the transformer-based models. I understand that it can speed up generation by reusing the cache from the previous step. However, when training (pretraining or fine-tuning), we don't need it, and it consumes a lot of memory when processing a very long sequence, especially when the model is very large. But it does not provide any advantage during training. ### Your contribution If needed, I can help to correct this for all models in the transformers library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23808/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23808/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23807/comments
https://api.github.com/repos/huggingface/transformers/issues/23807/events
https://github.com/huggingface/transformers/issues/23807
1,728,405,594
I_kwDOCUB6oc5nBWBa
23,807
Same size on memory usage when loading gpt2 from torch.float16 and 8bit Quantization
{ "login": "czhang-trinity", "id": 35928300, "node_id": "MDQ6VXNlcjM1OTI4MzAw", "avatar_url": "https://avatars.githubusercontent.com/u/35928300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czhang-trinity", "html_url": "https://github.com/czhang-trinity", "followers_url": "https://api.github.com/users/czhang-trinity/followers", "following_url": "https://api.github.com/users/czhang-trinity/following{/other_user}", "gists_url": "https://api.github.com/users/czhang-trinity/gists{/gist_id}", "starred_url": "https://api.github.com/users/czhang-trinity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/czhang-trinity/subscriptions", "organizations_url": "https://api.github.com/users/czhang-trinity/orgs", "repos_url": "https://api.github.com/users/czhang-trinity/repos", "events_url": "https://api.github.com/users/czhang-trinity/events{/privacy}", "received_events_url": "https://api.github.com/users/czhang-trinity/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @czhang-trinity \r\nThanks for the issue, running your script on the current main branch gives me:\r\n\r\n```bash \r\n8 bit: 261462552\r\n16 bit: 261462552\r\n```\r\n\r\nThis is because GPT2 uses Conv1D in replacement to all linear layers. Therefore the 8bit conversion ends up converting no Linear layers !\r\n```bash\r\nGPT2LMHeadModel(\r\n (transformer): GPT2Model(\r\n (wte): Embedding(50257, 768)\r\n (wpe): Embedding(1024, 768)\r\n (drop): Dropout(p=0.1, inplace=False)\r\n (h): ModuleList(\r\n (0-11): 12 x GPT2Block(\r\n (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (attn): GPT2Attention(\r\n (c_attn): Conv1D()\r\n (c_proj): Conv1D()\r\n (attn_dropout): Dropout(p=0.1, inplace=False)\r\n (resid_dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (mlp): GPT2MLP(\r\n (c_fc): Conv1D()\r\n (c_proj): Conv1D()\r\n (act): NewGELUActivation()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n )\r\n (lm_head): Linear(in_features=768, out_features=50257, bias=False)\r\n)\r\n```" ]
1,685
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?:No ### Who can help? @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM from transformers import BitsAndBytesConfig import torch model_8bit = AutoModelForCausalLM.from_pretrained( "gpt2", load_in_8bit=True, device_map='auto', ) print('8 bit: ', model_8bit.get_memory_footprint()) model_float16 = AutoModelForCausalLM.from_pretrained( "gpt2", load_in_8bit=False, device_map='auto', torch_dtype=torch.float16 ) print('float16: ', model_float16.get_memory_footprint()) ``` ### Expected behavior Result is 8 bit: 274045464 float16: 261462552 Should see 8bit smaller than float16
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23807/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23806/comments
https://api.github.com/repos/huggingface/transformers/issues/23806/events
https://github.com/huggingface/transformers/issues/23806
1,728,334,603
I_kwDOCUB6oc5nBEsL
23,806
bnb_4bit for Flan-T5-XL/XXL? Can't load on Colab T4...
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can load using b16 sharded version." ]
1,685
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger (I think) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Not sure this is the best place to raise the issue - please feel free to redirect. Following your great [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA blog](https://huggingface.co/blog/4bit-transformers-bitsandbytes), I used your example [Basic usage Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf?usp=sharing), modified the model_id to "google/flan-t5-xl" (the 3B model) as follows. However, the Colab session crashes (free T4 GPU). According to the blog, T5 is supported, but I may be missing something. Please point me in the right direction? ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig from transformers import T5ForConditionalGeneration model_id = "google/flan-t5-xl" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) tokenizer = AutoTokenizer.from_pretrained(model_id) model_4bit = T5ForConditionalGeneration.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto") # crashes - runs out of memory ``` ### Expected behavior I had expected the model to load so I could test it. I'd like to finetune Flan-T5 using LoRA. In case the settings for T5 need to be something specific, it would be helpful to know what they are. Thanks.
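The author's resolution in the comments ("Can load using b16 sharded version") corresponds to loading a copy of the checkpoint re-saved in bfloat16 with small shards, so the free Colab CPU RAM never has to hold the full fp32 state dict at once. A hedged sketch — the sharded repo id below is a placeholder, not an official checkpoint name:

```python
import torch
from transformers import T5ForConditionalGeneration, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder id: any re-upload of flan-t5-xl saved in bf16 with small shards,
# e.g. produced once on a machine with enough RAM via:
#   model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", torch_dtype=torch.bfloat16)
#   model.save_pretrained("flan-t5-xl-bf16-sharded", max_shard_size="2GB")
sharded_id = "your-user/flan-t5-xl-bf16-sharded"

model_4bit = T5ForConditionalGeneration.from_pretrained(
    sharded_id, quantization_config=bnb_config, device_map="auto"
)
```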
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23806/timeline
completed
null
null
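The closing comment on the record above reports that the bf16 sharded checkpoint loads where the fp32 one crashes. A minimal sketch of that route combined with 4-bit loading follows; the sharded repo id is an assumed placeholder (any bf16-sharded mirror of flan-t5-xl would do) and is not confirmed by the thread.

```python
# Sketch only: load Flan-T5-XL in 4-bit from a bf16-sharded checkpoint so that no
# single shard exhausts Colab host RAM while the weights are being read.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig, T5ForConditionalGeneration

model_id = "ybelkada/flan-t5-xl-sharded-bf16"  # assumed placeholder for a sharded mirror

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = T5ForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```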
https://api.github.com/repos/huggingface/transformers/issues/23805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23805/comments
https://api.github.com/repos/huggingface/transformers/issues/23805/events
https://github.com/huggingface/transformers/pull/23805
1,728,281,601
PR_kwDOCUB6oc5RecSO
23,805
[WIP] CI/CD Testing
{ "login": "AdnaneKhan", "id": 2006441, "node_id": "MDQ6VXNlcjIwMDY0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2006441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdnaneKhan", "html_url": "https://github.com/AdnaneKhan", "followers_url": "https://api.github.com/users/AdnaneKhan/followers", "following_url": "https://api.github.com/users/AdnaneKhan/following{/other_user}", "gists_url": "https://api.github.com/users/AdnaneKhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdnaneKhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdnaneKhan/subscriptions", "organizations_url": "https://api.github.com/users/AdnaneKhan/orgs", "repos_url": "https://api.github.com/users/AdnaneKhan/repos", "events_url": "https://api.github.com/users/AdnaneKhan/events{/privacy}", "received_events_url": "https://api.github.com/users/AdnaneKhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23805/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23805", "html_url": "https://github.com/huggingface/transformers/pull/23805", "diff_url": "https://github.com/huggingface/transformers/pull/23805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23805.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23803/comments
https://api.github.com/repos/huggingface/transformers/issues/23803/events
https://github.com/huggingface/transformers/pull/23803
1,728,204,789
PR_kwDOCUB6oc5ReL1T
23,803
[WIP] CI/CD Testing
{ "login": "AdnaneKhan", "id": 2006441, "node_id": "MDQ6VXNlcjIwMDY0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2006441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdnaneKhan", "html_url": "https://github.com/AdnaneKhan", "followers_url": "https://api.github.com/users/AdnaneKhan/followers", "following_url": "https://api.github.com/users/AdnaneKhan/following{/other_user}", "gists_url": "https://api.github.com/users/AdnaneKhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdnaneKhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdnaneKhan/subscriptions", "organizations_url": "https://api.github.com/users/AdnaneKhan/orgs", "repos_url": "https://api.github.com/users/AdnaneKhan/repos", "events_url": "https://api.github.com/users/AdnaneKhan/events{/privacy}", "received_events_url": "https://api.github.com/users/AdnaneKhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,685
1,685
1,685
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23803/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23803", "html_url": "https://github.com/huggingface/transformers/pull/23803", "diff_url": "https://github.com/huggingface/transformers/pull/23803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23803.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23802/comments
https://api.github.com/repos/huggingface/transformers/issues/23802/events
https://github.com/huggingface/transformers/pull/23802
1,728,135,473
PR_kwDOCUB6oc5Rd8jM
23,802
[WIP] Add CLIPViP
{ "login": "tensorpro", "id": 23471886, "node_id": "MDQ6VXNlcjIzNDcxODg2", "avatar_url": "https://avatars.githubusercontent.com/u/23471886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tensorpro", "html_url": "https://github.com/tensorpro", "followers_url": "https://api.github.com/users/tensorpro/followers", "following_url": "https://api.github.com/users/tensorpro/following{/other_user}", "gists_url": "https://api.github.com/users/tensorpro/gists{/gist_id}", "starred_url": "https://api.github.com/users/tensorpro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tensorpro/subscriptions", "organizations_url": "https://api.github.com/users/tensorpro/orgs", "repos_url": "https://api.github.com/users/tensorpro/repos", "events_url": "https://api.github.com/users/tensorpro/events{/privacy}", "received_events_url": "https://api.github.com/users/tensorpro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23802). All of your documentation changes will be reflected on that endpoint.", "@tensorpro Thank you very much for this PR and adding this great model!\r\nLet us know when this is ready for review and feel free to ping me if you have any issues or questions about the implementation in the meantime. ", "Thanks @amyeroberts!\r\nThis PR should be pretty close. I need to add a custom processor for CLIPViP to setup the preprocessing for videos and improve documentation. But after that, it should be ready to review.\r\n", "Are there bugs in [modeling_outputs'](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/modeling_outputs.py#L46-L47) type hints for attentions/hidden outputs? It seems like we want `Tuple[FloatTensor,...]` instead of the`Tuple[FloatTensor]`? \r\nThe current type hint makes it seem like we should return a tuple with a single`FloatTensor` instead of a tuple with an arbitrary number of float tensors.", "@tensorpro Good spot - yes, the returned tuples can have a variable number of items. In practice, we're not running e.g. mypy against the repo so shouldn't break things but, as you point out, could be misleading. Would you like to open an issue or PR addressing this? ", "Ah thanks for clearing it up! I only caught it cause my editor was complaining about the return value types.\r\n\r\nAnd I'd be happy to make a PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry about the delay, but I got around to adding a CLIPViPProcessor + improving some of the docs.\r\n\r\nMerging with the recent changes in [24306](https://github.com/huggingface/transformers/pull/24306) was a bit confusing, since it wasn't clear from the error messages I was getting that we need to add `_set_token_in_kwargs` to the config. Would it make sense to identify these errors and add something like \"you may need to add _set_token_in_kwargs\" to the error messages?\r\n\r\nAlso, I think the PR is ready for review though I will be refining the docs a bit more.", "I need to update the of the docstring examples that use images as examples to use videos instead, it should be ready for review after that though.", "@tensorpro Great :) Ping me when that's done and I'll review 👍 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,691
1,691
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #22829 ## Before submitting - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22829 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Docs are WIP I haven't updated the docstrings yet. - [X] Did you write any new necessary tests? Integration test still needs some work. It's basically just the CLIP integration test, it doesn't test how video retrieval would work ## Who can review? @NielsRogge maybe others?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23802", "html_url": "https://github.com/huggingface/transformers/pull/23802", "diff_url": "https://github.com/huggingface/transformers/pull/23802.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23802.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23801/comments
https://api.github.com/repos/huggingface/transformers/issues/23801/events
https://github.com/huggingface/transformers/issues/23801
1,728,113,365
I_kwDOCUB6oc5nAOrV
23,801
Training siamese (biencoder) based transformer model with gradient checkpointing throws error
{ "login": "sachinya00", "id": 45940252, "node_id": "MDQ6VXNlcjQ1OTQwMjUy", "avatar_url": "https://avatars.githubusercontent.com/u/45940252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinya00", "html_url": "https://github.com/sachinya00", "followers_url": "https://api.github.com/users/sachinya00/followers", "following_url": "https://api.github.com/users/sachinya00/following{/other_user}", "gists_url": "https://api.github.com/users/sachinya00/gists{/gist_id}", "starred_url": "https://api.github.com/users/sachinya00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sachinya00/subscriptions", "organizations_url": "https://api.github.com/users/sachinya00/orgs", "repos_url": "https://api.github.com/users/sachinya00/repos", "events_url": "https://api.github.com/users/sachinya00/events{/privacy}", "received_events_url": "https://api.github.com/users/sachinya00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker and @younesbelkada ", "@sachinya00 What does your code look like, including training setup and training args?", "I've updated the post with the code to reproduce the same", "Hey, thanks for providing a reproduction script. \r\nBased on the provided traceback it seems like the issue lies with `DDP` that is asking you to use `_set_static_graph()`. Did that work for you? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The issue is very similar to the below one and I'm not able to make it work even with _set_static_graph()\r\n[https://discuss.pytorch.org/t/distributed-data-parallel-with-triplet-transformer-model/137347/4](url)" ]
1,685
1,693
1,688
NONE
null
### System Info PyTorch Lightning Version 1.6.5 Torch 1.13.0 Python version 3.8 CUDA Version: 11.4 4 NVIDIA A100-SXM4-40GBs transformers 4.24.0 ### Reproduction After adding `model.gradient_checkpointing_enable()` to the training code, throwing below error ```RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations``` The workaround to fix this is add `use_reentrant=False` in the below file. https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L600 ``` layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, use_reentrant=False ) ``` What's the best way to fix this? instead of adding the above flag manually in the source code ### Expected behavior adding `model.gradient_checkpointing_enable()` shouldn't throw any error ### Code to reproduce ```import torch import torch.nn as nn from torch.utils.data import DataLoader, Dataset from transformers import BertTokenizer, BertModel import pytorch_lightning as pl # Sample data class SampleDataset(Dataset): def __init__(self): self.data = [ ("I love coding", "I enjoy programming", 1), ("Python is great", "Java is popular", 0), ("Deep learning is fascinating", "Machine learning is interesting", 1), ("I prefer cats", "I like dogs", 0) ] self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') def __len__(self): return len(self.data) def __getitem__(self, idx): text1, text2, label = self.data[idx] encoded_text1 = self.tokenizer.encode_plus(text1, add_special_tokens=True, padding='max_length', max_length=128, truncation=True) encoded_text2 = self.tokenizer.encode_plus(text2, add_special_tokens=True, padding='max_length', max_length=128, truncation=True) input_ids1 = torch.tensor(encoded_text1['input_ids']) attention_mask1 = torch.tensor(encoded_text1['attention_mask']) input_ids2 = torch.tensor(encoded_text2['input_ids']) attention_mask2 = torch.tensor(encoded_text2['attention_mask']) return (input_ids1, attention_mask1), (input_ids2, attention_mask2), label # Define your LightningModule class SiameseBiEncoder(pl.LightningModule): def __init__(self): super(SiameseBiEncoder, self).__init__() self.bert = BertModel.from_pretrained('bert-base-uncased') self.hidden_size = self.bert.config.hidden_size self.cosine_similarity = nn.CosineSimilarity(dim=1) self.criterion = nn.BCELoss() def forward(self, input_ids, attention_mask): outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask) pooled_output = outputs.pooler_output return pooled_output def training_step(self, batch, batch_idx): (input_ids1, attention_mask1), (input_ids2, attention_mask2), labels = batch embeddings1 = self.forward(input_ids1, attention_mask1) embeddings2 = self.forward(input_ids2, attention_mask2) similarity_scores = self.cosine_similarity(embeddings1, embeddings2) loss = self.criterion(similarity_scores, labels.float()) self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): (input_ids1, attention_mask1), (input_ids2, attention_mask2), labels = batch embeddings1 = self.forward(input_ids1, attention_mask1) embeddings2 = self.forward(input_ids2, attention_mask2) similarity_scores = self.cosine_similarity(embeddings1, embeddings2) loss = self.criterion(similarity_scores, labels.float()) self.log('val_loss', loss) return loss def configure_optimizers(self): optimizer = torch.optim.AdamW(self.parameters(), lr=2e-5) return optimizer # Create the LightningDataModule class SampleDataModule(pl.LightningDataModule): def __init__(self, batch_size=4): super(SampleDataModule, self).__init__() self.batch_size = batch_size def setup(self, stage=None): self.train_dataset = SampleDataset() self.val_dataset = SampleDataset() def train_dataloader(self): return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True) def val_dataloader(self): return DataLoader(self.val_dataset, batch_size=self.batch_size) # Create an instance of your LightningModule model = SiameseBiEncoder() model.bert.gradient_checkpointing_enable() print(f"Gradient Checkpointing: {model.bert.is_gradient_checkpointing}") # Create the LightningDataModule instance data_module = SampleDataModule() # Create a Trainer instance trainer = pl.Trainer( max_epochs=3, devices=2, accelerator="gpu", strategy="ddp") trainer.fit(model, data_module)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23801/timeline
completed
null
null
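The maintainer comment on the record above points at DDP's `_set_static_graph()` workaround. One way to pass that through PyTorch Lightning is sketched below; it assumes `DDPStrategy` forwards extra keyword arguments to `torch.nn.parallel.DistributedDataParallel` and replaces the `strategy="ddp"` line of the reproduction script. Whether this is sufficient for the shared-encoder (siamese) setup is not confirmed in the thread.

```python
# Sketch only: ask DDP to treat the graph as static so re-entrant gradient
# checkpointing does not trigger the "mark a variable ready only once" error.
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

trainer = pl.Trainer(
    max_epochs=3,
    devices=2,
    accelerator="gpu",
    strategy=DDPStrategy(static_graph=True),  # assumed to be forwarded to DistributedDataParallel
)
# trainer.fit(model, data_module) as in the reproduction script above
```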
https://api.github.com/repos/huggingface/transformers/issues/23800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23800/comments
https://api.github.com/repos/huggingface/transformers/issues/23800/events
https://github.com/huggingface/transformers/pull/23800
1,728,093,266
PR_kwDOCUB6oc5RdzVc
23,800
Log the right train_batch_size if using auto_find_batch_size and also log the adjusted value separately.
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? This PR will log the `train_batch_size` that is reduced when using the auto-batch-finder utility, and will also separately log what the reduced batch size is on a debug level. Fixes # (issue) Solves #23762 Solves #21950 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23800", "html_url": "https://github.com/huggingface/transformers/pull/23800", "diff_url": "https://github.com/huggingface/transformers/pull/23800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23800.patch", "merged_at": 1685128146000 }
https://api.github.com/repos/huggingface/transformers/issues/23799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23799/comments
https://api.github.com/repos/huggingface/transformers/issues/23799/events
https://github.com/huggingface/transformers/pull/23799
1,728,076,649
PR_kwDOCUB6oc5Rdvvp
23,799
Enable code-specific revision for code on the Hub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? This PR adds a new `code_revision` argument to all auto classes `from_pretrained` (and the auto models `from_config`) to allow for a specific revision for code on the Hub. Since code can now live in a different repo than the weights, the `revision` argument can't be used directly for the code files and we need a new argument. This PR also makes `code_revision` default to `revision` when the repo contains both the code and the model weights. Fixes #23745
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23799", "html_url": "https://github.com/huggingface/transformers/pull/23799", "diff_url": "https://github.com/huggingface/transformers/pull/23799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23799.patch", "merged_at": 1685130676000 }
https://api.github.com/repos/huggingface/transformers/issues/23798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23798/comments
https://api.github.com/repos/huggingface/transformers/issues/23798/events
https://github.com/huggingface/transformers/issues/23798
1,728,041,003
I_kwDOCUB6oc5m_9Ar
23,798
TFBertTokenizer - support for "never_split"
{ "login": "benzitohhh", "id": 861758, "node_id": "MDQ6VXNlcjg2MTc1OA==", "avatar_url": "https://avatars.githubusercontent.com/u/861758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benzitohhh", "html_url": "https://github.com/benzitohhh", "followers_url": "https://api.github.com/users/benzitohhh/followers", "following_url": "https://api.github.com/users/benzitohhh/following{/other_user}", "gists_url": "https://api.github.com/users/benzitohhh/gists{/gist_id}", "starred_url": "https://api.github.com/users/benzitohhh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benzitohhh/subscriptions", "organizations_url": "https://api.github.com/users/benzitohhh/orgs", "repos_url": "https://api.github.com/users/benzitohhh/repos", "events_url": "https://api.github.com/users/benzitohhh/events{/privacy}", "received_events_url": "https://api.github.com/users/benzitohhh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "Hi @benzitohhh , and sorry for the delay! This is an interesting and useful idea, but we're depending on the underlying Tensorflow Text layers, specifically [BertTokenizer](https://www.tensorflow.org/text/api_docs/python/text/BertTokenizer) in this case.\r\n\r\nI don't think there is a 'never_split' option here, but we could use the `preserve_unused_token` argument. This would mean that tokens of the form `[unused0]`, `[unused1]`, etc. would never be split, so you could use those as a control token like `[abstract]`. Would this work for your use case? If it's useful to you it's probably useful to other people, and we can add it to the `TFBertTokenizer` layer in a PR.", "hi @Rocketknight1 Thanks for the response, and sorry so slow getting back to you also!\r\n\r\nJust to check I understand...\r\n\r\nIn our case, vocabulary (token_id to token mapping) looks as below, where 5-9 inclusive are \"special\" tokens:\r\nhttps://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt\r\n\r\n```\r\n0: [PAD]\r\n1: [UNK]\r\n2: [CLS]\r\n3: [SEP]\r\n4: [MASK]\r\n5: [abstract]\r\n6: [claim]\r\n7: [summary]\r\n8: [invention]\r\n9: [cpc]\r\n10: [unused0]\r\n11: [unused1]\r\n12: [unused2]\r\n13: [unused3]\r\n14: [unused4]\r\n15: [unused5]\r\netc...\r\n```\r\n\r\nSo with the `preserve_unused_token` approach, I guess we'd need to do something like:\r\n\r\n```python\r\ninput = ' [abstract] some text here. '\r\n#out = [2, 5, 1735, 3458, 1847, 3] #### Expected tokenized ids\r\n\r\n# 1. Replace each \"special\" token with a unique \"unused\" token\r\n# So we need to map:\r\n# '[abstract]' -> '[unused0]'\r\n# '[claims]' -> '[unused1]'\r\n# etc..\r\n# I guess we could use some regex for this.\r\ninput__unused = '[unused0] some text here'\r\n\r\n# 2. Do the tokenization\r\nbert_input__unused = tokenizer(tf.constant([input__unused]))\r\n# { 'input_ids': ... numpy=array([[ 2, 10, 1735, 3458, 1847, 3]])> etc... }\r\n# i.e. the \"10\" above is the is '[unused0]' token\r\n\r\n# 3. Replace \"unused\" token_ids with the correct special token_ids\r\n# Not sure exactly how to do this with tensor operations, but I guess it's possible?\r\n# So we need to map:\r\n# 10 ('[unused0]') -> 5 ('[abstract]')\r\n# 11 ('[unused1]') -> 6 ('[claims]')\r\n# etc..\r\nbert_input = ..\r\n# { 'input_ids': ... numpy=array([[ 2, 5, 1735, 3458, 1847, 3]])> etc... } \r\n```\r\n\r\nWill the above work?\r\n\r\nIf so, that would be amazing, and totally solve our situation.\r\n\r\nObviously, being able to add a \"never_split\" param would be much nicer :)\r\n\r\nAnyways let us know what is possible - thanks!", "Hi @benzitohhh, yes, that's correct! I'd also need to file a PR to expose the option in our tokenizer, but if you're interested then I can do that soon.\r\n\r\nFor the issue of mapping the `unused` token ids to the correct special token IDs, I suggest using a block of unused token IDs in the same order as your special token IDs. 
Then all you would need to do is:\r\n\r\n```python\r\n# `True` for `unused` tokens, `False` otherwise\r\ncondition = (input_ids >= unused_start_idx) & (input_ids <= unused_end_idx)\r\n# Subtract offset value from all unused tokens\r\ninput_ids = tf.where(condition, input_ids - offset, input_ids)\r\n```\r\nIn the vocab list you linked above, an offset of `5` would map `[unused0]` -> `[abstract]` and so on.", "For more complex replacements, you could also just reorder the `vocab_list` for the `TFBertTokenizer` so it generates the indices you want!", "@Rocketknight1 Ok this would totally work for us, and would allow us to create an end-to-end model - yay!\r\n\r\nIf you could create a PR that would be super appreciated.\r\n\r\nThanks again for all your help here, and the super clear explanations. Have a good weekend meanwhile.\r\n\r\n", "Hi @benzitohhh, the PR is now open at #24324. You can try out the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/transformers.git@allow_tf_tokenizer_kwargs\r\n```\r\n\r\nWhen creating the `TFBertTokenizer`, add the arguments `use_fast_bert_tokenizer=False` and `preserve_unused_token=True`. Also, note that only the slower TF tokenizer layer supports the `preserve_unused_token` argument, but only the fast layer can be exported to TFLite. This means that this solution won't work for you if you want to export to TFLite! ", "@Rocketknight1 Ah amazing thanks! Will try this out first thing on Monday and let you know asap", "@Rocketknight1 Ok just tested the PR, it works perfectly.\r\n\r\nThanks again for making this happen!", "Cool! Hopefully we can merge the PR soon in that case, so you can stop installing from the branch.", "@benzitohhh this has now been merged. You can now get it just by installing from `main` with\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nIt will be included with the next release of transformers in a few weeks, at which point you can go back to the usual `pip install transformers`", "@Rocketknight1 amazing - thanks again" ]
1,685
1,687
1,687
NONE
null
### Feature request Often vocabularies contain special tokens that should not be split. For example, in model "anferico/bert-for-patents", the vocabulary contains a token "[abstract]" (token_id is 5) https://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt The normal `BertTokenizer` supports a param "never_split" for this: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('anferico/bert-for-patents', never_split=['[abstract]']) tokenizer.tokenize('[abstract] some text here') # ['[abstract]', 'some', 'text', 'here'] ``` So above, even though '[abstract]' has parens, it is not split. But `TFBertTokenizer` does not have a "never_split" param, and so there is no way to prevent splits. For example: ```python from transformers import TFBertTokenizer tokenizer = TFBertTokenizer.from_pretrained('anferico/bert-for-patents') tokenizer(tf.constant(['[abstract] some text here'])) # {'input_ids': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[ 2, 1036, 9726, 1038, 1735, 3458, 1847, 3]])>, 'attention_mask': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[1, 1, 1, 1, 1, 1, 1, 1]])>, 'token_type_ids': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[0, 0, 0, 0, 0, 0, 0, 0]])>} ``` Above, notice that token_id 5 (['abstract']) is missing in the input_ids, and in fact '[abstract]' has been split into three separate tokens: * '[' - 1036 * 'abstract' - 9726 * ']' - 1038 ### Motivation We would like to use an end-to-end model, on TensorFlow Serving, with in-graph tokenization. But we need to be able to include special tokens in our input, such as '[abstract]', '[claims]' etc https://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt If TFBertTokenizer had a "never_split" param, this would be possible. But currently it is not, so we need to do Tokenization outside the server.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23798/timeline
completed
null
null
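Putting the suggestions from the thread above together, a hedged sketch of the `[unusedN]` remapping follows. It assumes a transformers version that includes PR #24324 (for `preserve_unused_token`) and that the special tokens sit exactly `offset` positions below the `[unusedN]` block, as in the vocab linked in the issue.

```python
# Sketch only: keep [unusedN] tokens unsplit, then shift their ids down onto the
# repo's special tokens ([abstract], [claim], ...) after tokenization.
import tensorflow as tf
from transformers import TFBertTokenizer

tokenizer = TFBertTokenizer.from_pretrained(
    "anferico/bert-for-patents",
    use_fast_bert_tokenizer=False,  # only the slower TF layer supports the flag below
    preserve_unused_token=True,     # keep [unused0], [unused1], ... as single tokens
)

unused_start_idx, unused_end_idx, offset = 10, 14, 5  # assumed from the linked vocab layout

def remap(input_ids):
    condition = (input_ids >= unused_start_idx) & (input_ids <= unused_end_idx)
    return tf.where(condition, input_ids - offset, input_ids)

batch = tokenizer(tf.constant(["[unused0] some text here"]))
batch["input_ids"] = remap(batch["input_ids"])  # [unused0] -> [abstract], etc.
```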
https://api.github.com/repos/huggingface/transformers/issues/23797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23797/comments
https://api.github.com/repos/huggingface/transformers/issues/23797/events
https://github.com/huggingface/transformers/pull/23797
1,728,032,202
PR_kwDOCUB6oc5RdmBd
23,797
Fix last instances of kbit -> quantized
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Just encountered a few kbit remaining. In particular the `_is_loaded_in_kbit` really needs to be changed, the others are just for consistency.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23797/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23797", "html_url": "https://github.com/huggingface/transformers/pull/23797", "diff_url": "https://github.com/huggingface/transformers/pull/23797.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23797.patch", "merged_at": 1685525901000 }
https://api.github.com/repos/huggingface/transformers/issues/23841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23841/comments
https://api.github.com/repos/huggingface/transformers/issues/23841/events
https://github.com/huggingface/transformers/issues/23841
1,730,826,870
I_kwDOCUB6oc5nKlJ2
23,841
Causal language modeling documentation is wrong?
{ "login": "JoaoLages", "id": 17574157, "node_id": "MDQ6VXNlcjE3NTc0MTU3", "avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoaoLages", "html_url": "https://github.com/JoaoLages", "followers_url": "https://api.github.com/users/JoaoLages/followers", "following_url": "https://api.github.com/users/JoaoLages/following{/other_user}", "gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions", "organizations_url": "https://api.github.com/users/JoaoLages/orgs", "repos_url": "https://api.github.com/users/JoaoLages/repos", "events_url": "https://api.github.com/users/JoaoLages/events{/privacy}", "received_events_url": "https://api.github.com/users/JoaoLages/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot! Transfering this to transformers", "It's just an example that we keep as simple as possible. You can customize it to your needs for your own trainings.", "An example that is **wrong**, let's not try to argue that it isn't 😅 . In that example, you are interested in training to generate sentences, but you are actually training the model to never stop generating...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
CONTRIBUTOR
null
I just noticed that [on this page](https://huggingface.co/docs/transformers/tasks/language_modeling) we do not add any end-of-sequence token (EOS) to the end of the texts. This means we are training a model that does not shut up! The EOS token should be added!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23841/timeline
completed
null
null
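A minimal sketch of the change the issue above asks for in the doc example, assuming a GPT-2-style tokenizer whose `eos_token` is defined; the checkpoint name is only illustrative.

```python
# Sketch only: append the EOS token to each text before tokenization so the model
# learns where a sequence ends instead of generating indefinitely.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

def preprocess_function(examples):
    texts = [text + tokenizer.eos_token for text in examples["text"]]
    return tokenizer(texts)
```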
https://api.github.com/repos/huggingface/transformers/issues/23796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23796/comments
https://api.github.com/repos/huggingface/transformers/issues/23796/events
https://github.com/huggingface/transformers/pull/23796
1,727,728,043
PR_kwDOCUB6oc5Rcj_r
23,796
fix: Replace `add_prefix_space` in `get_prompt_ids` with manual space for FastTokenizer compatibility
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sanchit-gandhi for sure just pushed something up" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? Fixes #23764 As discussed in the issue the 'FastTokenizer' for Whisper and other models does not accept `add_prefix_space` as an argument to tokenize, so to make `get_prompt_ids` compatible across both slow and fast tokenizers this was replaced with `" " + text` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @hollance @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23796/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23796", "html_url": "https://github.com/huggingface/transformers/pull/23796", "diff_url": "https://github.com/huggingface/transformers/pull/23796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23796.patch", "merged_at": 1685544755000 }
https://api.github.com/repos/huggingface/transformers/issues/23795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23795/comments
https://api.github.com/repos/huggingface/transformers/issues/23795/events
https://github.com/huggingface/transformers/pull/23795
1,727,705,344
PR_kwDOCUB6oc5RcfG_
23,795
no_cuda does not take effect in non distributed environment
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
Fixes # (issue) no_cuda does not take effect in the non-distributed case; the GPU is still selected. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23795/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23795", "html_url": "https://github.com/huggingface/transformers/pull/23795", "diff_url": "https://github.com/huggingface/transformers/pull/23795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23795.patch", "merged_at": 1685112472000 }
https://api.github.com/repos/huggingface/transformers/issues/23794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23794/comments
https://api.github.com/repos/huggingface/transformers/issues/23794/events
https://github.com/huggingface/transformers/issues/23794
1,727,667,425
I_kwDOCUB6oc5m-hzh
23,794
Transformer trainer training crashed with GLM models
{ "login": "renjie-liu", "id": 36247193, "node_id": "MDQ6VXNlcjM2MjQ3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/36247193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renjie-liu", "html_url": "https://github.com/renjie-liu", "followers_url": "https://api.github.com/users/renjie-liu/followers", "following_url": "https://api.github.com/users/renjie-liu/following{/other_user}", "gists_url": "https://api.github.com/users/renjie-liu/gists{/gist_id}", "starred_url": "https://api.github.com/users/renjie-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renjie-liu/subscriptions", "organizations_url": "https://api.github.com/users/renjie-liu/orgs", "repos_url": "https://api.github.com/users/renjie-liu/repos", "events_url": "https://api.github.com/users/renjie-liu/events{/privacy}", "received_events_url": "https://api.github.com/users/renjie-liu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem lies within your dataset, nothing to do with the `Trainer` :-)", "Hi @sgugger, I'm new to `Transformer.Trainer`, wonder what's the issue here? how should I setup the dataset? Thanks!\r\n\r\nI thought the tokenizer should tokenize the text and return a dict with `input_ids` in it. then `transformers.DataCollatorForLanguageModeling` should map `input_ids` to `labels` correctly?\r\n\r\nWonder if there is any example about using custom dataset?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info ### Environment ``` - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: A100 - Using distributed or parallel set-up in script?: Single GPU - ``` I suspect it's because the model has no device info attached to it, so when transformer trainer tries to fetch per_device_batch * device but somehow the device is 0 and caused the issue. See detailed stack trace below: ### Stack trace: ``` /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1664 in train │ │ │ │ 1661 │ │ inner_training_loop = find_executable_batch_size( │ │ 1662 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1663 │ │ ) │ │ ❱ 1664 │ │ return inner_training_loop( │ │ 1665 │ │ │ args=args, │ │ 1666 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1667 │ │ │ trial=trial, │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1909 in _inner_training_loop │ │ │ │ 1906 │ │ │ │ rng_to_sync = True │ │ 1907 │ │ │ │ │ 1908 │ │ │ step = -1 │ │ ❱ 1909 │ │ │ for step, inputs in enumerate(epoch_iterator): │ │ 1910 │ │ │ │ total_batched_samples += 1 │ │ 1911 │ │ │ │ if rng_to_sync: │ │ 1912 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │ │ │ │ 630 │ │ │ if self._sampler_iter is None: │ │ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │ │ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │ │ ❱ 633 │ │ │ data = self._next_data() │ │ 634 │ │ │ self._num_yielded += 1 │ │ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │ │ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │ │ │ │ 674 │ │ │ 675 │ def _next_data(self): │ │ 676 │ │ index = self._next_index() # may raise StopIteration │ │ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │ │ 678 │ │ if self._pin_memory: │ │ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │ │ 680 │ │ return data │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │ │ │ │ 46 │ def fetch(self, possibly_batched_index): │ │ 47 │ │ if self.auto_collation: │ │ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │ │ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │ │ 50 │ │ │ else: │ │ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │ │ 52 │ │ else: │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │ │ │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ ❱ 2782 │ │ batch = self.__getitem__(keys) │ │ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │ │ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │ │ 2785 │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │ │ │ │ 2775 │ │ │ 2776 │ def __getitem__(self, key): # noqa: F811 │ │ 2777 │ │ """Can be used to index columns 
(by string names) or rows (by integer index or i │ │ ❱ 2778 │ │ return self._getitem(key) │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │ │ │ │ 2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │ │ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │ │ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │ │ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │ │ 2763 │ │ formatted_output = format_table( │ │ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │ │ 2765 │ │ ) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │ │ │ │ 575 │ │ _check_valid_column_key(key, table.column_names) │ │ 576 │ else: │ │ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │ │ ❱ 578 │ │ _check_valid_index_key(key, size) │ │ 579 │ # Query the main table │ │ 580 │ if indices is None: │ │ 581 │ │ pa_subtable = _query_table(table, key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │ │ _check_valid_index_key │ │ │ │ 528 │ │ │ _check_valid_index_key(min(key), size=size) │ │ 529 │ elif isinstance(key, Iterable): │ │ 530 │ │ if len(key) > 0: │ │ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │ │ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │ │ 533 │ else: │ │ 534 │ │ _raise_bad_key_type(key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │ │ _check_valid_index_key │ │ │ │ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │ │ 519 │ if isinstance(key, int): │ │ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │ │ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │ │ 522 │ │ return │ │ 523 │ elif isinstance(key, slice): │ │ 524 │ │ pass │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ IndexError: Invalid key: 4 is out of bounds for size 0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ### Code to reproduce: ``` !pip install -q bitsandbytes datasets accelerate loralib !pip install sentencepiece !pip install -q transformers peft import torch import torch.nn as nn import bitsandbytes as bnb import datasets import accelerate import loralib import sentencepiece as spm import transformers from peft import LoraConfig, get_peft_model from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True) model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True) model = model.half().cuda() ds = ["hello world", "what the heck", "you are not alone", "not toyda", "more or less"] ds = {"text": ds} ds = datasets.Dataset.from_dict(ds) ds = ds.map(lambda x: tokenizer(x["text"], padding=True), batched=True) config = LoraConfig( r=16, lora_alpha=32, target_modules=["query_key_value"], lora_dropout=0.05, bias="none", task_type="CASUAL_LM" ) model = get_peft_model(model, config) trainer = transformers.Trainer( model=model, train_dataset=ds, args=transformers.TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=1, warmup_steps=0, num_train_epochs=2, learning_rate=3e-4, fp16=True, logging_steps=1, output_dir='outputs', save_total_limit=2, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() ``` ### Expected behavior Expect not to crash
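One frequent cause of this "size 0" error with wrapped or remote-code models is the Trainer dropping dataset columns it cannot match to the model's forward signature. A hedged workaround sketch; the argument values follow the snippet above, and `remove_unused_columns` is the only addition:

```python
from transformers import TrainingArguments

# When the wrapped/remote model's forward() signature cannot be inspected, the Trainer
# may drop every dataset column and then fail with "Invalid key: ... for size 0".
# Keeping the raw columns is a common workaround.
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=1,
    remove_unused_columns=False,
)
```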
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23794/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23793/comments
https://api.github.com/repos/huggingface/transformers/issues/23793/events
https://github.com/huggingface/transformers/issues/23793
1,727,653,058
I_kwDOCUB6oc5m-eTC
23,793
Inference API takes forever and output: "Model ... is currently loading"
{ "login": "KennStack01", "id": 67477516, "node_id": "MDQ6VXNlcjY3NDc3NTE2", "avatar_url": "https://avatars.githubusercontent.com/u/67477516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennStack01", "html_url": "https://github.com/KennStack01", "followers_url": "https://api.github.com/users/KennStack01/followers", "following_url": "https://api.github.com/users/KennStack01/following{/other_user}", "gists_url": "https://api.github.com/users/KennStack01/gists{/gist_id}", "starred_url": "https://api.github.com/users/KennStack01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennStack01/subscriptions", "organizations_url": "https://api.github.com/users/KennStack01/orgs", "repos_url": "https://api.github.com/users/KennStack01/repos", "events_url": "https://api.github.com/users/KennStack01/events{/privacy}", "received_events_url": "https://api.github.com/users/KennStack01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "Any update? :( ", "Still unsolved. [The same problem in the Discord channel.](https://discord.com/channels/879548962464493619/1112875103311630366)", "Hi, things should be back to normal, can you confirm ?\r\n\r\nThis model and some others were still running very old code, and some internal changes have made it crash, unfortunately silently for us, since everything was still 200 status codes somehow.\r\n\r\nWe merely updated everything.\r\nThanks @oOraph for the fix.", "I will check now, also a question: will there be additional parameters added to the voice recognition, something besides file (e.g. wait_for_model or output_language)\r\n[About additional parameters](https://discord.com/channels/879548962464493619/1115750221226446889)", "@Narsil\r\n\r\n![image](https://github.com/huggingface/transformers/assets/100136305/08decc4c-a1fc-4654-a6ed-cd9f732f8d87)\r\n\r\n", "openai/whisper-large-v2 worked right away, but still in English, although the source was in Russian.\r\n![image](https://github.com/huggingface/transformers/assets/100136305/c393e9fa-2616-4d0d-bc77-c741eb798a31)\r\n", "Still getting this error:\r\n\r\n`{\r\n \"error\": \"Model *** is currently loading\",\r\n \"estimated_time\": 20\r\n}`\r\n\r\nAny help?", "Hi @KennStack01 :), just made the test, if you haven't requested your model for a while it 's offline so you get this message (flagged as an error so that it cannot be confused with an answer to your prompt but it's not really an error in the sense your model won't work). I just tested your model and it got online in 20-22 seconds the two times I tried. So you get this message for sth like 20 s (sometimes more for bigger models but yours should be really fast to load given its size). Once it's online your prompt gets correctly answered :) (unless I misunderstood what you're saying and you're saying sometimes it just does not load at all, which I did not observe but would still be possible). And once online, it will stay online for a while, especially if you're requesting it regularly. But at some point it will get offline again and be preempted by others. More or less quickly, depending on several factors, essentially the whole cluster's load, the last time it was requested and the resources it consumes, sometimes only a few minutes later: this could explain your test @Zapzatron. Because from what I understand, in your test, you requested openai/whisper-large once and took the estimated time in response to know how long to sleep (which makes total sense). But from the tests I made, the estimated_time provided in answer was bad and it actually got online in less than 247s (in sth like 70s). Since you did not request it, it could actually have been already brought offline by the time you ended your sleep and requested it again. I would suggest that you actually poll the api a bit more frequently, sth like every 20 seconds after the first message, and would not pay too much attention to the estimated_time to know how long to sleep :)", "[I tried every 20 seconds, but that's a lot of requests](https://discord.com/channels/879548962464493619/1112875103311630366)\r\n![image](https://github.com/huggingface/transformers/assets/100136305/72cc7f6d-5ad3-4a16-b89c-8be5c19794a7)\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### System Info I'm currently working on my final project (Fine-tuning the "Helsinki-NLP/opus-mt-en-zh" and "Helsinki-NLP/opus-mt-zh-en" models for Translation English-Chinese), have already trained the model, and deployed it to the Hub. The issue is that I'm not able to consume the API properly: it's slow, it seems not to work properly, and it takes forever to load. I'm trying to use the Inference API connected to my Nextjs (Frontend) app, but I'm getting this error message: ``` { "error": "Model ... is currently loading", "estimated_time": 20 } ``` Sometimes, It works, and then It stops working... Any help, please? Please, give me all the possible suggestions, would love to explore. Thanks ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` async function translateEnglishtoChinese(text: string) { const response = await fetch( "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-en-zh", { headers: { Authorization: `Bearer ${process.env.NEXT_PUBLIC_HF_TOKEN}`, }, method: "POST", body: JSON.stringify({ inputs: text, }), } ); const result = await response.json(); console.log("English to Chinese", result[0]?.translation_text); return result; } async function translateChinesetoEnglish(text: string) { const response = await fetch( "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-zh-en", { headers: { Authorization: `Bearer ${process.env.NEXT_PUBLIC_HF_TOKEN}`, }, method: "POST", body: JSON.stringify({ inputs: text, }), } ); const result = await response.json(); console.log("Chinese to English", result[0]?.translation_text); return result; } ``` ### Expected behavior Expecting to see a valid translation_text. Simple as that. Thanks :)
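A Python sketch of the polling approach discussed in the thread; the endpoint URL matches the model above, while the token, retry count, and delay are placeholders:

```python
import time
import requests

API_URL = "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-en-zh"
HEADERS = {"Authorization": "Bearer <HF_TOKEN>"}  # placeholder token

def translate(text: str, retries: int = 10, delay: int = 20):
    for _ in range(retries):
        response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
        payload = response.json()
        # A cold model answers with {"error": "... is currently loading", "estimated_time": ...}
        # until it is loaded, so poll again after a short wait instead of treating it as fatal.
        if isinstance(payload, dict) and "error" in payload:
            time.sleep(delay)
            continue
        return payload[0]["translation_text"]
    raise RuntimeError("Model did not come online in time")
```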
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23793/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23792/comments
https://api.github.com/repos/huggingface/transformers/issues/23792/events
https://github.com/huggingface/transformers/pull/23792
1,727,640,278
PR_kwDOCUB6oc5RcQ1i
23,792
Fix Trainer when model is loaded on a different GPU
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? When a small model is loaded with `device_map="auto"` it might end up entirely on GPU 1, so currently `is_model_parallel` is set to `False` (because there is only one device) and later on the Trainer moves the model to GPU 0, which breaks the execution of all the Accelerate hooks. This PR fixes this by making sure `is_model_parallel` is set to `True` when there is one device but it's not GPU 0.
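A rough sketch of the condition described, assuming the device placement is exposed via a `hf_device_map` attribute; this is illustrative, not the exact Trainer code:

```python
def needs_model_parallel_flag(model) -> bool:
    # If the whole model sits on a single device that is not GPU 0, the Trainer
    # must not move it back to cuda:0, otherwise the Accelerate hooks break.
    device_map = getattr(model, "hf_device_map", None)
    if not device_map:
        return False
    devices = set(device_map.values())
    if len(devices) > 1:
        return True
    return next(iter(devices)) not in (0, "cuda", "cuda:0")
```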
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23792/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23792", "html_url": "https://github.com/huggingface/transformers/pull/23792", "diff_url": "https://github.com/huggingface/transformers/pull/23792.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23792.patch", "merged_at": 1685534066000 }
https://api.github.com/repos/huggingface/transformers/issues/23791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23791/comments
https://api.github.com/repos/huggingface/transformers/issues/23791/events
https://github.com/huggingface/transformers/issues/23791
1,727,229,407
I_kwDOCUB6oc5m823f
23,791
[Bloom] Inconsistent results when testing the pretrained bloom model under different dtypes (float16, float32)
{ "login": "Lemon-412", "id": 57213526, "node_id": "MDQ6VXNlcjU3MjEzNTI2", "avatar_url": "https://avatars.githubusercontent.com/u/57213526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lemon-412", "html_url": "https://github.com/Lemon-412", "followers_url": "https://api.github.com/users/Lemon-412/followers", "following_url": "https://api.github.com/users/Lemon-412/following{/other_user}", "gists_url": "https://api.github.com/users/Lemon-412/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lemon-412/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lemon-412/subscriptions", "organizations_url": "https://api.github.com/users/Lemon-412/orgs", "repos_url": "https://api.github.com/users/Lemon-412/repos", "events_url": "https://api.github.com/users/Lemon-412/events{/privacy}", "received_events_url": "https://api.github.com/users/Lemon-412/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Lemon-412 have you try bloom_16 = BloomForCausalLMHF.from_pretrained(model_name).half().cuda()?\r\n\r\nI think \"model.cuda().to(dtype=torch.float16)\" is strange😿", "Hey! Thanks for opening an issue. \r\nThe best way to test closeness between to tensors is to use `torch.testing.all_close(tensor_a, tensor_b, atol, rtol)` which I would suggest you to use. \r\nThis will give: \r\n```python \r\nMismatched elements: 453121 / 524288 (86.4%)\r\nGreatest absolute difference: 200.00748252868652 at index (0, 508, 505) (up to 0.001 allowed)\r\nGreatest relative difference: 20611.88115471691 at index (0, 457, 98) (up to 0.001 allowed)\r\n```\r\nWhich indeed seems to show that there are instabilities. Pinging @thomasw21 in case he has already seen this. \r\n", "Wow this seems big! So I might see a few reasons why:\r\n - We convert bf16 original weights to fp16 weights, which might come at a huge cost\r\n - There are some systems we haven't ported from the original codebase, since we thought they were for backward stability: https://github.com/huggingface/transformers/blob/af45ec0a1611062929ddbf6f15935e01e6cbf1af/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py#L133 (this might affect the runs)\r\n \r\n I tried running with `gpt_bigcode-santader` and `opt-350m`, and it indeed seems that `bloom` has a particular issue. I think since `santacoder` and `bloomz` are trained in roughly similar codebases, I think we should run `diffs` in their modeling code to see.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info Hi, We suffer inconsistent results when running model 'bloom' under different dtypes(float16, float32) Is that a bug? Environment: - `transformers` version: 4.29.1 - Platform: Linux-3.10.107-1-tlinux2_kvm_guest-0049-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.0a0+08820cb (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @younesbelkada @thomasw21 @sywangyi @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here's the testing code. We load the data from https://huggingface.co/bigscience/bloomz-560m ```python3 import torch from transformers import BloomConfig from transformers.models.bloom.modeling_bloom import BloomForCausalLM as BloomForCausalLMHF def test(model_name): config = BloomConfig.from_pretrained(model_name) torch.manual_seed(0) batch_size = 1 max_seqlen = 512 _ = torch.randint(max_seqlen // 2, max_seqlen + 1, (batch_size,), device='cuda') # keep this input_ids = torch.randint(0, config.vocab_size, (batch_size, max_seqlen), dtype=torch.long, device='cuda') bloom_32 = BloomForCausalLMHF.from_pretrained(model_name).cuda().to(dtype=torch.float32) bloom_32.eval() out_32 = bloom_32.transformer(input_ids).last_hidden_state out_32 = out_32.cpu().detach() del bloom_32 bloom_16 = BloomForCausalLMHF.from_pretrained(model_name).cuda().to(dtype=torch.float16) bloom_16.eval() out_16 = bloom_16.transformer(input_ids).last_hidden_state out_16 = out_16.cpu().detach() print(f'max diff: {(out_16 - out_32).abs().max().item()}') print(f'mean diff: {(out_16 - out_32).abs().mean().item()}') test("/path/to/bloomz-560m") ``` ### Expected behavior we run the code on our machine, and the result can be (given the random seed): ``` max diff: 196.88119506835938 mean diff: 0.2683866322040558 ```
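For quantifying such gaps, a small helper along these lines (tolerances are illustrative) reports mismatch counts and the largest absolute/relative differences:

```python
import torch

def compare_outputs(out_16: torch.Tensor, out_32: torch.Tensor, atol: float = 1e-3, rtol: float = 1e-3):
    # torch.testing.assert_close reports the number of mismatched elements and the largest
    # absolute/relative differences, which is more informative than a bare max/mean diff.
    torch.testing.assert_close(out_16.float(), out_32.float(), atol=atol, rtol=rtol)
```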
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23791/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23790/comments
https://api.github.com/repos/huggingface/transformers/issues/23790/events
https://github.com/huggingface/transformers/issues/23790
1,727,193,359
I_kwDOCUB6oc5m8uEP
23,790
Model.generate stops code execution without any error
{ "login": "hitriyvalenok", "id": 49437871, "node_id": "MDQ6VXNlcjQ5NDM3ODcx", "avatar_url": "https://avatars.githubusercontent.com/u/49437871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hitriyvalenok", "html_url": "https://github.com/hitriyvalenok", "followers_url": "https://api.github.com/users/hitriyvalenok/followers", "following_url": "https://api.github.com/users/hitriyvalenok/following{/other_user}", "gists_url": "https://api.github.com/users/hitriyvalenok/gists{/gist_id}", "starred_url": "https://api.github.com/users/hitriyvalenok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hitriyvalenok/subscriptions", "organizations_url": "https://api.github.com/users/hitriyvalenok/orgs", "repos_url": "https://api.github.com/users/hitriyvalenok/repos", "events_url": "https://api.github.com/users/hitriyvalenok/events{/privacy}", "received_events_url": "https://api.github.com/users/hitriyvalenok/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @hitriyvalenok 👋 \r\n\r\nYour snippet seems to be working fine on my end -- have a look at [this notebook](https://colab.research.google.com/drive/1lyXcScMOPhTP1bM0waLakIOBpYq7IjVO?usp=sharing)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### System Info I am pretty new to HF, this is my first attempt to use a model. The problem is `model.generate` kinda abrupt the script execution without any error. Here’s my code: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base') text = "write for cycle" input_ids = tokenizer(text, return_tensors="pt").input_ids print("before") generated_ids = model.generate(input_ids, max_length=8) print("after") print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` This code gives me no output except “before”. Also, I’ve tried other models with the same result. It looks the issue on my side… I’ll be very grateful for your help. Thanks! Env: ```text created virtual environment CPython3.9.16.final.0-64 in 444ms creator CPython3Posix(dest=/Users/zonder/Documents/PyCharmProjects/huggingface/venv, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/zonder/Library/Application Support/virtualenv) added seed packages: Jinja2==3.1.2, MarkupSafe==2.1.2, PyYAML==6.0, certifi==2023.5.7, charset_normalizer==3.1.0, distlib==0.3.6, filelock==3.12.0, fsspec==2023.5.0, huggingface_hub==0.14.1, idna==3.4, mpmath==1.3.0, networkx==3.1, numpy==1.24.3, packaging==23.1, pip==23.1.2, platformdirs==3.5.1, regex==2023.5.5, requests==2.31.0, safetensors==0.3.1, setuptools==67.7.2, sympy==1.12, tokenizers==0.13.3, torch==2.0.1, tqdm==4.65.0, transformers==4.29.2, typing_extensions==4.6.2, urllib3==2.0.2, virtualenv==20.23.0, wheel==0.40.0 activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator ``` Logs: ```text loading file vocab.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/vocab.json loading file merges.txt from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/merges.txt loading file added_tokens.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/added_tokens.json loading file special_tokens_map.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/special_tokens_map.json loading file tokenizer_config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/tokenizer_config.json loading configuration file config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/config.json Model config T5Config { "_name_or_path": "/content/drive/MyDrive/CodeT5/pretrained_models/codet5_base", "architectures": [ "T5ForConditionalGeneration" ], "bos_token_id": 1, "d_ff": 3072, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dense_act_fn": "relu", "dropout_rate": 0.1, "eos_token_id": 2, "feed_forward_proj": "relu", "gradient_checkpointing": false, "id2label": { "0": "LABEL_0" }, "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": false, "label2id": { "LABEL_0": 0 }, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_decoder_layers": 12, 
"num_heads": 12, "num_layers": 12, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } }, "torch_dtype": "float32", "transformers_version": "4.30.0.dev0", "use_cache": true, "vocab_size": 32100 } loading weights file pytorch_model.bin from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/pytorch_model.bin Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 1, "decoder_start_token_id": 0, "eos_token_id": 2, "pad_token_id": 0, "transformers_version": "4.30.0.dev0" } All model checkpoint weights were used when initializing T5ForConditionalGeneration. All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at Salesforce/codet5-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training. Generation config file not found, using a generation config created from the model config. before Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 1, "decoder_start_token_id": 0, "eos_token_id": 2, "pad_token_id": 0, "transformers_version": "4.30.0.dev0" } ``` ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the script above return no output except “before”. ### Expected behavior At least print "after"
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23789/comments
https://api.github.com/repos/huggingface/transformers/issues/23789/events
https://github.com/huggingface/transformers/pull/23789
1,727,188,112
PR_kwDOCUB6oc5RatuE
23,789
[OPT] Doc nit, using fast is fine
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Clarifies in the docs that `use_fast=False` is not needed when loading OPT's tokenizer; the fast tokenizer works fine. Fixes #23768
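A minimal check of the behavior the doc nit relies on; `facebook/opt-350m` is just one example checkpoint:

```python
from transformers import AutoTokenizer

# The fast tokenizer works for OPT checkpoints, so the doc example no longer needs use_fast=False.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # use_fast defaults to True
print(type(tokenizer).__name__)  # expected: GPT2TokenizerFast
```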
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23789", "html_url": "https://github.com/huggingface/transformers/pull/23789", "diff_url": "https://github.com/huggingface/transformers/pull/23789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23789.patch", "merged_at": 1685104233000 }
https://api.github.com/repos/huggingface/transformers/issues/23788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23788/comments
https://api.github.com/repos/huggingface/transformers/issues/23788/events
https://github.com/huggingface/transformers/issues/23788
1,727,138,480
I_kwDOCUB6oc5m8gqw
23,788
[Agents] text_reader does not work
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not able to reproduce locally or on Colab. Are you using the official [demo Colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)? It's probably some missing dependency so would be interesting to know more about the env where this is failing.", "I was using Colab, but it seems the issue was not full-restarting after installing `sentencepiece` (which was importable but I should have restarted). Clean colab, installing first, works well", "Closing this as it's a user error not a agents error" ]
1,685
1,685
1,685
MEMBER
null
### System Info Google Colab, Python 3.10.11, transformers 4.29.2 ### Who can help? @sgugger @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import load_tool tool = load_tool("text-to-speech") audio = tool("This is a text to speech tool") ``` or more end-to-end ``` from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="TOKEN") agent.chat("can you make an audio recording of someone saying 'hi'?") ``` Full error stacktrace ``` ==Explanation from the agent== I will use the tool `text_reader` to read the text "Hi" out loud. ==Code generated by the agent== audio_recording = text_reader(text="Hi") ==Result== ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <cell line: 1>:1 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:278 in chat │ │ │ │ 275 │ │ │ │ print("\n\n==Result==") │ │ 276 │ │ │ │ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cac │ │ 277 │ │ │ │ self.chat_state.update(kwargs) │ │ ❱ 278 │ │ │ │ return evaluate(code, self.cached_tools, self.chat_state, chat_mode=True │ │ 279 │ │ │ else: │ │ 280 │ │ │ │ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) │ │ 281 │ │ │ │ return f"{tool_code}\n{code}" │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate │ │ │ │ 58 │ result = None │ │ 59 │ for idx, node in enumerate(expression.body): │ │ 60 │ │ try: │ │ ❱ 61 │ │ │ line_result = evaluate_ast(node, state, tools) │ │ 62 │ │ except InterpretorError as e: │ │ 63 │ │ │ msg = f"Evaluation of the code stopped at line {idx} before the end because │ │ 64 │ │ │ if chat_mode: │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in │ │ evaluate_ast │ │ │ │ 95 │ if isinstance(expression, ast.Assign): │ │ 96 │ │ # Assignement -> we evaluate the assignement which should update the state │ │ 97 │ │ # We return the variable assigned as it may be used to determine the final resul │ │ ❱ 98 │ │ return evaluate_assign(expression, state, tools) │ │ 99 │ elif isinstance(expression, ast.Call): │ │ 100 │ │ # Function call -> we return the value of the function call │ │ 101 │ │ return evaluate_call(expression, state, tools) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in │ │ evaluate_assign │ │ │ │ 136 │ │ 137 def evaluate_assign(assign, state, tools): │ │ 138 │ var_names = assign.targets │ │ ❱ 139 │ result = evaluate_ast(assign.value, state, tools) │ │ 140 │ │ │ 141 │ if len(var_names) == 1: │ │ 142 │ │ state[var_names[0].id] = result │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in │ │ evaluate_ast │ │ │ │ 98 │ │ return evaluate_assign(expression, state, tools) │ │ 99 │ elif isinstance(expression, ast.Call): │ │ 100 │ │ # Function call -> we return the value of the function call │ │ ❱ 101 │ │ return evaluate_call(expression, state, tools) │ │ 102 │ elif isinstance(expression, ast.Constant): │ │ 103 │ │ # Constant -> just return the value │ │ 104 │ │ return expression.value │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in │ │ evaluate_call │ │ │ │ 164 │ # Todo deal with args │ │ 165 │ args = 
[evaluate_ast(arg, state, tools) for arg in call.args] │ │ 166 │ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call │ │ ❱ 167 │ return func(*args, **kwargs) │ │ 168 │ │ 169 │ │ 170 def evaluate_subscript(subscript, state, tools): │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:532 in __call__ │ │ │ │ 529 │ │ │ 530 │ def __call__(self, *args, **kwargs): │ │ 531 │ │ if not self.is_initialized: │ │ ❱ 532 │ │ │ self.setup() │ │ 533 │ │ │ │ 534 │ │ encoded_inputs = self.encode(*args, **kwargs) │ │ 535 │ │ encoded_inputs = send_to_device(encoded_inputs, self.device) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/text_to_speech.py:45 in setup │ │ │ │ 42 │ def setup(self): │ │ 43 │ │ if self.post_processor is None: │ │ 44 │ │ │ self.post_processor = "microsoft/speecht5_hifigan" │ │ ❱ 45 │ │ super().setup() │ │ 46 │ │ │ 47 │ def encode(self, text, speaker_embeddings=None): │ │ 48 │ │ inputs = self.pre_processor(text=text, return_tensors="pt", truncation=True) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:492 in setup │ │ │ │ 489 │ │ Instantiates the `pre_processor`, `model` and `post_processor` if necessary. │ │ 490 │ │ """ │ │ 491 │ │ if isinstance(self.pre_processor, str): │ │ ❱ 492 │ │ │ self.pre_processor = self.pre_processor_class.from_pretrained(self.pre_proce │ │ 493 │ │ │ │ 494 │ │ if isinstance(self.model, str): │ │ 495 │ │ │ self.model = self.model_class.from_pretrained(self.model, **self.model_kwarg │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:184 in from_pretrained │ │ │ │ 181 │ │ │ │ [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] and │ │ 182 │ │ │ │ [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`]. │ │ 183 │ │ """ │ │ ❱ 184 │ │ args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwarg │ │ 185 │ │ return cls(*args) │ │ 186 │ │ │ 187 │ @classmethod │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:228 in │ │ _get_arguments_from_pretrained │ │ │ │ 225 │ │ │ else: │ │ 226 │ │ │ │ attribute_class = getattr(transformers_module, class_name) │ │ 227 │ │ │ │ │ ❱ 228 │ │ │ args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, * │ │ 229 │ │ return args │ │ 230 │ │ │ 231 │ @property │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: 'NoneType' object is not callable ``` ### Expected behavior It should generate an audio file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23788/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23787/comments
https://api.github.com/repos/huggingface/transformers/issues/23787/events
https://github.com/huggingface/transformers/pull/23787
1,727,099,986
PR_kwDOCUB6oc5RabCX
23,787
Update trainer.mdx class_weights example
{ "login": "amitportnoy", "id": 113588658, "node_id": "U_kgDOBsU5sg", "avatar_url": "https://avatars.githubusercontent.com/u/113588658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amitportnoy", "html_url": "https://github.com/amitportnoy", "followers_url": "https://api.github.com/users/amitportnoy/followers", "following_url": "https://api.github.com/users/amitportnoy/following{/other_user}", "gists_url": "https://api.github.com/users/amitportnoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/amitportnoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitportnoy/subscriptions", "organizations_url": "https://api.github.com/users/amitportnoy/orgs", "repos_url": "https://api.github.com/users/amitportnoy/repos", "events_url": "https://api.github.com/users/amitportnoy/events{/privacy}", "received_events_url": "https://api.github.com/users/amitportnoy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
class_weights tensor should follow model's device # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
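A sketch of the pattern this example documents, with the weights tensor created on the model's device; the weight values and the trainer subclass name are illustrative:

```python
import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # Build the class weights on the model's device so the loss computation
        # works on GPU as well as CPU.
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```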
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23787", "html_url": "https://github.com/huggingface/transformers/pull/23787", "diff_url": "https://github.com/huggingface/transformers/pull/23787.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23787.patch", "merged_at": 1685104593000 }
https://api.github.com/repos/huggingface/transformers/issues/23786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23786/comments
https://api.github.com/repos/huggingface/transformers/issues/23786/events
https://github.com/huggingface/transformers/issues/23786
1,727,045,856
I_kwDOCUB6oc5m8KDg
23,786
Convert Pre-LN Transformers into equivalent Pre-RMSNorm Transformers to accelerate inference and training
{ "login": "ZixuanJiang", "id": 34562278, "node_id": "MDQ6VXNlcjM0NTYyMjc4", "avatar_url": "https://avatars.githubusercontent.com/u/34562278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZixuanJiang", "html_url": "https://github.com/ZixuanJiang", "followers_url": "https://api.github.com/users/ZixuanJiang/followers", "following_url": "https://api.github.com/users/ZixuanJiang/following{/other_user}", "gists_url": "https://api.github.com/users/ZixuanJiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZixuanJiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZixuanJiang/subscriptions", "organizations_url": "https://api.github.com/users/ZixuanJiang/orgs", "repos_url": "https://api.github.com/users/ZixuanJiang/repos", "events_url": "https://api.github.com/users/ZixuanJiang/events{/privacy}", "received_events_url": "https://api.github.com/users/ZixuanJiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### Feature request LayerNorm and RMSNorm are the top two normalization methods in Transformers. We unify them in Pre-Normalization Transformers in our paper https://arxiv.org/abs/2305.14858. The arithmetic equivalence allows us to convert Pre-LN Transformers into Pre-RMSNorm models without impact on the model functionality. Since RMSNorm offers superior efficiency compared to LayerNorm, our method enables faster equivalent inference and training for any Pre-LN Transformers, e.g., GPT, ViT. Our implementation is at https://github.com/ZixuanJiang/pre-rmsnorm-transformer. As the first step, we can start by accelerating the deployment of the existing Pre-LN Transformers. ### Motivation Related GitHub issue: https://github.com/pytorch/pytorch/issues/72643#issue ### Your contribution We have provided our reference implementation at https://github.com/ZixuanJiang/pre-rmsnorm-transformer. We are open to submitting a related PR in the future.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23786/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23786/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23785/comments
https://api.github.com/repos/huggingface/transformers/issues/23785/events
https://github.com/huggingface/transformers/issues/23785
1,726,989,560
I_kwDOCUB6oc5m78T4
23,785
deepcopy added in pipeline breaks anything that worked before with RLock etc., such as streaming generation
{ "login": "pseudotensor", "id": 2249614, "node_id": "MDQ6VXNlcjIyNDk2MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pseudotensor", "html_url": "https://github.com/pseudotensor", "followers_url": "https://api.github.com/users/pseudotensor/followers", "following_url": "https://api.github.com/users/pseudotensor/following{/other_user}", "gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}", "starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions", "organizations_url": "https://api.github.com/users/pseudotensor/orgs", "repos_url": "https://api.github.com/users/pseudotensor/repos", "events_url": "https://api.github.com/users/pseudotensor/events{/privacy}", "received_events_url": "https://api.github.com/users/pseudotensor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@pseudotensor you closed this without a comment. Is this issue still relevant ?", "Github trying to be too smart with separate issue in another repo", "@pseudotensor 👋 \r\n\r\nTIL \"The dictionary itself is passed in as **generate_kwargs and any mutation to the dictionary itself has no effect on the parent dictionary or items passed in.\"\r\n\r\nIn that case you're right, no copy is needed at all!", "@pseudotensor should be fixed now (closing the issue, but feel free to reopen if you find further related issues)" ]
1,685
1,686
1,686
NONE
null
### System Info transformers==4.29.2 python 3.10 ### Who can help? @gante @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Pass streamer instance of TextIteratorStreamer(), which includes thread, to TextGenerationPipeline as generator kwargs. 2. See pickle error due to new copy.deepcopy() added. Due to this change by @gante: https://github.com/huggingface/transformers/commit/b369e507aaa78103baf5d3f3563952b44a0408a1 This is a fully blocking change for me. I cannot upgrade to new transformers since this change because this breaks streaming scenario. ### Expected behavior I don't think copy.deepcopy() is appropriate. The dictionary itself is passed in as **generate_kwargs and any mutation to the dictionary itself has no effect on the parent dictionary or items passed in. Additionally, the modifications made to the dictionary in that changed code only involve entries *within* the dictionary, not to mutable items inside the dictionary, so none of those changes would have any effect to any other block of code. A simple shallow copy is sufficient. But additionally, I can't see any reason for any copy at all. Changes to the dictionary locally have no effect anywhere else.
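A minimal sketch of the streaming scenario described, with `gpt2` as a small stand-in model; it is the forwarding of the streamer through the generate kwargs that the added deepcopy breaks:

```python
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, pipeline

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# The streamer (which owns the queue consumed below) is passed as a generate kwarg;
# copy.deepcopy() of those kwargs is what fails with a pickle error.
thread = Thread(target=pipe, args=("Hello",), kwargs={"streamer": streamer, "max_new_tokens": 20})
thread.start()
for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```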
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23785/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23784/comments
https://api.github.com/repos/huggingface/transformers/issues/23784/events
https://github.com/huggingface/transformers/issues/23784
1,726,945,344
I_kwDOCUB6oc5m7xhA
23,784
CodeT5pEncoderDecoderModel does not support `device_map='auto'` yet.
{ "login": "karths8", "id": 47289950, "node_id": "MDQ6VXNlcjQ3Mjg5OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karths8", "html_url": "https://github.com/karths8", "followers_url": "https://api.github.com/users/karths8/followers", "following_url": "https://api.github.com/users/karths8/following{/other_user}", "gists_url": "https://api.github.com/users/karths8/gists{/gist_id}", "starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karths8/subscriptions", "organizations_url": "https://api.github.com/users/karths8/orgs", "repos_url": "https://api.github.com/users/karths8/repos", "events_url": "https://api.github.com/users/karths8/events{/privacy}", "received_events_url": "https://api.github.com/users/karths8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Based on the traceback, I would suggest the authors to update their code to add a `_no_split_module` class variable, which would fix the error. Now we don't necessarily have a say on other's repo, I would suggest you open an Issue on the hub (or event better a PR) to support this. ", "@ArthurZucker it seems like the issue is resolved in https://huggingface.co/Salesforce/instructcodet5p-16b/discussions/1. Thanks for the help!" ]
1,685
1,685
1,685
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Loaded the InstructCodeT5+ model as per https://huggingface.co/Salesforce/instructcodet5p-16b 2. Tried to use LoRA to fine-tune InstructCodeT5+ on NL to code translation task. Code and Traceback given below: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch token_id="Salesforce/instructcodet5p-16b" tokenizer = AutoTokenizer.from_pretrained(token_id) model = AutoModelForSeq2SeqLM.from_pretrained(token_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, trust_remote_code=True, decoder_start_token_id=1, pad_token_id=-100, load_in_8bit=True, device_map='auto').to(device) ``` Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-1436d2a64ffb> in <module> 10 11 # load model from the hub ---> 12 model = AutoModelForSeq2SeqLM.from_pretrained(model_id, 13 torch_dtype=torch.float16, 14 low_cpu_mem_usage=True, ~/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 460 class_ref, pretrained_model_name_or_path, **hub_kwargs, **kwargs 461 ) --> 462 return model_class.from_pretrained( 463 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 464 ) ~/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py in from_pretrained(cls, *args, **kwargs) 855 ) 856 kwargs["_fast_init"] = False --> 857 return super().from_pretrained(*args, **kwargs) 858 859 def forward( ~/.local/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2683 2684 if model._no_split_modules is None: -> 2685 raise ValueError(f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.") 2686 no_split_modules = model._no_split_modules 2687 if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: ValueError: CodeT5pEncoderDecoderModel does not support `device_map='auto'` yet. ``` Seems Like this functionality is not included yet. When to expect it to be added? Thanks in advance! ### Expected behavior Expect the model to get loaded without any error
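For reference, a hypothetical sketch of the fix later suggested for the remote modeling code, namely adding a `_no_split_modules` class attribute; the block name below is an assumption, not the real Salesforce implementation:

```python
from transformers import PreTrainedModel


class CodeT5pEncoderDecoderModel(PreTrainedModel):
    # Assumption: "CodeT5pBlock" stands in for whatever the real transformer block
    # class is called in modeling_codet5p.py. device_map="auto" needs this list so
    # accelerate knows which modules must never be split across devices.
    _no_split_modules = ["CodeT5pBlock"]
```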
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23784/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23783/comments
https://api.github.com/repos/huggingface/transformers/issues/23783/events
https://github.com/huggingface/transformers/pull/23783
1,726,934,404
PR_kwDOCUB6oc5RZ3fd
23,783
Fix no such file or directory error
{ "login": "RissyRan", "id": 20385466, "node_id": "MDQ6VXNlcjIwMzg1NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/20385466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RissyRan", "html_url": "https://github.com/RissyRan", "followers_url": "https://api.github.com/users/RissyRan/followers", "following_url": "https://api.github.com/users/RissyRan/following{/other_user}", "gists_url": "https://api.github.com/users/RissyRan/gists{/gist_id}", "starred_url": "https://api.github.com/users/RissyRan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RissyRan/subscriptions", "organizations_url": "https://api.github.com/users/RissyRan/orgs", "repos_url": "https://api.github.com/users/RissyRan/repos", "events_url": "https://api.github.com/users/RissyRan/events{/privacy}", "received_events_url": "https://api.github.com/users/RissyRan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like the CI is complaining about code style now! Can you `pip install transformers[quality]` and then `make style` in the `transformers `directory, then commit/push? That will run our code formatters and hopefully resolve the issue.", "Thank you! I think the issues is resolved.", "Yep, looks good. Thank you for the PR and the quick iteration!" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? Add the logic to check if the output directory exists before opening the file. It fixes `no such file or directory` error when run [ViT model](https://github.com/huggingface/transformers/tree/8a817e1ecac6a420b1bdc701fcc33535a3b96ff5/examples/tensorflow/image-classification). ``` Traceback (most recent call last): File "/home/ranran/transformers/examples/tensorflow/image-classification/run_image_classification.py", line 564, in <module> main() File "/home/ranran/transformers/examples/tensorflow/image-classification/run_image_classification.py", line 546, in main with open(os.path.join(training_args.output_dir, "all_results.json"), "w") as f: FileNotFoundError: [Errno 2] No such file or directory: './beans_outputs/all_results.json' ``` # How to reproduce: ``` pip install --upgrade pip git clone https://github.com/huggingface/transformers.git cd transformers && pip install . pip install -r examples/tensorflow/_tests_requirements.txt pip install -r examples/tensorflow/image-classification/requirements.txt cd examples/tensorflow/image-classification python3 run_image_classification.py \ --dataset_name beans \ --output_dir ./beans_outputs/ \ --remove_unused_columns False \ --do_train \ --do_eval \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --save_total_limit 3 ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts, @Rocketknight1 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. 
Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
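A minimal sketch of the fix, assuming a placeholder output directory and results dict in place of the script's `training_args.output_dir` and collected metrics:

```python
import json
import os

output_dir = "./beans_outputs/"        # stands in for training_args.output_dir
all_results = {"eval_accuracy": 0.0}   # placeholder for the metrics the script collects

# Create the output directory first so opening all_results.json cannot fail
# with "No such file or directory".
os.makedirs(output_dir, exist_ok=True)
with open(os.path.join(output_dir, "all_results.json"), "w") as f:
    json.dump(all_results, f, indent=4)
```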
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23783/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23783", "html_url": "https://github.com/huggingface/transformers/pull/23783", "diff_url": "https://github.com/huggingface/transformers/pull/23783.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23783.patch", "merged_at": 1685125498000 }
https://api.github.com/repos/huggingface/transformers/issues/23782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23782/comments
https://api.github.com/repos/huggingface/transformers/issues/23782/events
https://github.com/huggingface/transformers/pull/23782
1,726,871,348
PR_kwDOCUB6oc5RZp-q
23,782
[WIP] Add internimage
{ "login": "millionhz", "id": 52637755, "node_id": "MDQ6VXNlcjUyNjM3NzU1", "avatar_url": "https://avatars.githubusercontent.com/u/52637755?v=4", "gravatar_id": "", "url": "https://api.github.com/users/millionhz", "html_url": "https://github.com/millionhz", "followers_url": "https://api.github.com/users/millionhz/followers", "following_url": "https://api.github.com/users/millionhz/following{/other_user}", "gists_url": "https://api.github.com/users/millionhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/millionhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/millionhz/subscriptions", "organizations_url": "https://api.github.com/users/millionhz/orgs", "repos_url": "https://api.github.com/users/millionhz/repos", "events_url": "https://api.github.com/users/millionhz/events{/privacy}", "received_events_url": "https://api.github.com/users/millionhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All I did was copy the code from [here](https://huggingface.co/OpenGVLab/internimage_s_1k_224) as mentioned in #22240.\r\n\r\nI couldn't figure out the test cases so I commented those out for now. \r\nI manually imported the model and initialized an instance to see if it works and it did. \r\n\r\nI will fix the failing documentation testcases. Might need some help with the technical details of the model.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23782). All of your documentation changes will be reflected on that endpoint.", "Hi @millionhz, thanks for opening this PR and for the work adding this model! \r\n\r\nThe linked to repo already has the [model code on the hub](https://huggingface.co/OpenGVLab/internimage_s_1k_224/blob/main/intern_image.py), and the model can be loaded directly with:\r\n\r\n```python\r\nfrom transformers import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"OpenGVLab/internimage_s_1k_224\", trust_remote_code=True)\r\n``` \r\n\r\nso a PR to add into the transformers repo isn't necessary. ", "@amyeroberts \r\n\r\nOh. I opened the PR because of #22240.\r\n\r\nI close it if its not needed.", "@millionhz - no worries, it's not obvious from the issue. I'll comment on the issue and we can close both that and this PR. \r\n\r\nIt's great that you're wanting to contribute and there's still plenty of ways you can in the library - addressing models or good first issues. We're looking forward to any future PRs :) Just remember to read the contributor's guideline and template carefully, e.g. for this PR more that 3 people were tagged. \r\n\r\n" ]
1,685
1,685
1,685
NONE
null
# What does this PR do? The PR adds internimage to transformers. Addresses #22240 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR and help with the left out work. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @adit299 @Weiyun1025 @amyeroberts @sgugger @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23782/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23782", "html_url": "https://github.com/huggingface/transformers/pull/23782", "diff_url": "https://github.com/huggingface/transformers/pull/23782.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23782.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25285/comments
https://api.github.com/repos/huggingface/transformers/issues/25285/events
https://github.com/huggingface/transformers/issues/25285
1,834,972,221
I_kwDOCUB6oc5tX3Q9
25,285
Sentence start got unexpected space
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "2 things that could be problematic here\r\n1. a token that has a prefixspace (metasymbol for unigram or or accent G for BBPE etc)\r\n2. somewhere in the tokenizer chain there is a module within the tokenizer that is adding a prefix\r\n(add_prefix_space = True)\r\n\r\nI checked and token 12968 does not have prefix space so it is not 1.", "possibly related\r\nhuggingface/tokenizers#1250 \r\nhuggingface/tokenizers#1174 \r\nhuggingface/tokenizers#990 ", "@chris-ha458 Thanks for taking look.\r\n\r\nI depart this problem and digged a little bit, this is what I found: \r\n\r\n*Encode one sentence in 2 part (such as question + answer) without any space in them, and then concat the ids, compare with ecnoder the whole sentence at once, THEY ARE NOT SAME*.\r\n\r\nI don't know if this is expected, but this is out of my expecetations.\r\n\r\nFor detail, please run this script on any llama tokeizer:\r\n\r\n```python\r\nfrom transformers import LlamaTokenizer\r\n\r\n# any LLama tokenizer\r\ntokenizer = LlamaTokenizer.from_pretrained(\"checkpoints/BiLLa-7B-LLM/tokenizer.model\")\r\n\r\ndef test1():\r\n prefix = \"Human:\\n用 python 写一段快排\\n\\nAssistant:\"\r\n output = \"OK, I will do for u!\"\r\n sentence_ids = tokenizer.encode(prefix, add_special_tokens=False)\r\n # b = tokenizer.decode(sentence_ids)\r\n print(sentence_ids)\r\n d = tokenizer.encode(output, add_special_tokens=False)\r\n print(d)\r\n input_ids = sentence_ids + d\r\n # input_ids += [tokenizer.eos_token_id]\r\n o = tokenizer.decode(input_ids)\r\n print(input_ids)\r\n print(o)\r\n\r\n\r\ndef test2():\r\n print('---------------- test2')\r\n prefix = \"Human:\\n用 python 写一段快排\\n\\nAssistant:\"\r\n output = \"OK, I will do for u!\"\r\n\r\n sentence = prefix + output\r\n sentence_ids = tokenizer.encode(sentence, add_special_tokens=False)\r\n b = tokenizer.decode(sentence_ids)\r\n print(sentence_ids)\r\n print(b)\r\n\r\n c = tokenizer.decode([12968])\r\n print(c)\r\n c = tokenizer.decode([9280])\r\n print(c)\r\n c = tokenizer.decode([8949])\r\n print(c)\r\n\r\n\r\nif __name__ == '__main__':\r\n test1()\r\n test2()\r\n```\r\n\r\nHere is interesting thing: \r\n\r\nthe 2 way to encode **same sentence** got different ids:\r\n\r\n```\r\n[12968, 29901, 13, 30406, 3017, 29871, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 9280, 29892, 306, 674, 437, 363, 318, 29991]\r\n[12968, 29901, 13, 30406, 3017, 29871, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 8949, 29892, 306, 674, 437, 363, 318, 29991]\r\n```\r\n\r\nAnd I decode the different ids that might caused space, they actually same character.......\r\n\r\nSo I am totally missed here....", "Hey! This issue has nothing to do with `tokenizers` since it uses the `slow` tokenizer. I believe that this will be fixed by #25224 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,694
1,694
NONE
null
![image](https://github.com/huggingface/tokenizers/assets/21303438/9a5464e5-89c0-47a3-be97-a8b8a8abecc2) I have some encoded input_ids; after appending some tgt_ids to the input_ids, the newly decoded sentence contains unexpected spaces. Here is the code: ```python from transformers import LlamaTokenizer # any LLama tokenizer tokenizer = LlamaTokenizer.from_pretrained("checkpoints/BiLLa-7B-LLM/tokenizer.model") prefix = "Human: \n用python写一段快排\n\nAssistant: \n" output = "OK, I will do for u!" sentence_ids = tokenizer.encode(prefix, add_special_tokens=False) b = tokenizer.decode(sentence_ids) print(sentence_ids) print(b) input_ids = sentence_ids + tokenizer.encode(output, add_special_tokens=False) input_ids += [tokenizer.eos_token_id] o = tokenizer.decode(input_ids) print(input_ids) print() print(o) ``` My output: ``` [12968, 29901, 29871, 13, 30406, 4691, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 29871, 13] Human: 用python写一段快排 Assistant: [12968, 29901, 29871, 13, 30406, 4691, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 29871, 13, 9280, 29892, 306, 674, 437, 363, 318, 29991, 2] Human: 用python写一段快排 Assistant: OK, I will do for u!</s> ``` As you can see, there is a space both before Human and before OK, which is not expected. Why?
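One way to see where the space comes from is to inspect the raw pieces instead of the decoded string; the tokenizer path below is a placeholder and the exact pieces depend on the vocabulary:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/tokenizer.model")  # placeholder path

prefix_ids = tokenizer.encode("Human: \n", add_special_tokens=False)
output_ids = tokenizer.encode("OK, I will do for u!", add_special_tokens=False)

# Word-initial pieces carry the SentencePiece metaspace marker "▁", which the slow
# tokenizer also adds when a chunk is encoded on its own; decode() then renders it
# as a space in the middle of the concatenated sequence.
print(tokenizer.convert_ids_to_tokens(prefix_ids))   # likely starts with "▁Human"
print(tokenizer.convert_ids_to_tokens(output_ids))   # likely starts with "▁OK"
```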
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23781/comments
https://api.github.com/repos/huggingface/transformers/issues/23781/events
https://github.com/huggingface/transformers/issues/23781
1,726,852,436
I_kwDOCUB6oc5m7a1U
23,781
BART-fusion
{ "login": "jnj2102", "id": 6268658, "node_id": "MDQ6VXNlcjYyNjg2NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6268658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jnj2102", "html_url": "https://github.com/jnj2102", "followers_url": "https://api.github.com/users/jnj2102/followers", "following_url": "https://api.github.com/users/jnj2102/following{/other_user}", "gists_url": "https://api.github.com/users/jnj2102/gists{/gist_id}", "starred_url": "https://api.github.com/users/jnj2102/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnj2102/subscriptions", "organizations_url": "https://api.github.com/users/jnj2102/orgs", "repos_url": "https://api.github.com/users/jnj2102/repos", "events_url": "https://api.github.com/users/jnj2102/events{/privacy}", "received_events_url": "https://api.github.com/users/jnj2102/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@sgugger what do you think of this request? If you think it's a good addition to the repo, I can take this task on.", "cc @sanchit-gandhi and @hollance ", "Hey @jnj2102! Thanks for the feature request - while I think it's a cool model, I'm not sure it's best suited in the `transformers` library directly since the original repository has quite low usage (20 stars) and the paper as well (4 citations). If you're really keen on using this model, you could explore adding it to the Hub, e.g. as done with the [MERT](https://huggingface.co/m-a-p/MERT-v1-95M) model. WDYT?", "Hi! No problem. How do you add a model to the Hub? I’ll check out the MERT\nmodel too.\n\nOn Fri, Jun 2, 2023 at 11:29 AM Sanchit Gandhi ***@***.***>\nwrote:\n\n> Hey @jnj2102 <https://github.com/jnj2102>! Thanks for the feature request\n> - while I think it's a cool model, I'm not sure it's best suited in the\n> transformers library directly since the original repository has quite low\n> usage (20 stars) and the paper as well (4 citations). If you're really keen\n> on using this model, you could explore adding it to the Hub, e.g. as done\n> with the MERT <https://huggingface.co/m-a-p/MERT-v1-95M> model. WDYT?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/23781#issuecomment-1573929450>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP2N4VD6EPP5HNQTGMZ6ALXJIBFDANCNFSM6AAAAAAYPVMXB4>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n-- \nBest wishes,\n\nJami\n", "Hey Jami! Awesome - there's info on using custom code on the Hub here: https://huggingface.co/docs/transformers/v4.27.1/en/custom_models#using-a-model-with-custom-code. Let me know if you have any questions, more than happy to help here!" ]
1,685
1,685
null
NONE
null
### Model description BART-fusion is a novel model for generating lyric interpretations from lyrics and music audio that combines a large-scale pre-trained language model with an audio encoder. It uses a cross-modal attention module to incorporate the audio representation into the lyrics representation to help the pre-trained language model understand the song from an audio perspective, while preserving the language model’s original generative performance. Please see the paper here: https://arxiv.org/abs/2208.11671 ### Open source status - [X] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Here is the code repository for the paper: https://github.com/ldzhangyx/BART-fusion/tree/main. The weights should be available in the checkpoints: https://drive.google.com/drive/folders/18EUUx-KT9xGJ1uq2UoOgj0X9BpngNn_T
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23781/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/23780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23780/comments
https://api.github.com/repos/huggingface/transformers/issues/23780/events
https://github.com/huggingface/transformers/issues/23780
1,726,809,737
I_kwDOCUB6oc5m7QaJ
23,780
trainer evaluation gets stuck when using dynamic padding in distributed evaluation
{ "login": "CaoYiwei", "id": 26463693, "node_id": "MDQ6VXNlcjI2NDYzNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/26463693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaoYiwei", "html_url": "https://github.com/CaoYiwei", "followers_url": "https://api.github.com/users/CaoYiwei/followers", "following_url": "https://api.github.com/users/CaoYiwei/following{/other_user}", "gists_url": "https://api.github.com/users/CaoYiwei/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaoYiwei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaoYiwei/subscriptions", "organizations_url": "https://api.github.com/users/CaoYiwei/orgs", "repos_url": "https://api.github.com/users/CaoYiwei/repos", "events_url": "https://api.github.com/users/CaoYiwei/events{/privacy}", "received_events_url": "https://api.github.com/users/CaoYiwei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Without a code reproducer, there is nothing we can do. The Trainer will pad samples to the same length before gathering them, so this is already accounted for.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,688
1,688
NONE
null
### System Info transformers version=4.28.1 deepspeed=0.9.2 As far as I know, the trainer's evaluate function is distributed. When I use longest padding in every eval batch, the program gets stuck. This doesn't happen when I use max_length padding. I guess the processes get stuck because of the gather operation between tensors of different lengths coming from different processes. Please fix this bug. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. use longest padding 2. trainer evaluate on multiple GPUs with deepspeed ### Expected behavior the program does not get stuck
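A minimal sketch of the max_length-padding workaround mentioned above, so every rank gathers tensors of the same shape; the checkpoint and max length are placeholders for whatever the training script uses:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Pad every batch to the same fixed length instead of "longest", so the
# distributed gather during evaluation sees identically shaped tensors.
data_collator = DataCollatorWithPadding(tokenizer, padding="max_length", max_length=512)
```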
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23780/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23779/comments
https://api.github.com/repos/huggingface/transformers/issues/23779/events
https://github.com/huggingface/transformers/issues/23779
1,726,799,406
I_kwDOCUB6oc5m7N4u
23,779
Consider adding some logic here, responding with text like "I don't know" when the model output has probabilities lower than a threshold, which means it is not that confident.
{ "login": "antibits", "id": 6982140, "node_id": "MDQ6VXNlcjY5ODIxNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6982140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antibits", "html_url": "https://github.com/antibits", "followers_url": "https://api.github.com/users/antibits/followers", "following_url": "https://api.github.com/users/antibits/following{/other_user}", "gists_url": "https://api.github.com/users/antibits/gists{/gist_id}", "starred_url": "https://api.github.com/users/antibits/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antibits/subscriptions", "organizations_url": "https://api.github.com/users/antibits/orgs", "repos_url": "https://api.github.com/users/antibits/repos", "events_url": "https://api.github.com/users/antibits/events{/privacy}", "received_events_url": "https://api.github.com/users/antibits/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "stopping_criteria may works" ]
1,685
1,685
1,685
NONE
null
https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/generation/utils.py#LL2650C69-L2650C69 Consider adding some logic here, responding with text like "I don't know" when the model output has probabilities lower than a threshold, which means it is not that confident.
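A rough sketch of the idea as post-processing on top of generate(), rather than a change inside utils.py; the checkpoint, prompt and threshold are arbitrary placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The capital of Atlantis is", return_tensors="pt")

outputs = model.generate(
    **inputs, max_new_tokens=20, return_dict_in_generate=True, output_scores=True
)
# Per-token log-probabilities of the generated continuation.
scores = model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True)
mean_prob = torch.exp(scores[0]).mean().item()

answer = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
if mean_prob < 0.5:  # arbitrary example threshold
    answer = "I don't know"
print(mean_prob, answer)
```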
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23779/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23778/comments
https://api.github.com/repos/huggingface/transformers/issues/23778/events
https://github.com/huggingface/transformers/issues/23778
1,726,783,071
I_kwDOCUB6oc5m7J5f
23,778
Training ByT5 for next response generation
{ "login": "salokr", "id": 19395011, "node_id": "MDQ6VXNlcjE5Mzk1MDEx", "avatar_url": "https://avatars.githubusercontent.com/u/19395011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salokr", "html_url": "https://github.com/salokr", "followers_url": "https://api.github.com/users/salokr/followers", "following_url": "https://api.github.com/users/salokr/following{/other_user}", "gists_url": "https://api.github.com/users/salokr/gists{/gist_id}", "starred_url": "https://api.github.com/users/salokr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salokr/subscriptions", "organizations_url": "https://api.github.com/users/salokr/orgs", "repos_url": "https://api.github.com/users/salokr/repos", "events_url": "https://api.github.com/users/salokr/events{/privacy}", "received_events_url": "https://api.github.com/users/salokr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting, however urgent this is, please refrain from pinging as many people as that. \r\nAll the questions related to `how to train` or `improve my training` should be asked on the [forum](https://discuss.huggingface.co/), as they are not bugs and the community is more adept to help you there. " ]
1,685
1,685
1,685
NONE
null
Hi, I am trying to train a ByT5 model for text2text generation specifically, given previous chat history the objective is to produce a response for the input. I understand that I can use decoder-only models for the task, but we need to use the byte-level information which we will be using in the future. For training purposes, I have obtained a dataset for fine-tuning and used the following configuration: ``` --model_name_or_path google/byt5-base \ --do_train \ --do_eval \ --do_predict \ --output_dir ./t5-base_50k_tast10 \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=16 \ --predict_with_generate \ --eval_steps 1 \ --greater_is_better True \ --load_best_model_at_end True\ --logging_steps 4 \ --metric_for_best_model bleu_2 \ --num_train_epochs 100 \ --save_steps 1 \ --save_total_limit 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --max_source_length 1000 \ --max_target_length 200 \ --learning_rate 5e-5 \ ``` My code to fine-tune looks like the following: ``` config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=True, truncation_side='left') model = AutoModelForSeq2SeqLM.from_pretrained( model_args.model_name_or_path, config=config, cache_dir=model_args.cache_dir, ) embedding_size = model.get_input_embeddings().weight.shape[0] if(len(tokenizer)>embedding_size): model.resize_token_embeddings(len(tokenizer)) if model.config.decoder_start_token_id is None: raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") max_target_length = data_args.max_target_length padding = "max_length" if data_args.pad_to_max_length else False def preprocess(text): ... # some preprocessing code def preprocess_function(examples): ... #call preprocess above and tokenize model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding='longest', truncation=True, return_tensors="pt") labels = tokenizer(text_target = targets, max_length=max_target_length, padding='longest', truncation=True, return_tensors="pt") ... if(training_args.do_train): train_dataset = train_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on train dataset",remove_columns=column_names, load_from_cache_file=False) if(training_args.do_eval): eval_dataset = val_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on validation dataset", remove_columns=column_names, load_from_cache_file=False) if(training_args.do_predict): test_dataset = test_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on prediction dataset",remove_columns=column_names, load_from_cache_file=False) label_pad_token_id = -100 if data_args.ignore_pad_token_for_loss else tokenizer.pad_token_id data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8 if training_args.fp16 else None) metric = evaluate.load("bleu") def postprocess_text(preds, labels): ...#post process stuff return preds, labels def compute_metrics(eval_preds): ... 
#get bleu and other metrics return result training_args.generation_max_length = training_args.generation_max_length if training_args.generation_max_length is not None else data_args.val_max_target_length training_args.generation_num_beams = data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset = train_dataset if training_args.do_train else None, eval_dataset = eval_dataset if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, callbacks = [EarlyStoppingCallback(early_stopping_patience=5)] ) if training_args.do_train: checkpoint = None if(training_args.resume_from_checkpoint is not None): checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() metrics = train_result.metrics trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() ``` However, the problem with the above code is after a lot of fine-tuning the model generates text which is repeated again and again and sometimes copies from the input or generates responses that are not relevant or related to the input. I have tried contrastive search, beam search, etc. also but the response generated by the model is still gibberish. Any suggestions on how to improve ByT5's capability to do the task? As I understand, T5-based models (or ByT5) perform well on many seq2seq tasks such as Text2SQL, etc. so they should at least generate relevant responses to the input for this task too. Please let me know, any suggestions you have. @ArthurZucker @younesbelkada I am also attaching some sample responses generated by the model. <img width="1204" alt="Screenshot 2023-05-25 at 10 24 34 PM" src="https://github.com/huggingface/transformers/assets/19395011/f67ade1b-99cc-4adc-95f6-7eecc1077bd0">
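On the repetition itself, a small sketch of generation settings that often dampen verbatim loops; the checkpoint and input are placeholders and the values would still need tuning for a byte-level model:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")

inputs = tokenizer("Hi, how are you doing today?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    num_beams=4,
    no_repeat_ngram_size=8,   # ByT5 operates on bytes, so use a larger n-gram than usual
    repetition_penalty=1.2,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```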
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23778/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23777/comments
https://api.github.com/repos/huggingface/transformers/issues/23777/events
https://github.com/huggingface/transformers/issues/23777
1,726,776,777
I_kwDOCUB6oc5m7IXJ
23,777
transformers-cli serve doesn't support multi-workers
{ "login": "CaoYiwei", "id": 26463693, "node_id": "MDQ6VXNlcjI2NDYzNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/26463693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaoYiwei", "html_url": "https://github.com/CaoYiwei", "followers_url": "https://api.github.com/users/CaoYiwei/followers", "following_url": "https://api.github.com/users/CaoYiwei/following{/other_user}", "gists_url": "https://api.github.com/users/CaoYiwei/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaoYiwei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaoYiwei/subscriptions", "organizations_url": "https://api.github.com/users/CaoYiwei/orgs", "repos_url": "https://api.github.com/users/CaoYiwei/repos", "events_url": "https://api.github.com/users/CaoYiwei/events{/privacy}", "received_events_url": "https://api.github.com/users/CaoYiwei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is not an API we still maintain FYI." ]
1,685
1,685
1,685
NONE
null
### System Info transformers version=4.28.1 error: You must pass the application as an import string to enable "reload" or "workers" transformers-cli serve uses FastAPI with uvicorn, and the application adds routes in a class rather than with decorators. Because of this, uvicorn cannot import the application when run with multiple workers. I don't have a solution yet. Does anyone have ideas to fix this? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction transformers-cli serve --workers 2 error: You must pass the application as an import string to enable "reload" or "workers" ### Expected behavior the CLI command supports multiple workers
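A small sketch of the uvicorn constraint behind the error; "serving:app" is a hypothetical module:attribute import string, not an existing transformers module:

```python
import uvicorn

# Multiple workers require an import string so each worker process can re-import
# the application; passing the in-process app object only works with one worker.
uvicorn.run("serving:app", host="localhost", port=8888, workers=2)
# uvicorn.run(app, workers=2)  # -> "You must pass the application as an import string ..."
```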
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23777/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23776/comments
https://api.github.com/repos/huggingface/transformers/issues/23776/events
https://github.com/huggingface/transformers/issues/23776
1,726,757,403
I_kwDOCUB6oc5m7Dob
23,776
Saving Models Broke
{ "login": "johnml1135", "id": 13733556, "node_id": "MDQ6VXNlcjEzNzMzNTU2", "avatar_url": "https://avatars.githubusercontent.com/u/13733556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnml1135", "html_url": "https://github.com/johnml1135", "followers_url": "https://api.github.com/users/johnml1135/followers", "following_url": "https://api.github.com/users/johnml1135/following{/other_user}", "gists_url": "https://api.github.com/users/johnml1135/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnml1135/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnml1135/subscriptions", "organizations_url": "https://api.github.com/users/johnml1135/orgs", "repos_url": "https://api.github.com/users/johnml1135/repos", "events_url": "https://api.github.com/users/johnml1135/events{/privacy}", "received_events_url": "https://api.github.com/users/johnml1135/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I tested out a work around - specifically adding empty id2label's etc. and it worked.\r\n```\r\nAutoConfig.from_pretrained(model_name, label2id={}, id2label={}, num_labels=0)\r\n```\r\nThis probably should have a longer term fix - possibly both in not auto-creating meaningless labels and making the save/restore not cause saving conflicts with int/str dict keys.", "What is a reproducer for the first issue?", "Hey @johnml1135, what version of clearml were you using?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I believe the most recent version of ClearML - though the source of the error can be seen in the code refrenced.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,692
1,692
NONE
null
### System Info I get this error when building a hugginface NLLB model on a ClearML docker image as per this bug in my repo (https://github.com/sillsdev/machine.py/issues/14): ``` 50% 500/1000 [08:59<09:03, 1.09s/it][INFO|trainer.py:2904] 2023-05-24 13:04:36,643 >> Saving model checkpoint to /root/machine/builds/646e3f95cf5823db7b5edd92/model/checkpoint-500 Traceback (most recent call last): File "/root/.clearml/venvs-builds/3.8/code/untitled.py", line 11, in <module> run(args) File "/usr/local/lib/python3.8/dist-packages/machine/jobs/build_nmt_engine.py", line 56, in run job.run(check_canceled) File "/usr/local/lib/python3.8/dist-packages/machine/jobs/nmt_engine_build_job.py", line 54, in run model_trainer.train(check_canceled=check_canceled) File "/usr/local/lib/python3.8/dist-packages/machine/translation/huggingface/hugging_face_nmt_model_trainer.py", line 263, in train train_result = self._trainer.train(resume_from_checkpoint=ckpt) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2019, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2308, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2365, in _save_checkpoint self.save_model(output_dir, _internal_call=True) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2866, in save_model self._save(output_dir) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2922, in _save self.model.save_pretrained( File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1734, in save_pretrained model_to_save.config.save_pretrained(save_directory) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 457, in save_pretrained self.to_json_file(output_config_file, use_diff=True) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 850, in to_json_file writer.write(self.to_json_string(use_diff=use_diff)) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 836, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/usr/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.8/json/encoder.py", line 353, in _iterencode_dict items = sorted(dct.items()) TypeError: '<' not supported between instances of 'str' and 'int' ``` Here is some analysis -> The normal dict has, among other things: ``` "id2label": { "0": "LABEL_0", "1": "LABEL_1" } ``` But after being trained (and possible ClearML doing something), it becomes: ``` "id2label": { 0: "LABEL_0", 1: "LABEL_1", "0": "LABEL_0", "1": "LABEL_1" } ``` Which causes the sorting to break (and the above error). 
**Ideas:** * There were no labels passed to it, but labels are auto-created based on this code: * If there is no id2label mapping, num_labels is set to 2: https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/configuration_utils.py#L319-L331 * If num_labels is 2, then the above labels are created: https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/configuration_utils.py#L418-L421 * Likely this code got called twice - and in between, the ints got converted to strings due to the label handling. What is a good path forward? ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This is pretty custom and would not work without all the proper licenses - but to reproduce: * Spin up the docker-compose environment from: https://github.com/sillsdev/serval * Set up a clearml docker agent using a docker image made from the master branch from: https://github.com/sillsdev/machine.py * Run the NmtBatch E2E test from: https://github.com/sillsdev/serval * The error occurs in the docker container in clearml. ### Expected behavior * Don't crash. * Likely, don't auto-create labels if none are given (the case here) * Or, if you are going to convert all the numbers to strings when saving to a dictionary, account for the id2label fields properly, such as always using strings or converting back to ints when loading from a dict, etc.
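A minimal reproduction of the failure outside the Trainer, plus one possible key normalization (an illustration, not the library's fix):

```python
import json

id2label = {0: "LABEL_0", 1: "LABEL_1", "0": "LABEL_0", "1": "LABEL_1"}

# sort_keys=True tries to order int and str keys together and raises TypeError,
# which is exactly the error seen when saving the checkpoint.
try:
    json.dumps({"id2label": id2label}, indent=2, sort_keys=True)
except TypeError as err:
    print(err)  # '<' not supported between instances of 'str' and 'int'

# Normalizing the keys to a single type before saving makes the dump succeed.
id2label = {int(k): v for k, v in id2label.items()}
print(json.dumps({"id2label": id2label}, indent=2, sort_keys=True))
```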
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23776/timeline
completed
null
null