Dataset schema (column, dtype, observed length/value range or number of classes):

| column | dtype | range / classes |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
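To make the schema concrete, here is a minimal sketch of loading and inspecting a dataset with these columns through the `datasets` library; the dataset id `some-user/transformers-github-issues` is a hypothetical placeholder, not a published repository.

```py
# Hypothetical example: the dataset id below is a placeholder (assumption).
from datasets import load_dataset

ds = load_dataset("some-user/transformers-github-issues", split="train")
print(ds.features)                     # column names and dtypes, as in the table above
print(ds[0]["title"], ds[0]["state"])  # e.g. "Meta-Transformer", "open"

# Issues and pull requests share one schema; `pull_request` is null for plain issues.
pulls = ds.filter(lambda row: row["pull_request"] is not None)
print(len(pulls), "of", len(ds), "rows are pull requests")
```

The rows below follow the column order of the schema.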
https://api.github.com/repos/huggingface/transformers/issues/25725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25725/comments
https://api.github.com/repos/huggingface/transformers/issues/25725/events
https://github.com/huggingface/transformers/issues/25725
1,864,901,414
I_kwDOCUB6oc5vKCMm
25,725
Meta-Transformer
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "I am interested in adding this model.\r\nHi @ArthurZucker @amyeroberts, do you think this will be a valid addition? ", "For starters, if there are no online checkpoints then there's nothing much to be added 😓 \r\nEDIT: seems like they are available!\r\nAppart from having a very complicated codebase with submodules here and there might be a good addition yeah! WDYT @amyeroberts ?\r\n\r\nIMO putting this on the hub will be a lot easier and good way to test the popularity! ", "@ArthurZucker @susnato, Thank you for considering request.", "@ArthurZucker Agreed - would be cool to add and adding to the hub first would be my suggestion! ", "> @ArthurZucker Agreed - would be cool to add and adding to the hub first would be my suggestion!\r\n\r\nIts Paper is also available here: https://arxiv.org/abs/2307.10802", "Thanks @ArthurZucker and @amyeroberts for your views, I will quickly get on to create a demo.\r\n\r\n**EDIT : As of now, they have published the weights for the shared encoder but most of the model-head weights(specific to different modalities) are not published yet. Waiting for them to get published, before creating the demo.**", "@ArthurZucker @amyeroberts @susnato there has been some update to the repo check here: https://github.com/invictus717/MetaTransformer " ]
1,692
1,696
null
CONTRIBUTOR
null
### Model description Introducing a groundbreaking research paper that explores the potential of unified multimodal learning, revolutionizing the way we process and integrate diverse data types such as text, images, audio, and more! 🌟 [meta-transformer](https://arxiv.org/pdf/2307.10802.pdf) ### Open source status - [X] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation https://github.com/invictus717/MetaTransformer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25725/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/25724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25724/comments
https://api.github.com/repos/huggingface/transformers/issues/25724/events
https://github.com/huggingface/transformers/issues/25724
1,864,900,937
I_kwDOCUB6oc5vKCFJ
25,724
Trainer.__init__() got an unexpected keyword argument 'model_flops'
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can't see any `model_flops` argument in the codebase. Is this `Trainer` from `transformers` library?", "> I can't see any `model_flops` argument in the codebase. Is this `Trainer` from `transformers` library?\r\n\r\nthis is part of Andrew Ng course released yesterday: https://learn.deeplearning.ai/finetuning-large-language-models/lesson/6/training-process. \r\n<img width=\"1847\" alt=\"Screenshot 2023-08-24 at 7 58 43 PM\" src=\"https://github.com/huggingface/transformers/assets/20493493/0f57c3d4-5acf-4580-badb-fe48462cbfee\">\r\n@ydshieh @pacman100 @ArthurZucker ", "You should post the question on that course forum. There is no `model_flops` argument in `transformers` codebase.", "They probably want to put `model_flops` in `training_args` and not pass directly to the Trainer's `__init__`. Only they know better their usage.", "Thanks for your reply, I already did. I wanted run it by your team to see\r\nif this is something in your pipeline. Advantages of doing this is it\r\nquantizes during training . Anyways, if you find something that’s good\r\notherwise you can close the ticket.\r\n\r\nOn Thu, Aug 24, 2023 at 20:57 Yih-Dar ***@***.***> wrote:\r\n\r\n> You should post the question on that course forum. There is no model_flops\r\n> argument in transformers codebase.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25724#issuecomment-1691897501>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNIYEB6PY5DAWYUQAC3XW5XETANCNFSM6AAAAAA342OOII>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "How to solve this problem?", "where is code after the correction???\r\n" ]
1,692
1,704
1,692
NONE
null
### System Info colab notebook ### Who can help? @ArthurZucker @pacman100 @you ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction During Andrew Ng new course on fine tuning LLM Dataset: Andyrasika/llama_dataset they used: ``` training_config = { "model": { "pretrained_name": model_name, "max_length" : 2048 }, "datasets": { "use_hf": use_hf, "path": dataset }, "verbose": True } ``` ``` model_flops = ( base_model.floating_point_ops( { "input_ids": torch.zeros( (1, training_config["model"]["max_length"]) ) } ) * training_args.gradient_accumulation_steps ) print(base_model) print("Memory footprint", base_model.get_memory_footprint() / 1e9, "GB") print("Flops", model_flops / 1e9, "GFLOPs") ``` and further ``` trainer = Trainer( model=base_model, model_flops=model_flops, total_steps=max_steps, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, ) ``` i get the following error: ``` trainer = Trainer( model=base_model, model_flops=model_flops, total_steps=max_steps, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, ) ``` i have seen model_flops but since these courses are in collabration with huggingface, wanted to check if this something new? ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-69-0c6be6067a75>](https://localhost:8080/#) in <cell line: 1>() ----> 1 trainer = Trainer( 2 model=base_model, 3 model_flops=model_flops, 4 total_steps=max_steps, 5 args=training_args, TypeError: Trainer.__init__() got an unexpected keyword argument 'model_flops' ``` ### Expected behavior run normally
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25724/timeline
completed
null
null
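The fix the maintainers point to in the thread above is to compute `model_flops` outside the `Trainer` and drop the unsupported keyword arguments. The sketch below is a plain-`transformers` rewrite under that assumption, with `distilgpt2` as a stand-in model; it is not the course's actual wrapper.

```py
# Sketch (assumption): course snippet rewritten for plain transformers, without
# the unsupported `model_flops` / `total_steps` kwargs in Trainer.__init__.
import torch
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # placeholder model
training_args = TrainingArguments(output_dir="out", gradient_accumulation_steps=4)

model_flops = (
    base_model.floating_point_ops(
        {"input_ids": torch.zeros((1, 2048), dtype=torch.long)}
    )
    * training_args.gradient_accumulation_steps
)
print("Flops", model_flops / 1e9, "GFLOPs")  # report FLOPs yourself, outside Trainer

trainer = Trainer(
    model=base_model,  # no model_flops= / total_steps= here
    args=training_args,
    # train_dataset=..., eval_dataset=...  (omitted in this sketch)
)
```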
https://api.github.com/repos/huggingface/transformers/issues/25723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25723/comments
https://api.github.com/repos/huggingface/transformers/issues/25723/events
https://github.com/huggingface/transformers/issues/25723
1,864,883,521
I_kwDOCUB6oc5vJ91B
25,723
NEW Model GPT-JX
{ "login": "alignment-ai", "id": 143067440, "node_id": "U_kgDOCIcJMA", "avatar_url": "https://avatars.githubusercontent.com/u/143067440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alignment-ai", "html_url": "https://github.com/alignment-ai", "followers_url": "https://api.github.com/users/alignment-ai/followers", "following_url": "https://api.github.com/users/alignment-ai/following{/other_user}", "gists_url": "https://api.github.com/users/alignment-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/alignment-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alignment-ai/subscriptions", "organizations_url": "https://api.github.com/users/alignment-ai/orgs", "repos_url": "https://api.github.com/users/alignment-ai/repos", "events_url": "https://api.github.com/users/alignment-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/alignment-ai/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "This model has been removed from the hub. It looks like an interesting model, why remove it? ", "> This model has been removed from the hub. It looks like an interesting model, why remove it?\r\n\r\nIt has been withdrawn by us and will be released along with 7b and 15b models " ]
1,692
1,695
null
NONE
null
### Model description request to add a new model gpt-jx-3b to the transformer library as well as request to create a custom model class, class names an be found in the below .txt files as well as at the below of the description, weights are present in hugging face model hub but made public for now, ["alien-ai/gpt-jx-3b"](https://huggingface.co/alien-ai/gpt-jx-3b) --id_repo. Just a little summary of the model has been provided in the repo.Currently the repo only contains pytorch_model.bin file and no files other than that. we provide you the code for modelling.py ,configuration.py and tokenization.py in .txt format [modelling_gptjx.txt](https://github.com/huggingface/transformers/files/12428050/modelling_gptjx.txt) [config.txt](https://github.com/huggingface/transformers/files/12428124/config.txt) [tokenization.txt] (https://github.com/huggingface/transformers/files/12428249/tokenization.txt) please check the files before proceeding. Make Sure to Set class names: GPTJXForCausalLM, GPTJXModel, GPTJXConfig, GPTJXTokenizer. Sorry, we are too busy right now so, i was not able to write the description properly. Please Check it Out Yourself. Please create the new model class as soon as possible ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Authors will be revealed later. repo-- ["alien-ai/gpt-jx-3b"](https://huggingface.co/alien-ai/gpt-jx-3b)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25723/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/25722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25722/comments
https://api.github.com/repos/huggingface/transformers/issues/25722/events
https://github.com/huggingface/transformers/pull/25722
1,864,825,986
PR_kwDOCUB6oc5YrTOh
25,722
Generate: nudge towards `do_sample=False` when `temperature=0.0`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> We could also just set do_sample = False in case temperature = 0. Will let you decide !\r\n\r\nI agree we should do that! But I'm going to leave that for the generate refactor, as it implies significant code changes to do it right :)" ]
1,692
1,692
1,692
MEMBER
null
# What does this PR do? Related issue: https://github.com/facebookresearch/llama/issues/687 Improves the error message when `temperature=0.0`, which asymptotically corresponds to greedy decoding... except that it results in numerical problems :D ___________________________ test run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilgpt2") model = AutoModelForCausalLM.from_pretrained("distilgpt2") inputs = tokenizer(["The quick brown"], return_tensors="pt") gen_out = model.generate(**inputs, do_sample=True, temperature=0.0) ``` yields ``` ValueError: `temperature` (=0.0) has to be a strictly positive float, otherwise your next token scores will be invalid. If you're looking for greedy decoding strategies, set `do_sample=False`. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25722", "html_url": "https://github.com/huggingface/transformers/pull/25722", "diff_url": "https://github.com/huggingface/transformers/pull/25722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25722.patch", "merged_at": 1692882944000 }
https://api.github.com/repos/huggingface/transformers/issues/25721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25721/comments
https://api.github.com/repos/huggingface/transformers/issues/25721/events
https://github.com/huggingface/transformers/pull/25721
1,864,781,798
PR_kwDOCUB6oc5YrJpE
25,721
[PEFT] Allow PEFT model dict to be loaded
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,694
1,694
MEMBER
null
In order to allow `peft` to be leveraged in `diffusers` without breaking changes we need to allow loading adapters directly from a loaded `state_dict`. The reason is that in `diffusers` we currently store LoRA checkpoints in a format that is different to the PEFT format so we cannot just pass the model_id. This PR allows the user to manually pass a loaded PEFT model checkpoint as well as a PEFT configuration, thus circumventing the need to pass a model id. In pseudo code, the integration of `transformers` + PEFT in `diffusers` should then look as follows the ["load_lora"](https://github.com/huggingface/diffusers/blob/24c5e7708bb75076dd8e79ccaea195640555f945/src/diffusers/loaders.py#L1213) function of `diffusers`. ```py def load_lora_into_text_encoder(cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0): peft_state_dict, peft_config = convert_to_peft_format(state_dict, ...) # <- this function will take care of all the remapping necessary for the different formats text_encoder.load_adapter(peft_state_dict, peft_config=peft_config) ``` **Note**, there might be more changes we have to do to PEFT, `transformers'` PEFT integration to be sure that everything works as expected. E.g. I'm not yet sure how to pass `network_alphas` etc... to PEFT to make sure we get 1-to-1 the same result. cc @younesbelkada @sayakpaul @BenjaminBossan @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25721/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25721", "html_url": "https://github.com/huggingface/transformers/pull/25721", "diff_url": "https://github.com/huggingface/transformers/pull/25721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25721.patch", "merged_at": 1694794922000 }
https://api.github.com/repos/huggingface/transformers/issues/25720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25720/comments
https://api.github.com/repos/huggingface/transformers/issues/25720/events
https://github.com/huggingface/transformers/issues/25720
1,864,766,874
I_kwDOCUB6oc5vJhWa
25,720
Downloading llama model
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you share the issue that you are having? ", "> \r\n\r\njust added the error above.", "Did you try to install the packages you are missing ? \r\n```bash\r\npip install -u accelerate bitsandbytes\r\n```\r\n\r\nAccording to the contribution guidelines could you share the output of `transformers-cli env`? Would help us know on which hardware you are running this. Pretty sure `load_in_8bit` is not available on `MAC`.", "\r\n\r\n\r\n> Did you try to install the packages you are missing ?\r\n> \r\n> ```shell\r\n> pip install -u accelerate bitsandbytes\r\n> ```\r\n> \r\n> According to the contribution guidelines could you share the output of `transformers-cli env`? Would help us know on which hardware you are running this. Pretty sure `load_in_8bit` is not available on `MAC`.\r\n\r\ni did:\r\n```\r\n!pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 trl==0.4.7 transformers\r\n```\r\ndoing without specific version uploads previous versions. I am using colab pro notebook", "Did you restart the environment after installing the libraries to make sure they are used?\r\nCould you share your notebook for reproducibility? ", "Yes, three times . Let me share the code in another notebook since this one\r\nhas a lot of code.\r\n\r\nOn Thu, Aug 24, 2023 at 16:41 Arthur ***@***.***> wrote:\r\n\r\n> Did you restart the environment after installing the libraries to make\r\n> sure they are used?\r\n> Could you share your notebook for reproducibility?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25720#issuecomment-1691478832>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNLCI7CSZUMERSTLSZTXW4ZFBANCNFSM6AAAAAA34XC5PA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "> Did you restart the environment after installing the libraries to make sure they are used? Could you share your notebook for reproducibility?\r\n\r\nHere is the colab notebook: https://colab.research.google.com/drive/1PGp9C7iGO7Lw9pUfwJ0RMsB0-J_xYBZn?usp=sharing ", "Thanks, pinging @younesbelkada for him to have a look", "Hmm looking at the notebook I don't see any reason to not work, does it works on a fresh new environment?", "Yeah, already tried it but shows the shared error.\r\n\r\nOn Fri, Aug 25, 2023 at 11:14 Younes Belkada ***@***.***>\r\nwrote:\r\n\r\n> Hmm looking at the notebook I don't see any reason to not work, does it\r\n> works on a fresh new environment?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25720#issuecomment-1692793168>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNL3BT2PMNTHVW6UF5DXXA3SZANCNFSM6AAAAAA34XC5PA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Hmm I have just copied the notebook you shared and ran it and it worked fine on my end . Can you maybe delete and restart runtime?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
### System Info colab pro ### Who can help? @younesbelkada @pacman100 @arth ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` installed it manually: !pip3 install -q langchain openai jsonlines !pip3 install -q ipykernel jupyter datasets einops wandb !pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 trl==0.4.7 transformers import jsonlines import itertools import pandas as pd from pprint import pprint import torch import datasets from datasets import load_dataset from huggingface_hub import notebook_login # from llama import BasicModelRunner from transformers import AutoTokenizer, AutoModelForCausalLM,BitsAndBytesConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_name = "TintinMeimei/NousResearch-Llama-2-7b-chat-hf" device_map = {"": 0} bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtyp=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, device_map=device_map ) # this should be set as False for finetuning model.config.use_cache = False tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token ``` ERROR: ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) [<ipython-input-6-b7b86b5aff18>](https://localhost:8080/#) in <cell line: 10>() 8 bnb_4bit_compute_dtyp=torch.bfloat16, 9 ) ---> 10 model = AutoModelForCausalLM.from_pretrained( 11 model_name, 12 quantization_config=bnb_config, 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs) 2399 if load_in_8bit or load_in_4bit: 2400 if not (is_accelerate_available() and is_bitsandbytes_available()): -> 2401 raise ImportError( 2402 "Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of" 2403 " bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or" ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes` --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ``` ### Expected behavior model needs to run?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25720/timeline
completed
null
null
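The resolution in the thread above amounts to making sure `accelerate` and `bitsandbytes` are importable before attempting a quantized load. A minimal pre-flight check is sketched below; the two availability helpers do exist in `transformers.utils`, but the wrapper itself is an assumption, not the thread's confirmed fix.

```py
# Sketch (assumption): fail early with a clear message instead of deep inside
# from_pretrained when the quantization dependencies are missing.
from transformers.utils import is_accelerate_available, is_bitsandbytes_available

def check_quantization_deps() -> None:
    if not (is_accelerate_available() and is_bitsandbytes_available()):
        raise ImportError(
            "4-bit/8-bit loading needs both `accelerate` and `bitsandbytes`: "
            "run `pip install -U accelerate bitsandbytes`, then restart the runtime."
        )

check_quantization_deps()  # call before AutoModelForCausalLM.from_pretrained(...)
```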
https://api.github.com/repos/huggingface/transformers/issues/25719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25719/comments
https://api.github.com/repos/huggingface/transformers/issues/25719/events
https://github.com/huggingface/transformers/issues/25719
1,864,749,219
I_kwDOCUB6oc5vJdCj
25,719
Trouble with AutoModelForSequenceClassification + Lora + Deepspeed_zero3
{ "login": "liuyu666-thu", "id": 33365197, "node_id": "MDQ6VXNlcjMzMzY1MTk3", "avatar_url": "https://avatars.githubusercontent.com/u/33365197?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liuyu666-thu", "html_url": "https://github.com/liuyu666-thu", "followers_url": "https://api.github.com/users/liuyu666-thu/followers", "following_url": "https://api.github.com/users/liuyu666-thu/following{/other_user}", "gists_url": "https://api.github.com/users/liuyu666-thu/gists{/gist_id}", "starred_url": "https://api.github.com/users/liuyu666-thu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liuyu666-thu/subscriptions", "organizations_url": "https://api.github.com/users/liuyu666-thu/orgs", "repos_url": "https://api.github.com/users/liuyu666-thu/repos", "events_url": "https://api.github.com/users/liuyu666-thu/events{/privacy}", "received_events_url": "https://api.github.com/users/liuyu666-thu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pretty sure you should be asking this on [the forum](https://discuss.huggingface.co/), include a full reproducer with a full traceback.", "> Pretty sure you should be asking this on [the forum](https://discuss.huggingface.co/), include a full reproducer with a full traceback.\r\n\r\nSure. I'll post one there.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
Need to finetune a seq-classifier based on Llama-70B using Lora, but get stuck when integrating deepspeed_zero3. Deps version: - transformers 4.31.0 - peft 0.5.0 - deepspeed 0.10.1 My code is like: 1. initialize training args: `train_args = transformers.TrainingArguments(..., deepspeed=ds_config.json)` 2. create model: `model = LlamaForSequenceClassification.from_pretrained(...)` 3. wrap it with peft: `model = get_peft_model(model, lora_config)` 4. enable gradient checkpointing: `model.gradient_checkpointing_enable()` 5. train it with HF-trainer: `trainer = Trainer(..., args=train_args)` 6. launch the training using `torchrun xxx.py` The error is like: `{'id': 547, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {1482}, 'ds_tensor.shape': torch.Size([0])}` Seems like the classification layer is sharded and something incompetible happens. Can anyone show the standard way to use deepspeed under hf-trainer in this scene?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25719/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25718/comments
https://api.github.com/repos/huggingface/transformers/issues/25718/events
https://github.com/huggingface/transformers/pull/25718
1,864,711,563
PR_kwDOCUB6oc5Yq6gJ
25,718
Fix failing `test_batch_generation` for bloom
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> 😅 Nice catch !\r\n\r\nI was caught by the failed CI 😢 ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? #25571 changed this test. There was a tiny issue. See my comment along the change in this PR. We should probably to change this variable names in multiple places for house keeping. Also cc @gante 😄
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25718/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25718", "html_url": "https://github.com/huggingface/transformers/pull/25718", "diff_url": "https://github.com/huggingface/transformers/pull/25718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25718.patch", "merged_at": 1692868530000 }
https://api.github.com/repos/huggingface/transformers/issues/25717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25717/comments
https://api.github.com/repos/huggingface/transformers/issues/25717/events
https://github.com/huggingface/transformers/issues/25717
1,864,703,258
I_kwDOCUB6oc5vJR0a
25,717
Remove the input tokens
{ "login": "satnair", "id": 497374, "node_id": "MDQ6VXNlcjQ5NzM3NA==", "avatar_url": "https://avatars.githubusercontent.com/u/497374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/satnair", "html_url": "https://github.com/satnair", "followers_url": "https://api.github.com/users/satnair/followers", "following_url": "https://api.github.com/users/satnair/following{/other_user}", "gists_url": "https://api.github.com/users/satnair/gists{/gist_id}", "starred_url": "https://api.github.com/users/satnair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/satnair/subscriptions", "organizations_url": "https://api.github.com/users/satnair/orgs", "repos_url": "https://api.github.com/users/satnair/repos", "events_url": "https://api.github.com/users/satnair/events{/privacy}", "received_events_url": "https://api.github.com/users/satnair/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, this kind of question should be asked on [the forum](https://discuss.huggingface.co/) 🤗 ", "Sure. Thanks." ]
1,692
1,692
1,692
NONE
null
Hi, Scenario: I have got 5 files , which has code in it. Now I am trying to evaluate the files and get some recommendations via starcoder model. Challenge: I am able to iterate thru all files and get recommendations independently. But when running in a single flow in a loop, after the first file is encoded and decoded, for the second file, the input_ids of the previous file remains. How to remove the input_ids tokens of the previous file. for each file input_ids: torch.Tensor = self.tokenizer.encode(query, max_length=7000, return_tensors='pt', truncation=True).to(self.device) print(len(input_ids[0])) For example: 1st file: Len of input IDs is , 1111 2nd file[2nd iteration]: Len of input IDs is, 3018 [but it should 1907] Please help with a solution for this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25717/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25716/comments
https://api.github.com/repos/huggingface/transformers/issues/25716/events
https://github.com/huggingface/transformers/issues/25716
1,864,675,763
I_kwDOCUB6oc5vJLGz
25,716
LLama2 tokenizer unexpected behaviors with special tokens
{ "login": "ShomyLiu", "id": 10215945, "node_id": "MDQ6VXNlcjEwMjE1OTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/10215945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShomyLiu", "html_url": "https://github.com/ShomyLiu", "followers_url": "https://api.github.com/users/ShomyLiu/followers", "following_url": "https://api.github.com/users/ShomyLiu/following{/other_user}", "gists_url": "https://api.github.com/users/ShomyLiu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShomyLiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShomyLiu/subscriptions", "organizations_url": "https://api.github.com/users/ShomyLiu/orgs", "repos_url": "https://api.github.com/users/ShomyLiu/repos", "events_url": "https://api.github.com/users/ShomyLiu/events{/privacy}", "received_events_url": "https://api.github.com/users/ShomyLiu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You expectation is wrong, the output of the slow tokenizer is correct. It's a known bug, as the `fast` fix for special tokens is not there yet. ", "Thanks for the explanation. It is indeed a large change between different versions.\r\nIn transformers==4.31.0, the behavior is also different as follows:\r\n```\r\nIn [8]: tok.tokenize(\"</s>Human\")\r\nOut[8]: ['</s>', 'Human']\r\n\r\nIn [9]: tok(\"</s>Human\")\r\nOut[9]: {'input_ids': [1, 2, 0], 'attention_mask': [1, 1, 1]}\r\n```\r\nAlthough I have read the PR documents before, it is also confusing about which one is right.\r\nIs it a good choice that uses the legacy=True for stable consideration?\r\n\r\n", "Yes `legacy = True` is a very good choice. The `transformers==4.31.0` had bugs, which were fixed in the latest version (which is why you have different results). \r\n```python \r\n>>> tokenizer.sp_model.encode(\"Human\", out_type=str)\r\n```\r\nis the correct output (without a prefix space being added.\r\nNow previously you would have \r\n```python \r\n>>> tokenizer.sp_model.encode(\"Human\", out_type=str)\r\n['▁Human']\r\n```\r\nand the `▁` was stripped ", "I am sorry for the confusion and for the issues, this was a very nasty bug ", "Got it. Thanks a lot for your efforts on this confusing bug, and have seen that there are quite several commits about this issue.\r\nI will keep `legacy = True` now and look forward to your final fix. Thanks again.\r\n", "Thanks for your valuable feedback! Community issues help us know most of the edge cases that we might have missed", "🤗 " ]
1,692
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.33.0.dev0 - Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0a0+b5021ba (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <yes> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction With the latest version of transformers: ``` import transformers tok1 = transformers.AutoTokenizer.from_pretrained("llama2-7b", use_fast=False) tok2 = transformers.AutoTokenizer.from_pretrained("llama2-7b", use_fast=False, legacy=True) print(tok1.tokenize("</s>Human is Here")) # output: ['</s>', 'H', 'uman', '▁is', '▁Here'] print(tok2.tokenize("</s>Human is Here")) # output ['</s>', '▁Human', '▁is', '▁Here'] ``` It seems that it is wrong with the default setting (legacy = False). ### Expected behavior The expected behavior is: ``` ['</s>', '▁Human', '▁is', '▁Here'] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25716/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25715/comments
https://api.github.com/repos/huggingface/transformers/issues/25715/events
https://github.com/huggingface/transformers/pull/25715
1,864,673,676
PR_kwDOCUB6oc5YqyVB
25,715
Fix number of minimal calls to the Hub with peft integration
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25715). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? This PR makes sure we don't add two calls to every model instantiation when PEFT is installed by moving the calls to `find_adapter_config_file` after the config is created so we can use the commit hash.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25715/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25715", "html_url": "https://github.com/huggingface/transformers/pull/25715", "diff_url": "https://github.com/huggingface/transformers/pull/25715.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25715.patch", "merged_at": 1692881772000 }
https://api.github.com/repos/huggingface/transformers/issues/25714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25714/comments
https://api.github.com/repos/huggingface/transformers/issues/25714/events
https://github.com/huggingface/transformers/pull/25714
1,864,655,084
PR_kwDOCUB6oc5YquV8
25,714
Patch with accelerate xpu
{ "login": "abhilash1910", "id": 30946547, "node_id": "MDQ6VXNlcjMwOTQ2NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/30946547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhilash1910", "html_url": "https://github.com/abhilash1910", "followers_url": "https://api.github.com/users/abhilash1910/followers", "following_url": "https://api.github.com/users/abhilash1910/following{/other_user}", "gists_url": "https://api.github.com/users/abhilash1910/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhilash1910/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhilash1910/subscriptions", "organizations_url": "https://api.github.com/users/abhilash1910/orgs", "repos_url": "https://api.github.com/users/abhilash1910/repos", "events_url": "https://api.github.com/users/abhilash1910/events{/privacy}", "received_events_url": "https://api.github.com/users/abhilash1910/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger seems like ruff is flagging errors on other files (which are not edited in this commit ). Could you take a look ? Thanks", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25714). All of your documentation changes will be reflected on that endpoint.", "@abhilash1910 I am no longer working at Hugging Face, so you should ping @muellerzr to help :-)", "@amyeroberts thanks for the suggestions. However when I try to run ```make style``` then many unrelated files are getting edited and ruff test triggers failure on some unmodified files. One such example was the markupml model test script (make style changed the formatting).Could you suggest how to go about this ? ", "@abhilash1910 Are you running the ruff and black versions expected in the library? They can be installed using `pip install -Ue .[quality]`", "> @abhilash1910 Are you running the ruff and black versions expected in the library? They can be installed using `pip install -Ue .[quality]`\r\n\r\nYes I used the same versions , make style is causing some other files to change (the ones added below). Should I add that in the commit? @amyeroberts @muellerzr could you suggest.\r\n``` modified: src/transformers/__init__.py\r\n modified: src/transformers/models/code_llama/tokenization_code_llama.py\r\n modified: src/transformers/models/code_llama/tokenization_code_llama_fast.py\r\n modified: src/transformers/models/idefics/modeling_idefics.py\r\n modified: src/transformers/models/llama/tokenization_llama.py\r\n modified: src/transformers/models/llama/tokenization_llama_fast.py\r\n```\r\n", "I think the initial tests pass now; I have not added the style fixes of the other mentioned files ( I used ```pip install -Ue .[quality]``` for black/ruff/isort etc ) . No files are open in my workspace IDE. Seems when I run make style on markuplm , it is formatting seq list (without it tests fail on make style). \r\n@amyeroberts @muellerzr could you trigger the slow tests(if any) and re-review? Thanks. ", "@amyeroberts @muellerzr could you please re-review (retrigger tests)? Since this integration is a little urgent for us ; thanks again for all suggestions. " ]
1,692
1,693
1,693
CONTRIBUTOR
null
Patch for Accelerate XPU support. cc @sgugger @muellerzr
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25714/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25714", "html_url": "https://github.com/huggingface/transformers/pull/25714", "diff_url": "https://github.com/huggingface/transformers/pull/25714.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25714.patch", "merged_at": 1693924902000 }
https://api.github.com/repos/huggingface/transformers/issues/25713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25713/comments
https://api.github.com/repos/huggingface/transformers/issues/25713/events
https://github.com/huggingface/transformers/pull/25713
1,864,647,118
PR_kwDOCUB6oc5YqsoU
25,713
[`AutoGPTQ`] Add correct installation of GPTQ library + fix slow tests
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Can confirm the slow tests now pass in the docker image\r\n\r\n> So basically we want to install the same version but with cu118 support, right?\r\n\r\nIt is slightly trickier than that, auto-gptq listed in pypi does not contain the latest supported versions (>= `0.4.1`), one needs to install that directly through the command that I have shared. Also that way you get the pre-built wheels instead of building auto-gptq at each install (which is the case right now if you do `pip install auto-gptq`)" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Per the instructions of installing auto-gptq library we need to slightly update the Dockerfile otherwise it will install the `0.3.2` version which is not compatible with the integration Will also update some expected values to make the slow tests pass cc @ydshieh @SunMarc Can confirm the Docker image is built successfully
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25713", "html_url": "https://github.com/huggingface/transformers/pull/25713", "diff_url": "https://github.com/huggingface/transformers/pull/25713.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25713.patch", "merged_at": 1692881836000 }
https://api.github.com/repos/huggingface/transformers/issues/25712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25712/comments
https://api.github.com/repos/huggingface/transformers/issues/25712/events
https://github.com/huggingface/transformers/issues/25712
1,864,620,485
I_kwDOCUB6oc5vI9nF
25,712
NameError: name 'torch' is not defined
{ "login": "pseudotensor", "id": 2249614, "node_id": "MDQ6VXNlcjIyNDk2MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pseudotensor", "html_url": "https://github.com/pseudotensor", "followers_url": "https://api.github.com/users/pseudotensor/followers", "following_url": "https://api.github.com/users/pseudotensor/following{/other_user}", "gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}", "starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions", "organizations_url": "https://api.github.com/users/pseudotensor/orgs", "repos_url": "https://api.github.com/users/pseudotensor/repos", "events_url": "https://api.github.com/users/pseudotensor/events{/privacy}", "received_events_url": "https://api.github.com/users/pseudotensor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, \r\n\r\nWhat's your `bitsandbytes` version? cc @younesbelkada .\r\n\r\ntorch is imported under `is_bitsandbytes_available()`, so might be a version issue.\r\n\r\n```\r\nif is_bitsandbytes_available():\r\n import bitsandbytes as bnb\r\n import torch\r\n```", "```\r\n(h2ogpt) jon@pseudotensor:~/h2ogpt$ pip freeze | grep bits\r\nbitsandbytes==0.41.1\r\n```\r\n\r\nLatest on pypi, I don't think relevant.", "This is because we changed a bit the `is_bitsandbytes_available()` condition, https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L539 as you can see if no GPU is available things should behave as bitsandbytes is not installed. I also think users should be aware that bnb can't be used under a non-GPU env.\r\nEDIT: it is a bad idea to raise an error if no GPU is installed", "Let me dig a bit and get back to you", "Restarting runtime and running `bitsandbytes==0.40.2` worked for me", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
### System Info `transformers` version: 4.32.0 - Platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - gpu_ids: 0 - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction There is conditional in `/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py` that in new transformers (not prior) leaves torch undefined if bitsandbytes can't be used. E.g. for CPU. Then one hits: ``` model = model_loader( File "/home/jon/miniconda3/envs/alpaca/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 516, in from_pretrained return model_class.from_pretrained( File "/home/jon/miniconda3/envs/alpaca/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained ) = cls._load_pretrained_model( File "/home/jon/miniconda3/envs/alpaca/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model( File "/home/jon/miniconda3/envs/alpaca/lib/python3.10/site-packages/transformers/modeling_utils.py", line 744, in _load_state_dict_into_meta_model set_module_quantized_tensor_to_device( File "/home/jon/miniconda3/envs/alpaca/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py", line 59, in set_module_quantized_tensor_to_device if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None: NameError: name 'torch' is not defined ``` Repro: * Install bitsandbytes * `export CUDA_VISIBLE_DEVICES=` * python checkbits.py ```checkbits.p from transformers import BitsAndBytesConfig base_model = 'OpenAssistant/reward-model-deberta-v3-large-v2' false = False null = None model_kwargs = {'local_files_only': False, 'resume_download': True, 'use_auth_token': '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', 'trust_remote_code': True, 'offload_folder': 'fooodasf3/offline_folder', 'revision': None, 'device_map': {'': 'cpu'}, 'quantization_config': BitsAndBytesConfig(**{ "bnb_4bit_compute_dtype": "bfloat16", "bnb_4bit_quant_type": "fp4", "bnb_4bit_use_double_quant": false, "llm_int8_enable_fp32_cpu_offload": false, "llm_int8_has_fp16_weight": false, "llm_int8_skip_modules": null, "llm_int8_threshold": 6.0, "load_in_4bit": false, "load_in_8bit": false, "quant_method": "bitsandbytes" }) } from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained(base_model, **model_kwargs) ``` based upon code here: https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2#how-to-use ### Expected behavior No failure
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25712/timeline
completed
null
null
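For context on the record above, here is a minimal sketch of the failure mode the traceback describes; the helper names and availability check are simplified stand-ins, not the actual `transformers` source:

```python
# Illustrative only: simplified stand-ins for the real transformers helpers.
import importlib.util


def is_bitsandbytes_available() -> bool:
    # As described in the thread: bitsandbytes is treated as unavailable
    # when no GPU is present, even if the package itself is installed.
    if importlib.util.find_spec("bitsandbytes") is None:
        return False
    import torch
    return torch.cuda.is_available()


if is_bitsandbytes_available():
    import bitsandbytes as bnb  # noqa: F401
    import torch  # `torch` is only bound inside this branch


def set_module_quantized_tensor_to_device(old_value, device):
    # On a CPU-only machine the guard above never ran, so this line raises
    # NameError: name 'torch' is not defined, matching the reported traceback.
    # Importing torch unconditionally at module top level avoids the error.
    return old_value.device == torch.device("meta") and device != "meta"
```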
https://api.github.com/repos/huggingface/transformers/issues/25711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25711/comments
https://api.github.com/repos/huggingface/transformers/issues/25711/events
https://github.com/huggingface/transformers/pull/25711
1,864,614,458
PR_kwDOCUB6oc5Yqlkf
25,711
docs: Resolve typos in warning text
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25711). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
MEMBER
null
# What does this PR do? Resolves a warning & a double space in a new warning text. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? @ArthurZucker - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25711/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25711", "html_url": "https://github.com/huggingface/transformers/pull/25711", "diff_url": "https://github.com/huggingface/transformers/pull/25711.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25711.patch", "merged_at": 1692868467000 }
https://api.github.com/repos/huggingface/transformers/issues/25710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25710/comments
https://api.github.com/repos/huggingface/transformers/issues/25710/events
https://github.com/huggingface/transformers/pull/25710
1,864,602,784
PR_kwDOCUB6oc5YqjCV
25,710
[`PEFT`] Fix peft version
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25710). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fixes the peft version check: in fact, we should check that the current version is strictly greater than the required minimum version, not the opposite. That was leading to failing tests in the nightly CI. cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25710", "html_url": "https://github.com/huggingface/transformers/pull/25710", "diff_url": "https://github.com/huggingface/transformers/pull/25710.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25710.patch", "merged_at": 1692871753000 }
https://api.github.com/repos/huggingface/transformers/issues/25709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25709/comments
https://api.github.com/repos/huggingface/transformers/issues/25709/events
https://github.com/huggingface/transformers/issues/25709
1,864,543,419
I_kwDOCUB6oc5vIqy7
25,709
Mask2Former after 4.32 release uses more memory
{ "login": "Emilon1928", "id": 23121677, "node_id": "MDQ6VXNlcjIzMTIxNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/23121677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Emilon1928", "html_url": "https://github.com/Emilon1928", "followers_url": "https://api.github.com/users/Emilon1928/followers", "following_url": "https://api.github.com/users/Emilon1928/following{/other_user}", "gists_url": "https://api.github.com/users/Emilon1928/gists{/gist_id}", "starred_url": "https://api.github.com/users/Emilon1928/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Emilon1928/subscriptions", "organizations_url": "https://api.github.com/users/Emilon1928/orgs", "repos_url": "https://api.github.com/users/Emilon1928/repos", "events_url": "https://api.github.com/users/Emilon1928/events{/privacy}", "received_events_url": "https://api.github.com/users/Emilon1928/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`CUDA out of memory. Tried to allocate 56.25 GiB`. It looks a lot.\r\n\r\n@amyeroberts Could you take a look? I tried the notebook and it shows the issue." ]
1,692
1,693
1,693
NONE
null
### System Info transformers 4.32.0 ubuntu 22.04 python 3.9 @amyeroberts ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the script in the colab notebook twice (with T4 GPU): 1. with version 4.31.0 (everything should work fine) 2. with version 4.32.0 (OOM in inference) https://colab.research.google.com/drive/1xq54l9a2AQLIHT5jw63btifbOrvSqzts?usp=sharing ### Expected behavior GPU memory consumption should be constant between versions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25709/timeline
completed
null
null
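One way to quantify a regression like the one reported above is to compare peak GPU memory across the two versions. A hedged sketch (runnable on a CUDA machine; the linear layer is a placeholder for the Mask2Former inference in the linked notebook):

```python
# Hedged sketch for measuring peak GPU memory; `model` and `inputs` here are
# toy stand-ins, not the notebook's Mask2Former pipeline.
import torch

model = torch.nn.Linear(1024, 1024).cuda()      # stand-in for the real model
inputs = torch.randn(8, 1024, device="cuda")    # stand-in for the real inputs

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    _ = model(inputs)
torch.cuda.synchronize()
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```

Running the same measurement under `transformers==4.31.0` and `4.32.0` would make the reported difference concrete.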
https://api.github.com/repos/huggingface/transformers/issues/25708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25708/comments
https://api.github.com/repos/huggingface/transformers/issues/25708/events
https://github.com/huggingface/transformers/pull/25708
1,864,459,812
PR_kwDOCUB6oc5YqEdY
25,708
Update list of persons to tag
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? This updates the lists of persons to tag, removing me and adding something for quantization. cc @SunMarc @younesbelkada @muellerzr @pacman100 since you get new stuff.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25708/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25708", "html_url": "https://github.com/huggingface/transformers/pull/25708", "diff_url": "https://github.com/huggingface/transformers/pull/25708.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25708.patch", "merged_at": 1692864810000 }
https://api.github.com/repos/huggingface/transformers/issues/25707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25707/comments
https://api.github.com/repos/huggingface/transformers/issues/25707/events
https://github.com/huggingface/transformers/pull/25707
1,864,444,864
PR_kwDOCUB6oc5YqBSg
25,707
Correct progress bar update step on all no_trainer files. Change description of train, validation file of mlm
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr " ]
1,692
1,694
1,694
CONTRIBUTOR
null
Hi, following up on #25324 (adding a text description for the train and validation files of clm_no_trainer) and #25691 (correcting the progress bar update step), I noticed that many of the no_trainer files still have these problems, so I made a pull request to correct them. In the edit: - Some files such as run_translation_no_trainer are not correct about the args.gradient_accumulation_steps name in completed_steps (the original is args.gradient_accumulation_stepp, which is not defined anywhere). - In run_qa_no_trainer, the resume_step is not yet multiplied by args.gradient_accumulation_steps, so I corrected it by multiplying it. I would like to cc @sgugger to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25707/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25707", "html_url": "https://github.com/huggingface/transformers/pull/25707", "diff_url": "https://github.com/huggingface/transformers/pull/25707.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25707.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25706/comments
https://api.github.com/repos/huggingface/transformers/issues/25706/events
https://github.com/huggingface/transformers/pull/25706
1,864,431,072
PR_kwDOCUB6oc5Yp-YY
25,706
🌐 [i18n-KO] Translated peft.md to Korean
{ "login": "nuatmochoi", "id": 46990061, "node_id": "MDQ6VXNlcjQ2OTkwMDYx", "avatar_url": "https://avatars.githubusercontent.com/u/46990061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nuatmochoi", "html_url": "https://github.com/nuatmochoi", "followers_url": "https://api.github.com/users/nuatmochoi/followers", "following_url": "https://api.github.com/users/nuatmochoi/following{/other_user}", "gists_url": "https://api.github.com/users/nuatmochoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/nuatmochoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nuatmochoi/subscriptions", "organizations_url": "https://api.github.com/users/nuatmochoi/orgs", "repos_url": "https://api.github.com/users/nuatmochoi/repos", "events_url": "https://api.github.com/users/nuatmochoi/events{/privacy}", "received_events_url": "https://api.github.com/users/nuatmochoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25706). All of your documentation changes will be reflected on that endpoint.", "LGTM! 👍" ]
1,692
1,693
1,693
CONTRIBUTOR
null
# What does this PR do? Translated the `peft.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team OSSCA, may you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25706/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 2, "hooray": 0, "confused": 0, "heart": 2, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25706/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25706", "html_url": "https://github.com/huggingface/transformers/pull/25706", "diff_url": "https://github.com/huggingface/transformers/pull/25706.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25706.patch", "merged_at": 1693314600000 }
https://api.github.com/repos/huggingface/transformers/issues/25705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25705/comments
https://api.github.com/repos/huggingface/transformers/issues/25705/events
https://github.com/huggingface/transformers/pull/25705
1,864,379,564
PR_kwDOCUB6oc5YpzjN
25,705
Add type hints for several pytorch models (batch-3)
{ "login": "nablabits", "id": 33068707, "node_id": "MDQ6VXNlcjMzMDY4NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/33068707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nablabits", "html_url": "https://github.com/nablabits", "followers_url": "https://api.github.com/users/nablabits/followers", "following_url": "https://api.github.com/users/nablabits/following{/other_user}", "gists_url": "https://api.github.com/users/nablabits/gists{/gist_id}", "starred_url": "https://api.github.com/users/nablabits/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nablabits/subscriptions", "organizations_url": "https://api.github.com/users/nablabits/orgs", "repos_url": "https://api.github.com/users/nablabits/repos", "events_url": "https://api.github.com/users/nablabits/events{/privacy}", "received_events_url": "https://api.github.com/users/nablabits/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25705). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,694
1,692
CONTRIBUTOR
null
# What does this PR do? Addresses some of the models in https://github.com/huggingface/transformers/issues/16059: 1. ErnieM 1. `ErnieMForInformationExtraction` 2. `ErnieMForMultipleChoice` 3. `ErnieMForQuestionAnswering` 4. `ErnieMForSequenceClassification` 5. `ErnieMForTokenClassification` 6. `ErnieMModel` 2. `EsmForProteinFolding` 3. `GraphormerModel` 4. `InstructBlipQFormerModel` 5. `LayoutLMForMaskedLM` 6. `LukeForEntitySpanClassification` ## Who can review? @Rocketknight1, please
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25705/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25705", "html_url": "https://github.com/huggingface/transformers/pull/25705", "diff_url": "https://github.com/huggingface/transformers/pull/25705.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25705.patch", "merged_at": 1692972775000 }
https://api.github.com/repos/huggingface/transformers/issues/25704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25704/comments
https://api.github.com/repos/huggingface/transformers/issues/25704/events
https://github.com/huggingface/transformers/issues/25704
1,864,340,235
I_kwDOCUB6oc5vH5ML
25,704
Use `F.multi_head_attention_forward()` to take advantage of PyTorch's Flash attention
{ "login": "gau-nernst", "id": 26946864, "node_id": "MDQ6VXNlcjI2OTQ2ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gau-nernst", "html_url": "https://github.com/gau-nernst", "followers_url": "https://api.github.com/users/gau-nernst/followers", "following_url": "https://api.github.com/users/gau-nernst/following{/other_user}", "gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}", "starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions", "organizations_url": "https://api.github.com/users/gau-nernst/orgs", "repos_url": "https://api.github.com/users/gau-nernst/repos", "events_url": "https://api.github.com/users/gau-nernst/events{/privacy}", "received_events_url": "https://api.github.com/users/gau-nernst/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Something is being cooked up in #25598 😉 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,696
1,696
CONTRIBUTOR
null
### Feature request Currently I'm using Wav2Vec 2.0 models. Digging into the code, I can see that it manually computes multi-head attention (and actually it copies from Bart). Using `F.multi_head_attention_forward()` would enjoy the benefits of any new improvements PyTorch brings (e.g. Flash attention) without installing extra libraries to do the hacking (i.e. optimum). The current solution is to use HF optimum to convert the model, which calls a private PyTorch's method. https://github.com/huggingface/optimum/blob/05d20df3e6602e26d01cf3994a108de5b097a719/optimum/bettertransformer/models/encoder_models.py#L1415 ### Motivation To take advantage of Flash attention, optimum is required to convert the model. I was quite surprised that using Flash attention is not the default behavior of HF models. By using `F.multi_head_attention_forward()`, the users can enjoy the best attention speedup by default. For advanced users, who will be able to dig into the code to figure out why Flash attention is not used, and figure out to use optimum, it will save debugging time. For beginner users, this provides the best speed without any prior knowledge. It will also save the trouble of installing an extra library and perform the conversion. Some considerations: - In terms of availability, `F.multi_head_attention_forward()` has existed for a long time (it goes back to at least PyTorch 1.8, I haven't checked before that). I see from latest main branch that minimum PyTorch version is 1.9 https://github.com/huggingface/transformers/blob/4d40109c3a93c9b8bbca204cb046ed510f1c72e8/setup.py#L176, so this function is definitely available. - In terms of weight-compatibility, `F.multi_head_attention_forward()` supports passing in separate q, k, v projection weights, but input projection weight must be packed together. We can keep the `nn.Linear` modules, and pass the parameters directly to `F.multi_head_attention_forward()`. q, k, v biases will need to be packed into a single tensor (probably with `torch.cat()`). For more details, check https://github.com/pytorch/pytorch/blob/v1.9.0/torch/nn/functional.py#L4836 ### Your contribution This seems like a big change, but I think it should be straight-forward. I'm happy to submit a PR if there are people to help me land this. Do let me know other considerations from HF side that I'm not aware of. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25704/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/25704/timeline
not_planned
null
null
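To make the proposal above concrete, here is a minimal sketch of calling `F.multi_head_attention_forward()` with the separate q/k/v weights of existing `nn.Linear` modules and the biases packed with `torch.cat()`, as the issue describes. Shapes and module names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, num_heads, seq_len = 64, 4, 10
q_proj = nn.Linear(embed_dim, embed_dim)
k_proj = nn.Linear(embed_dim, embed_dim)
v_proj = nn.Linear(embed_dim, embed_dim)
out_proj = nn.Linear(embed_dim, embed_dim)

x = torch.randn(seq_len, 1, embed_dim)  # (seq, batch, embed)
attn_out, _ = F.multi_head_attention_forward(
    x, x, x,
    embed_dim_to_check=embed_dim,
    num_heads=num_heads,
    in_proj_weight=None,  # unused when use_separate_proj_weight=True
    in_proj_bias=torch.cat([q_proj.bias, k_proj.bias, v_proj.bias]),
    bias_k=None,
    bias_v=None,
    add_zero_attn=False,
    dropout_p=0.0,
    out_proj_weight=out_proj.weight,
    out_proj_bias=out_proj.bias,
    use_separate_proj_weight=True,
    q_proj_weight=q_proj.weight,
    k_proj_weight=k_proj.weight,
    v_proj_weight=v_proj.weight,
)
```

Keeping the `nn.Linear` modules and passing their parameters through, as sketched here, is what preserves weight compatibility with existing checkpoints.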
https://api.github.com/repos/huggingface/transformers/issues/25703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25703/comments
https://api.github.com/repos/huggingface/transformers/issues/25703/events
https://github.com/huggingface/transformers/issues/25703
1,864,282,369
I_kwDOCUB6oc5vHrEB
25,703
Llama2 model not loading, stuck in infinite loop
{ "login": "AmritaBh", "id": 43297442, "node_id": "MDQ6VXNlcjQzMjk3NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/43297442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmritaBh", "html_url": "https://github.com/AmritaBh", "followers_url": "https://api.github.com/users/AmritaBh/followers", "following_url": "https://api.github.com/users/AmritaBh/following{/other_user}", "gists_url": "https://api.github.com/users/AmritaBh/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmritaBh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmritaBh/subscriptions", "organizations_url": "https://api.github.com/users/AmritaBh/orgs", "repos_url": "https://api.github.com/users/AmritaBh/repos", "events_url": "https://api.github.com/users/AmritaBh/events{/privacy}", "received_events_url": "https://api.github.com/users/AmritaBh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada and @SunMarc ", "Hi @AmritaBh \r\nin the traceback you shared I see:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"llama2_13b_bnb.py\", line 277, in <module>\r\n generate_ids = model.generate(inputs.input_ids, max_new_tokens=max_tokens, **generation_kwargs)\r\n```\r\nMeaning there might be some missing code in the shared snippet as it seems you are generating some text? ", "( I am mostly curious about the Config being printed 10x)", "Okay I did some more digging and it looks like model **does** load, but the Config gets printed way too many times (I guess multiple times for each call to `model.generate(...)`?) which had me confused that it's stuck in a loop. \r\n\r\nFor my use case, I am iterating over `N` data samples for a particular task I have, calling `model.generate(...)` three times for each data sample, and processing the responses. For my previous runs, `N` was too large (around 1000) and my job was timing out without any results. I tried with a smaller `N=50` and it did finish (albeit quite slow), but the Config gets printed in the log way too many times. Please see the attached log file below for this run. \r\n\r\nThis is making the error logs difficult to read and understand. I assume this is not the expected behavior?\r\n\r\n\r\nIn addition to the model loading code above, here's the generation code structure I am using:\r\n\r\n### Generation code (the basic structure for each inference)\r\n\r\n```python\r\nimport gc\r\n\r\nMAX_TOKENS_CONTEXT = 4096\r\n\r\ngeneration_kwargs = {\r\n \"top_p\": 1.0,\r\n \"temperature\": 0.4,\r\n \"do_sample\": True,\r\n \"repetition_penalty\": 1.1\r\n}\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_auth_token=hf_auth)\r\n\r\ncontext = \"some long prompt with data and instructions\"\r\n\r\ncontext_token_count = count_tokens(context) ## function that uses tiktoken to get approx count of tokens\r\nmax_tokens = MAX_TOKENS_CONTEXT - context_token_count - 128\r\ninputs = tokenizer(context, return_tensors=\"pt\")\r\ngenerate_ids = model.generate(inputs.input_ids, max_new_tokens=max_tokens, **generation_kwargs)\r\ngen_tokens = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\r\nresponse = gen_tokens[0][inputs.input_ids.shape[-1]:]\r\n\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\n\r\n### Log file for new run with 150 `model.generate(...)` calls (`N=50`)\r\n[llama2_13b_error_n50.log](https://github.com/huggingface/transformers/files/12433202/llama2_13b_error_n50.log)\r\n", "Could you share the full script with the loop? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.18.0-348.el8.0.2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: only via `device_map='auto'` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ### Trying to load Llama2 in 4-bit using bnb but model load gets stuck in an infinite loop (check attached error log) Seems like the model config loads properly but the model itself doesn't load at `.from_pretrained(...)`. No errors are thrown, I had to interrupt execution. ```python import torch import transformers transformers.logging.set_verbosity_debug() model_id = 'meta-llama/Llama-2-13b-chat-hf' hf_auth = <MY-HF-AUTH-TOKEN> bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model_config = transformers.AutoConfig.from_pretrained( model_id, force_download=True, token=hf_auth ) model = transformers.AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, config=model_config, quantization_config=bnb_config, device_map='auto', token=hf_auth ) ``` ### Error log file [llama2_13b_error.log](https://github.com/huggingface/transformers/files/12424911/llama2_13b_error.log) ### Additional Hardware Details Executed on two A100s, each with 40G memory. ### Expected behavior Model should load
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25703/timeline
completed
null
null
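A small, hedged note on the thread above: the reproduction script enables `set_verbosity_debug()`, which plausibly explains the model config being re-printed on many internal calls. Lowering the library's verbosity keeps long generation loops readable:

```python
# Hedged workaround for the repeated config dumps discussed above; a quieter
# logging level (or the default) avoids per-call config printing.
import transformers

transformers.logging.set_verbosity_warning()  # or set_verbosity_error()
```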
https://api.github.com/repos/huggingface/transformers/issues/25702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25702/comments
https://api.github.com/repos/huggingface/transformers/issues/25702/events
https://github.com/huggingface/transformers/pull/25702
1,864,247,789
PR_kwDOCUB6oc5YpX2j
25,702
remove SharedDDP as it is deprecated
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger sorry for bothering you, but would you mind taking a look at this PR?", "cc @muellerzr and @pacman100 who will take over the Trainer.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25702). All of your documentation changes will be reflected on that endpoint.", "Rebase my commits to master HEAD to fix the merge conflict.\r\nBTW, Is this PR still under reviewing? Any review suggestions? Please let me know if there is anything else that needs to be done.", "@muellerzr Could you please take a second look at this PR? \r\nSome modifications were made based on the code review comments.", "I've tested this PR using 4xA100-80G on [FastChat](https://github.com/lm-sys/FastChat) with the following scripts\r\n```\r\nCUDA_VISIBLE_DEVICES=4,5,6,7 torchrun --nproc_per_node=4 --master_port=20001 fastchat/train/train_mem.py \\\r\n --model_name_or_path lmsys/vicuna-7b-v1.5 \\\r\n --data_path data/dummy_conversation.json \\\r\n --bf16 True \\\r\n --output_dir output_vicuna \\\r\n --num_train_epochs 1 \\\r\n --max_steps 10 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 1 \\\r\n --evaluation_strategy \"no\" \\\r\n --save_strategy \"steps\" \\\r\n --save_steps 1200 \\\r\n --save_total_limit 10 \\\r\n --learning_rate 2e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.03 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"full_shard offload auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \\\r\n --tf32 True \\\r\n --model_max_length 2048 \\\r\n --gradient_checkpointing False \\\r\n --lazy_preprocess True\r\n\r\n```", "@pacman100 Could you please take a look once again :-) I think it's ready to be merged.", "Resolve merge conflicts by rebasing to main branch", "@pacman100 Thanks for your comments and patience, I've reverted the modifications to the IPEX logic", "Rebasing my commits to master HEAD and resolving merge conflicts", "Hi there. This PR is approved and the tests are green :D\r\ncc @muellerzr and @pacman100 ", "This PR is approved and the tests are green. @muellerzr Could you help to merge it?\r\n", "Thank you @statelesshz!", "Let me rebase quickly and merge if tests are green" ]
1,692
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? As mentioned previously([see](https://github.com/huggingface/transformers/pull/24825)), fairscale's ShardedDDP is deprecated, and PyTorch FSDP is the recommended method for scaling to large NN models. Now it's time to say goodbye to this library👋. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @muellerz Good day. Could you please review this PR? Thanks😄 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerz and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25702/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25702", "html_url": "https://github.com/huggingface/transformers/pull/25702", "diff_url": "https://github.com/huggingface/transformers/pull/25702.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25702.patch", "merged_at": 1696600992000 }
https://api.github.com/repos/huggingface/transformers/issues/25701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25701/comments
https://api.github.com/repos/huggingface/transformers/issues/25701/events
https://github.com/huggingface/transformers/issues/25701
1,864,208,893
I_kwDOCUB6oc5vHZH9
25,701
creating a new env and installing huggingface, torch and running example scripts produces an error
{ "login": "surya-narayanan", "id": 17240858, "node_id": "MDQ6VXNlcjE3MjQwODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surya-narayanan", "html_url": "https://github.com/surya-narayanan", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I suspect there is something wrong with the way you are creating the environnement. I ran this successfully:\r\n```bash \r\nconda create -n myenv python=3.9\r\nconda activate myenv\r\npip install git+https://github.com/huggingface/transformers\r\npip install torch\r\npython -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))\"\r\n```\r\n```bash\r\nNo model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).\r\nUsing a pipeline without specifying a model name and revision in production is not recommended.\r\n[{'label': 'POSITIVE', 'score': 0.9998656511306763}]\r\n```\r\nGiven the error, you don't seem to be using the `myenv` environnement. Did I muss something? (can't reproduce for now) could you share a colab maybe with this bug\r\n", "i think you were right about not using the right env, but after following the exact steps in your code, i get this error: \r\n\r\n```\r\nNo model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).\r\nUsing a pipeline without specifying a model name and revision in production is not recommended.\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/suryahari/miniconda3/envs/myenv/lib/python3.9/site-packages/transformers/pipelines/__init__.py\", line 824, in pipeline\r\n framework, model = infer_framework_load_model(\r\n File \"/home/suryahari/miniconda3/envs/myenv/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 276, in infer_framework_load_model\r\n raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\nValueError: Could not load model distilbert-base-uncased-finetuned-sst-2-english with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification'>).\r\n\r\n```\r\n\r\nit works on a pay-as-you-go server, but not on my lab server, whose system info is pasted above. I will probe further into it, but any thoughts off the top of your head?", "It's my first time seeing this error so no idea haha 😅 Good luck 💪🏻 ", "Closing as #25892 adresses this by printing the correct warning! Feel free to re-open if you don't think this is it " ]
1,692
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.33.0.dev0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @ArthurZucker @younesbelkada @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction running this shell script produced the error ``` y | conda create -n myenv python=3.9 conda activate myenv y | pip install git+https://github.com/huggingface/transformers y | conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia y | pip install chardet python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` should see this error ``` No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). Using a pipeline without specifying a model name and revision in production is not recommended. Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/suryahari/miniconda3/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 824, in pipeline framework, model = infer_framework_load_model( File "/home/suryahari/miniconda3/lib/python3.10/site-packages/transformers/pipelines/base.py", line 276, in infer_framework_load_model raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") ValueError: Could not load model distilbert-base-uncased-finetuned-sst-2-english with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification'>). ``` ### Expected behavior should work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25701/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25700/comments
https://api.github.com/repos/huggingface/transformers/issues/25700/events
https://github.com/huggingface/transformers/issues/25700
1,864,132,065
I_kwDOCUB6oc5vHGXh
25,700
Discrepancy between `LlamaTokenizer` and `LlamaTokenizerFast` outputs
{ "login": "PyroGenesis", "id": 17806916, "node_id": "MDQ6VXNlcjE3ODA2OTE2", "avatar_url": "https://avatars.githubusercontent.com/u/17806916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PyroGenesis", "html_url": "https://github.com/PyroGenesis", "followers_url": "https://api.github.com/users/PyroGenesis/followers", "following_url": "https://api.github.com/users/PyroGenesis/following{/other_user}", "gists_url": "https://api.github.com/users/PyroGenesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/PyroGenesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PyroGenesis/subscriptions", "organizations_url": "https://api.github.com/users/PyroGenesis/orgs", "repos_url": "https://api.github.com/users/PyroGenesis/repos", "events_url": "https://api.github.com/users/PyroGenesis/events{/privacy}", "received_events_url": "https://api.github.com/users/PyroGenesis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "On main I am getting this: \r\n```python \r\nLlamaTokenizer\r\n['<s>', '[', 'INST', ']', '▁<<', 'SY', 'S', '>>', '<0x0A>', 'Test', '▁system', '▁prompt', '<0x0A>', '<', '</', 'SY', 'S', '>>', '<0x0A>', '<0x0A>', 'Test', '▁user', '▁prompt', '.', '▁[', '/', 'INST', ']']\r\n\r\nLlamaTokenizerFast\r\n['<s>', '▁[', 'INST', ']', '▁<<', 'SY', 'S', '>>', '<0x0A>', 'Test', '▁system', '▁prompt', '<0x0A>', '<', '</', 'SY', 'S', '>>', '<0x0A>', '<0x0A>', 'Test', '▁user', '▁prompt', '.', '▁[', '/', 'INST', ']']\r\n```\r\nThe fast version is a bit wrong as it does not benefit from the fix that prevents adding an extra space after special tokens. As you can see: \r\n```diff\r\n'[', 'INST', ']'\r\n+ '▁[', 'INST', ']'\r\n```\r\nYou should not manually add the token but use `add_special_token = True` 😉 ", "Oh yes, I didn't realize that the Fast one is incorrect too. \r\n\r\nAbout adding tokens, I did not add any new tokens, just used the pretrained tokenizer directly. And correct me if I am wrong, but the `add_special_token` is True by default for the tokenizer call right?\r\n\r\nAlso, since the `LlamaTokenizer` on main is working correctly for you, can you tell me how to update to the version on main? Or do I need to wait for a new release?", "What I mean by `add_special_token = True` is that in the snippet you shared, you added the `<s>` token manually. A foolproof way to add it si to use `tokenizer.encode(text, add_special_tokens = True)`. Will produce the same outputs for both the fast and slow 😉 ", "Ok so first of all, I downloaded the bleeding edge of the `transformers` library (using `pip install git+https://github.com/huggingface/transformers`) and now my results from `LlamaTokenizer` match yours:\r\n\r\n```\r\nLlamaTokenizer\r\n['<s>', '[', 'INST', ']', '▁<<', 'SY', 'S', '>>', '<0x0A>', 'Test', '▁system', '▁prompt', '<0x0A>', '<', '</', 'SY', 'S', '>>', '<0x0A>', '<0x0A>', 'Test', '▁user', '▁prompt', '.', '▁[', '/', 'INST', ']']\r\n```\r\n---\r\nI also now understand what you mean by using `add_special_token`. If I run through the below input:\r\n```\r\n[INST] <<SYS>>\r\nTest system prompt\r\n<</SYS>>\r\n\r\nTest user prompt. [/INST]\r\n```\r\nwith the code :\r\n```py\r\ntokenizer = LlamaTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", legacy=True)\r\nprint(tokenizer.decode(tokenizer(input_prompt, add_special_tokens=True).input_ids))\r\n```\r\nI get the output:\r\n```\r\n<s>[INST] <<SYS>>\r\nTest system prompt\r\n<</SYS>>\r\n\r\nTest user prompt. [/INST]\r\n```\r\n\r\nBut if I run the same code with `legacy=False` it puts a space after the `<s>` token instead:\r\n```\r\n<s> [INST] <<SYS>>\r\nTest system prompt\r\n<</SYS>>\r\n\r\nTest user prompt. [/INST]\r\n```\r\nIs this intentional? I'm just trying to follow [the prompt format for llama2](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) as closely as possible\r\n\r\n---\r\n\r\nLastly `LlamaTokenizerFast` seems to add a space after the `<s>` token, no matter whether I use `legacy=True` or `legacy=False`.\r\nIs this intentional too?", "Few things here. \r\nWhat you should be looking at is not `tokenizer.decode(tokenizer.encode(xxx))` but the `input_ids`. In this case you'll see that the extra space is not part of the `input_ids`. It is added by the tokenizer when decoding. You should notice that it is added both by the fast and the slow tokenizer. \r\n\r\nIs this expected? Yes:\r\n- when you add special token, you actually add the ids directly, instead of adding the tokens before the prompt. 
This means that the prompt is tokenizer with an additional prefix space. While I do understand this is not exactly what you want, the actual prompt format need the extra space. See [here](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L244)\r\n```python\r\n self.tokenizer.encode(f\"{B_INST} {(prompt['content']).strip()} {E_INST} {(answer['content']).strip()} \",bos=True,eos=True)\r\n```\r\n\r\nSo the blog is a little bit missleading but the code in `transformers` also uses this (see [here](https://github.com/ArthurZucker/transformers/blob/d7732c60c9fc61201b8a120a836e27fb3d8b3577/src/transformers/models/llama/tokenization_llama.py#L429-L432))\r\n\r\nLastly, the `fast` tokenizer does not use the `legacy` flag (yet 😉 )", "I think I understand it correctly now. That extra space is the difference of how the decoder works rather than a difference in the encoder. I can confirm using this code:\r\n```py\r\ntokenizer = LlamaTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", legacy=False)\r\ntokenizer_legacy = LlamaTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", legacy=True)\r\ntokenizer_fast = LlamaTokenizerFast.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\n\r\nprint('LlamaTokenizer')\r\nprint(tokenizer(input_prompt, add_special_tokens=True).input_ids)\r\nprint('\\nLlamaTokenizer Legacy')\r\nprint(tokenizer(input_prompt, add_special_tokens=True).input_ids)\r\nprint('\\nLlamaTokenizerFast')\r\nprint(tokenizer(input_prompt, add_special_tokens=True).input_ids)\r\n```\r\n\r\nwhich gives me the same output for all tokenizers:\r\n```\r\nLlamaTokenizer\r\n[1, 518, 25580, 29962, 3532, 14816, 29903, 6778, 13, 3057, 1788, 9508, 13, 29966, 829, 14816, 29903, 6778, 13, 13, 3057, 1404, 9508, 29889, 518, 29914, 25580, 29962]\r\n\r\nLlamaTokenizer Legacy\r\n[1, 518, 25580, 29962, 3532, 14816, 29903, 6778, 13, 3057, 1788, 9508, 13, 29966, 829, 14816, 29903, 6778, 13, 13, 3057, 1404, 9508, 29889, 518, 29914, 25580, 29962]\r\n\r\nLlamaTokenizerFast\r\n[1, 518, 25580, 29962, 3532, 14816, 29903, 6778, 13, 3057, 1788, 9508, 13, 29966, 829, 14816, 29903, 6778, 13, 13, 3057, 1404, 9508, 29889, 518, 29914, 25580, 29962]\r\n```\r\n\r\nThanks for also letting me know that the actual prompt format does have the prefix space, it just wasn't present in the blog (because the blog was written during the legacy decoder behavior?)\r\n\r\nThank you for taking the time to explain all of this to me. I'm a little new to tokenizers and I appreciate it!" ]
1,692
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.32.0 - Platform: Windows-10-10.0.20348-SP0 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It seems like the `LlamaTokenizer` and `LlamaTokenizerFast` tokenize text differently. Specifically, it looks like the `LlamaTokenizer` eats some characters instead of tokenizing them. Code: ```py import torch from transformers import LlamaForCausalLM, LlamaTokenizer, LlamaTokenizerFast # I tried legacy=False here too, the results are the same tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") tokenizer_fast = LlamaTokenizerFast.from_pretrained("meta-llama/Llama-2-7b-chat-hf") # This functions transforms the user prompt into the prompt template mentioned here: # https://huggingface.co/blog/llama2#how-to-prompt-llama-2 def createPromptTemplate(prompt): system_prompt = 'Test system prompt' return '\n'.join([ "<s>[INST] <<SYS>>", system_prompt, "<</SYS>>", "", f"{prompt} [/INST]" ]) input_prompt = createPromptTemplate("Test user prompt.") print("Input:") print(input_prompt) print('\nLlamaTokenizer') print(tokenizer.tokenize(input_prompt)) print('\nLlamaTokenizerFast') print(tokenizer_fast.tokenize(input_prompt)) ``` Output: ``` Input: <s>[INST] <<SYS>> Test system prompt <</SYS>> Test user prompt. [/INST] LlamaTokenizer ['<s>', 'INST', ']', '▁<<', 'SY', 'S', '>>', '<0x0A>', 'Test', '▁system', '▁prompt', '<0x0A>', '<', '</', 'SY', 'S', '>>', '<0x0A>', '<0x0A>', 'Test', '▁user', '▁prompt', '.', '▁[', '/', 'INST', ']'] LlamaTokenizerFast ['<s>', '▁[', 'INST', ']', '▁<<', 'SY', 'S', '>>', '<0x0A>', 'Test', '▁system', '▁prompt', '<0x0A>', '<', '</', 'SY', 'S', '>>', '<0x0A>', '<0x0A>', 'Test', '▁user', '▁prompt', '.', '▁[', '/', 'INST', ']'] ``` ### Expected behavior I expected the tokenization of `LlamaTokenizer` to keep all the characters in the input.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25700/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25699/comments
https://api.github.com/repos/huggingface/transformers/issues/25699/events
https://github.com/huggingface/transformers/pull/25699
1,864,073,915
PR_kwDOCUB6oc5YozBF
25,699
Added max_length warning to pretrained tokenizer initialization
{ "login": "grantslewis", "id": 61807213, "node_id": "MDQ6VXNlcjYxODA3MjEz", "avatar_url": "https://avatars.githubusercontent.com/u/61807213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grantslewis", "html_url": "https://github.com/grantslewis", "followers_url": "https://api.github.com/users/grantslewis/followers", "following_url": "https://api.github.com/users/grantslewis/following{/other_user}", "gists_url": "https://api.github.com/users/grantslewis/gists{/gist_id}", "starred_url": "https://api.github.com/users/grantslewis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/grantslewis/subscriptions", "organizations_url": "https://api.github.com/users/grantslewis/orgs", "repos_url": "https://api.github.com/users/grantslewis/repos", "events_url": "https://api.github.com/users/grantslewis/events{/privacy}", "received_events_url": "https://api.github.com/users/grantslewis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
# What does this PR do? A truncation warning, "Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.", while not incorrect (max_length is a parameter for running the tokenize function), could also be interpreted as suggesting 'max_length' is a parameter used across the tokenizer class. However, 'max_length' silently does nothing when it is passed into AutoTokenizer.from_pretrained(), for example, and the previously mentioned warning will still occur. The proposed addition provides a simple warning when the 'model_max_length' is None and 'max_length' is passed in as an argument. The warning suggests the correct parameter name instead. This is meant to help provide a warning to direct developers to the correct variable name, helping prevent any unnecessary troubleshooting. This change should not require any additional tests or dependencies. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. CC @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25699/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25699", "html_url": "https://github.com/huggingface/transformers/pull/25699", "diff_url": "https://github.com/huggingface/transformers/pull/25699.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25699.patch", "merged_at": null }
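For reference, a minimal sketch of the kind of check this PR proposes. The function and variable names here are hypothetical illustrations, not the actual patch:

```py
import logging

logger = logging.getLogger(__name__)

# Hypothetical helper: warn at tokenizer-initialization time when a user
# passes `max_length` (an argument of the tokenization call) where only
# `model_max_length` would have an effect.
def warn_on_max_length_kwarg(init_kwargs: dict, model_max_length=None) -> None:
    if model_max_length is None and "max_length" in init_kwargs:
        logger.warning(
            "`max_length` is not a tokenizer initialization argument and will "
            "be ignored; did you mean `model_max_length`?"
        )

warn_on_max_length_kwarg({"max_length": 512})  # triggers the warning
```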
https://api.github.com/repos/huggingface/transformers/issues/25698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25698/comments
https://api.github.com/repos/huggingface/transformers/issues/25698/events
https://github.com/huggingface/transformers/pull/25698
1,864,047,479
PR_kwDOCUB6oc5YotIy
25,698
Added max_length warning to pretrained tokenizer initialization
{ "login": "grantslewis", "id": 61807213, "node_id": "MDQ6VXNlcjYxODA3MjEz", "avatar_url": "https://avatars.githubusercontent.com/u/61807213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grantslewis", "html_url": "https://github.com/grantslewis", "followers_url": "https://api.github.com/users/grantslewis/followers", "following_url": "https://api.github.com/users/grantslewis/following{/other_user}", "gists_url": "https://api.github.com/users/grantslewis/gists{/gist_id}", "starred_url": "https://api.github.com/users/grantslewis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/grantslewis/subscriptions", "organizations_url": "https://api.github.com/users/grantslewis/orgs", "repos_url": "https://api.github.com/users/grantslewis/repos", "events_url": "https://api.github.com/users/grantslewis/events{/privacy}", "received_events_url": "https://api.github.com/users/grantslewis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,692
1,692
1,692
NONE
null
# What does this PR do? A truncation warning, "Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.", while not incorrect (max_length is a parameter for running the tokenize function), could also be interpreted as suggesting 'max_length' is a parameter used across the tokenizer class. However, 'max_length' silently does nothing when it is passed into AutoTokenizer.from_pretrained(), for example, and the previously mentioned warning will still occur. The proposed addition provides a simple warning when the 'model_max_length' is None and 'max_length' is passed in as an argument. The warning suggests the correct parameter name instead. This is meant to help provide a warning to direct developers to the correct variable name, helping prevent any unnecessary troubleshooting. This change should not require any additional tests or dependencies. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. CC @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25698/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25698", "html_url": "https://github.com/huggingface/transformers/pull/25698", "diff_url": "https://github.com/huggingface/transformers/pull/25698.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25698.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25697/comments
https://api.github.com/repos/huggingface/transformers/issues/25697/events
https://github.com/huggingface/transformers/pull/25697
1,863,927,790
PR_kwDOCUB6oc5YoTJ9
25,697
[WIP] Implementation of SuperGlue
{ "login": "sbucaille", "id": 24275548, "node_id": "MDQ6VXNlcjI0Mjc1NTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbucaille", "html_url": "https://github.com/sbucaille", "followers_url": "https://api.github.com/users/sbucaille/followers", "following_url": "https://api.github.com/users/sbucaille/following{/other_user}", "gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions", "organizations_url": "https://api.github.com/users/sbucaille/orgs", "repos_url": "https://api.github.com/users/sbucaille/repos", "events_url": "https://api.github.com/users/sbucaille/events{/privacy}", "received_events_url": "https://api.github.com/users/sbucaille/received_events", "type": "User", "site_admin": false }
[ { "id": 5724035499, "node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub", "name": "Model on the Hub", "color": "9CA0E9", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hey! Thanks a lot for contributing! 🚀 as a first step I would suggest you to upload the model on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models), as it will make the process a lot easier, and won't need to go through our strict CIs! 🤗 ", "Hi !\r\n\r\nOk, I have a first question regarding this specific model.\r\nSo SuperGlue is a keypoint matching model which, given keypoints, returns a matching of each of them.\r\nBut the keypoints this model relies on are given by another model, SuperPoint, which I introduced in the original issue.\r\nSuperGlue as a model does not really make sense without SuperPoint, but also SuperPoint can be interpreted as a complete different model which given an image, returns the list of keypoints detected. \r\nMoreover, SuperPoint is used in other models of the SoTA (like [LightGlue](https://arxiv.org/pdf/2306.13643.pdf), which is an evolution of SuperGlue, and which I also plan on implementing in transformers).\r\n\r\nThe question now : should I add the implementation of SuperPoint in the SuperGlue code and consider this combo SuperPoint + SuperGlue as the SuperGlue model in transformers (and also calling the class SuperGlueSuperPoint to fit the naming convention), or should I add SuperPoint as another model, standalone, part of transformers ?\r\n\r\nI myself can't really tell what would be best since : \r\n- SuperGlue without SuperPoint (so considered standalone) is not really \"usable\", in practice it would require the user to have the keypoints itself but also from a test point of view, how can I verify the matching of SuperGlue without knowing what are the original images the keypoints are from ?\r\n- SuperPoint as a standalone model is useful since it provides the \"image to keypoints\" pipeline and is reused in other models like the aforementioned LightGlue. Also, I noticed as \"First Good Issue\" mentions of pipeline (like [this one](https://github.com/huggingface/transformers/issues/25349)) and it gave me ideas of an image matching pipeline implementation such as \"SuperPoint + SuperGlue\" or \"SuperPoint + LightGlue\" or \"[DISK](https://arxiv.org/pdf/2006.13566.pdf) + LightGlue\" (LightGlue was also tested with DISK which is another keypoint localizer) where 2 images are given and we obtain a matching.\r\n\r\nHope the question is clear and also since I'm new to all this \"collaborating\" thing on GitHub, let me know if this kind of questions should belong here or somewhere else.\r\n\r\nThanks again for considering my contribution !\r\n\r\nEDIT : Also what is the difference between a model on the hub and a model added in the transformers library, I got confused by the existence of both [this page](https://huggingface.co/docs/transformers/main/add_new_model) and [this page](https://huggingface.co/docs/transformers/custom_models)", "cc @amyeroberts I need your take on this 😄 ", "Hi,\r\n\r\nIn the meantime I added the basics for the implementation of SuperGlue by following the example of the tutorial you provided me earlier. I also looked around other models on how conversion scripts were implemented and mimicked it for the SuperGlue case.\r\nRegardless of what is decided for the SuperPoint part, this code is the necessary minimum. It yet needs to be tested but without knowing what we should do with the SuperPoint part I preferred to stick what that.", "@sbucaille Thanks for the detailed explanation about the two models! The model PR is a good place to ask questions about implementation :) \r\n\r\nAs both models, SuperPoint and SuperGlue offer new capabilities and are very popular, I would consider them good additions directly into transformers. However, as Arthur mentions, this would involve going through the PR review process which will be slower and more restrictive than adding directly onto the hub. If you want to go straight to the hub - you can decide how you would like to add the model! \r\n\r\nIf adding to transformers, what I would suggest is implementing SuperPoint as its own model and PR with task models e.g.`SuperPointForInterestPointDescription` (we can settle on a name later). I wouldn't add a separate MagicPoint model. In that PR, we can also add a mapping `AutoModelForInterestPointDescription`, which we define as taking two images and returning interest keypoints and their descriptions. \r\n\r\nThen we can implement SuperGlue. Similar to [e.g. MusicGen](https://github.com/huggingface/transformers/blob/4d9e45f3ef624cab41f605d7439862ce23ca806a/src/transformers/models/musicgen/modeling_musicgen.py#L1487) we can have SuperGlue load in any keypoint detection model using AutoModelForPointCorrespondence. \r\n\r\nThen, if in the future you wanted to add DISK, SuperGlue could load either DISK or SuperPoint. Likewise, if you wanted to add LightGlue, it can then load DISK or SuperPoint using the same AutoModelForPointCorrespondence structure. \r\n\r\nThe important thing is that all the models being loaded using AutoModelForXxx have the same input / output structure. ", "I created the PR for the SuperPoint implementation. \r\nThe main reason I'm doing this is to learn, so of course I am willing to go through the PR review process ! :smile: \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sbucaille I'll leave this closed for now so we don't have to keep re-opening every week. I know you're off for a few weeks - just ping when you're back and I can reopen again then" ]
1,692
1,701
1,700
NONE
null
# What does this PR do? This PR implements the SuperGlue model. https://github.com/huggingface/transformers/issues/25489 ## Who can review? @amyeroberts ## Todo's - [x] Adapt the template code to match SuperGlue - [x] Write a conversion script - [ ] Adding model tests - [ ] Add docstring - [ ] Upload models to hub
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25697/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25697", "html_url": "https://github.com/huggingface/transformers/pull/25697", "diff_url": "https://github.com/huggingface/transformers/pull/25697.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25697.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25696/comments
https://api.github.com/repos/huggingface/transformers/issues/25696/events
https://github.com/huggingface/transformers/issues/25696
1,863,925,078
I_kwDOCUB6oc5vGT1W
25,696
bug in loading gpt 2 in a pipeline
{ "login": "surya-narayanan", "id": 17240858, "node_id": "MDQ6VXNlcjE3MjQwODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surya-narayanan", "html_url": "https://github.com/surya-narayanan", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "i have a feeling that this error was created when an env was created, transformers was installed, (the code worked fine), diffusers was installed, (and the code broke). ", "Both colabs are private so can't reproduce, but thanks for reporting! ", "fwiw - anecdotal evidence suggests that this happens when disks are full - this behavior was likely never observed in colabs / testing environments. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ``` Cell In[5], line 23 20 query_model_pipelines = [] 22 for query_model_name in query_model_names: ---> 23 query_model_pipelines.append(pipeline(model=query_model_name, 24 trust_remote_code=True, 25 max_new_tokens= 500, 26 device_map="auto", 27 torch_dtype=torch.bfloat16, 28 )) File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/transformers/pipelines/__init__.py:793](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/QA/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/transformers/pipelines/__init__.py:793), in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 791 if isinstance(model, str) or framework is None: 792 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]} --> 793 framework, model = infer_framework_load_model( 794 model, 795 model_classes=model_classes, 796 config=config, 797 framework=framework, 798 task=task, 799 **hub_kwargs, 800 **model_kwargs, 801 ) 803 model_config = model.config 804 hub_kwargs["_commit_hash"] = model.config._commit_hash File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/transformers/pipelines/base.py:276](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/QA/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/transformers/pipelines/base.py:276), in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs) 273 continue 275 if isinstance(model, str): --> 276 raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") 278 if framework is None: 279 framework = infer_framework(model.__class__) ValueError: Could not load model gpt2 with any of the following classes: (, ). ``` ### Who can help? @younesbelkada @ArthurZucker @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction can't reproduce :( https://colab.research.google.com/drive/19CmvYMZqQqBDfp-I1_oh4ImvcFynJygI#scrollTo=Ib94uj4w8zH7 ### Expected behavior should work as here: https://colab.research.google.com/drive/19CmvYMZqQqBDfp-I1_oh4ImvcFynJygI#scrollTo=Ib94uj4w8zH7
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25696/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25696/timeline
completed
null
null
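A workaround pattern for errors like the one above, assuming the checkpoint itself is intact: load the model and tokenizer explicitly so the underlying exception (for example a truncated download on a full disk, as noted in the comments) is raised directly instead of the generic pipeline error. This is a sketch, not code from the issue:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)

# Building the pipeline from already-loaded objects skips the framework
# inference step that produces the opaque "Could not load model" message.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello, world", max_new_tokens=20)[0]["generated_text"])
```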
https://api.github.com/repos/huggingface/transformers/issues/25695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25695/comments
https://api.github.com/repos/huggingface/transformers/issues/25695/events
https://github.com/huggingface/transformers/issues/25695
1,863,922,183
I_kwDOCUB6oc5vGTIH
25,695
tensor size mismatch with larger gradient_accumulation_steps and fewer training examples
{ "login": "yyymeta", "id": 123776235, "node_id": "U_kgDOB2Cs6w", "avatar_url": "https://avatars.githubusercontent.com/u/123776235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yyymeta", "html_url": "https://github.com/yyymeta", "followers_url": "https://api.github.com/users/yyymeta/followers", "following_url": "https://api.github.com/users/yyymeta/following{/other_user}", "gists_url": "https://api.github.com/users/yyymeta/gists{/gist_id}", "starred_url": "https://api.github.com/users/yyymeta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yyymeta/subscriptions", "organizations_url": "https://api.github.com/users/yyymeta/orgs", "repos_url": "https://api.github.com/users/yyymeta/repos", "events_url": "https://api.github.com/users/yyymeta/events{/privacy}", "received_events_url": "https://api.github.com/users/yyymeta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! If you want help from our trainer expert @pacman100 we're gonna need to have a look at the training script or at least have a reproducer.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, I have encountered the same problem, fewer training examples and adam optimizer. May I ask if you have resolved it? How was it resolved?" ]
1,692
1,708
1,697
NONE
null
### System Info A100 Nvidia 80G GPU ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction it seems that when I have fewer training examples (1000 or so) and use a larger gradient_accumulation_steps (32), I get a tensor size mismatch on the Adam gradient update: ``` \"/tmp/jetter.yrcudeja/torch/optim/optimizer.py\", line 33, in _use_grad\n ret = func(self, *args, **kwargs)\n File \"/tmp/jetter.yrcudeja/torch/optim/adamw.py\", line 173, in step\n adamw(\n File \"/tmp/jetter.yrcudeja/torch/optim/adamw.py\", line 323, in adamw\n func(\n File \"/tmp/jetter.yrcudeja/torch/optim/adamw.py\", line 502, in _multi_tensor_adamw\n torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1)\nRuntimeError: The size of tensor a (8192384) must match the size of tensor b (262156288) at non-singleton dimension 0\n", "errorTraits": null, "timestamp_us": 1692818123557766} [4]: File "/usr/local/fbcode/platform010/lib/python3.8/runpy.py", line 194, in _run_module_as_main [4]: return _run_code(code, main_globals, None, [4]: File "/usr/local/fbcode/platform010/lib/python3.8/runpy.py", line 87, in _run_code [4]: exec(code, run_globals) [4]: File "/tmp/jetter.12kzp8qf/aml/comment/llama_finetune/train.py", line 150, in <module> [4]: train() [4]: File "/tmp/jetter.12kzp8qf/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper [4]: return f(*args, **kwargs) [4]: File "/tmp/jetter.12kzp8qf/aml/comment/llama_finetune/train.py", line 124, in train [4]: trainer.train() [4]: File "/tmp/jetter.12kzp8qf/transformers/trainer.py", line 1664, in train [4]: return inner_training_loop( [4]: File "/tmp/jetter.12kzp8qf/transformers/trainer.py", line 1998, in _inner_training_loop [4]: self.optimizer.step() [4]: File "/tmp/jetter.12kzp8qf/torch/optim/lr_scheduler.py", line 69, in wrapper [4]: return wrapped(*args, **kwargs) [4]: File "/tmp/jetter.12kzp8qf/torch/optim/optimizer.py", line 280, in wrapper [4]: out = func(*args, **kwargs) [4]: File "/tmp/jetter.12kzp8qf/torch/optim/optimizer.py", line 33, in _use_grad [4]: ret = func(self, *args, **kwargs) [4]: File "/tmp/jetter.12kzp8qf/torch/optim/adamw.py", line 173, in step [4]: adamw( [4]: File "/tmp/jetter.12kzp8qf/torch/optim/adamw.py", line 323, in adamw [4]: func( [4]: File "/tmp/jetter.12kzp8qf/torch/optim/adamw.py", line 502, in _multi_tensor_adamw [4]: torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1) [4]:RuntimeError: The size of tensor a (8192384) must match the size of tensor b (262156288) at non-singleton dimension 0 [7]:ERROR:aiplatform.error_reporting.error_reporting:Exception Found: The size of tensor a (8192384) must match the size of tensor b (262156288) at non-singleton dimension 0 ``` using gradient_accumulation_steps=1 fixes it, but then it causes some impact on model quality command was /packages/torchx_python/python -m torch.distributed.run --rdzv_backend zeus --rdzv_id torchx-llama_finetune_train-k64xgt1h4dt52c --nnodes 4 --nproc_per_node 8 --tee 3 --role -m aml.comment.llama_finetune.train --local_dir /tmp/users --model_manifold_bucket pi_adv_problems --model_manifold_dir tree/dpa_llama --input_model_filename 7B-converted --output_model_filename yytest__v7_instagram_basic_5e-6 --data_path manifold://pi_adv_problems/tree/appreview_llama/data/v7/train__instagram_basic.json --eval_data_path manifold://pi_adv_problems/tree/appreview_llama/data/v7/eval__instagram_basic.json --data_task generic --prompt_temp normal --processed True --model_max_length 1024 --num_train_epochs 30 --per_device_train_batch_size 2 --per_device_eval_batch_size 8 --gradient_accumulation_steps 32 --evaluation_strategy steps --eval_steps 10 --save_strategy steps --save_steps 200 --save_total_limit 1 --learning_rate 5e-6 --weight_decay 0. --warmup_ratio 0.03 --lr_scheduler_type cosine --logging_steps 50 --fsdp full_shard auto_wrap --fsdp_transformer_layer_cls_to_wrap LlamaDecoderLayer --bf16 True --tf32 True ### Expected behavior note that the above error shows yyy@yyy-mbp ~ % echo '262156288/8192384'|bc -l 32.00000000000000000000 so somehow it seems only one gradient is obtained while maybe 32 are expected?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25695/timeline
completed
null
null
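The factor the reporter checks with `bc` can be verified directly; the two mismatched tensor sizes from the traceback differ by exactly the gradient_accumulation_steps value used in the run:

```py
# 262156288 elements (tensor b) vs 8192384 elements (tensor a): a factor of
# exactly 32, matching --gradient_accumulation_steps 32 from the command line.
assert 262156288 / 8192384 == 32.0
print(262156288 // 8192384)  # 32
```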
https://api.github.com/repos/huggingface/transformers/issues/25694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25694/comments
https://api.github.com/repos/huggingface/transformers/issues/25694/events
https://github.com/huggingface/transformers/issues/25694
1,863,822,632
I_kwDOCUB6oc5vF60o
25,694
ValueError: Unsupported number of image dimensions: 2 - An error during embedding Image data
{ "login": "UmarIgan", "id": 38042220, "node_id": "MDQ6VXNlcjM4MDQyMjIw", "avatar_url": "https://avatars.githubusercontent.com/u/38042220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UmarIgan", "html_url": "https://github.com/UmarIgan", "followers_url": "https://api.github.com/users/UmarIgan/followers", "following_url": "https://api.github.com/users/UmarIgan/following{/other_user}", "gists_url": "https://api.github.com/users/UmarIgan/gists{/gist_id}", "starred_url": "https://api.github.com/users/UmarIgan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UmarIgan/subscriptions", "organizations_url": "https://api.github.com/users/UmarIgan/orgs", "repos_url": "https://api.github.com/users/UmarIgan/repos", "events_url": "https://api.github.com/users/UmarIgan/events{/privacy}", "received_events_url": "https://api.github.com/users/UmarIgan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @amyeroberts and @rafaelpadilla ", "Hi @UmarIgan \r\n\r\nThank you for bringing this to our attention!\r\n\r\nI've tested your code and indeed, I've encountered the same error. I'm on it and will work towards a solution.\r\n", "Thanks @rafaelpadilla \r\nAs I understand vision transformers also can't encode grayscale of images as well, I tried to wrap around the dataset - tried to transform image to add a new channel but no go. Is there a way to overcome this?" ]
1,692
1,706
null
NONE
null
### System Info I am facing an issue while encoding an image dataset using facebook/dino-vits16. I faced this issue with grayscale images before too, but it worked well with the Bingsu/Human_Action_Recognition dataset. Versions ``` transformers==4.32.0 torch==2.0.1+cu118 datasets==2.14.4 ``` The error: ``` Some weights of ViTModel were not initialized from the model checkpoint at facebook/dino-vits16 and are newly initialized: ['pooler.dense.weight', 'pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Map: 0% 2/10000 [00:00<40:18, 4.13 examples/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-30-0547920c10ef>](https://localhost:8080/#) in <cell line: 22>() 20 return batch 21 ---> 22 dataset_train = dataset_train.map(get_embeddings) 8 frames [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3095 desc=desc or "Map", 3096 ) as pbar: -> 3097 for rank, done, content in Dataset._map_single(**dataset_kwargs): 3098 if done: 3099 shards_done += 1 [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 3448 _time = time.time() 3449 for i, example in shard_iterable: -> 3450 example = apply_function_on_filtered_inputs(example, i, offset=offset) 3451 if update_data: 3452 if i == 0: [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset) 3351 if with_rank: 3352 additional_args += (rank,) -> 3353 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 3354 if isinstance(processed_inputs, LazyDict): 3355 processed_inputs = { [<ipython-input-30-0547920c10ef>](https://localhost:8080/#) in get_embeddings(batch) 14 15 def get_embeddings(batch): ---> 16 inputs = processor(images=batch['image'], return_tensors="pt").to(device) 17 with torch.no_grad(): 18 outputs = model(**inputs).last_hidden_state.mean(dim=1).cpu().numpy() [/usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils.py](https://localhost:8080/#) in __call__(self, images, **kwargs) 544 def __call__(self, images, **kwargs) -> BatchFeature: 545 """Preprocess an image or a batch of images.""" --> 546 return self.preprocess(images, **kwargs) 547 548 def preprocess(self, images, **kwargs) -> BatchFeature: [/usr/local/lib/python3.10/dist-packages/transformers/models/vit/image_processing_vit.py](https://localhost:8080/#) in preprocess(self, images, do_resize, size, resample, do_rescale, rescale_factor, do_normalize, image_mean, image_std, return_tensors, data_format, input_data_format, **kwargs) 232 if input_data_format is None: 233 # We assume that all images have the same channel dimension format. --> 234 input_data_format = infer_channel_dimension_format(images[0]) 235 236 if do_resize: [/usr/local/lib/python3.10/dist-packages/transformers/image_utils.py](https://localhost:8080/#) in infer_channel_dimension_format(image, num_channels) 168 first_dim, last_dim = 1, 3 169 else: --> 170 raise ValueError(f"Unsupported number of image dimensions: {image.ndim}") 171 172 if image.shape[first_dim] in num_channels: ValueError: Unsupported number of image dimensions: 2 ``` ### Who can help? @amyeroberts ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import ViTImageProcessor, ViTModel from datasets import load_dataset, Dataset import torch dataset_train = load_dataset( 'ashraq/fashion-product-images-small', split='train[:10000]' ) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = ViTImageProcessor.from_pretrained('facebook/dino-vits16') model = ViTModel.from_pretrained('facebook/dino-vits16') def get_embeddings(batch): inputs = processor(images=batch['image'], return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs).last_hidden_state.mean(dim=1).cpu().numpy() batch['embeddings'] = outputs return batch dataset_train = dataset_train.map(get_embeddings) ``` ### Expected behavior Expected behavior was to obtain embeddings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25694/timeline
null
null
null
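One possible workaround for the issue above, assuming the dataset yields PIL images as `datasets` does by default: convert every image to RGB before calling the processor, so grayscale (2-D) images get the three channels the channel-format inference expects. This is a sketch, not an official fix:

```py
import torch
from transformers import ViTImageProcessor, ViTModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = ViTImageProcessor.from_pretrained("facebook/dino-vits16")
model = ViTModel.from_pretrained("facebook/dino-vits16").to(device)

def get_embeddings(example):
    # Grayscale PIL images have no channel dimension (ndim == 2), which
    # infer_channel_dimension_format cannot handle; converting to RGB gives
    # every image the expected 3 channels before preprocessing.
    image = example["image"].convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs).last_hidden_state.mean(dim=1).cpu().numpy()
    example["embeddings"] = outputs[0]
    return example
```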
https://api.github.com/repos/huggingface/transformers/issues/25693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25693/comments
https://api.github.com/repos/huggingface/transformers/issues/25693/events
https://github.com/huggingface/transformers/pull/25693
1,863,744,026
PR_kwDOCUB6oc5YnrWy
25,693
Add Seamless M4T model
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @sanchit-gandhi, thanks for your thorough review! I've addressed or answered almost every comment, except your request on the nested configuration. \r\n\r\nFor the moment being, I'd rather use a formatting with clear delimitation, a bit like the one you can find on the [PretrainedConfig doc](https://huggingface.co/docs/transformers/v4.33.2/en/main_classes/configuration#transformers.PretrainedConfig). Here is how it looks like with my config: [docs](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25693/en/model_doc/seamless_m4t#transformers.SeamlessM4TConfig). I believe that it renders pretty well, and avoid the complexity of adding a nested config to an already pretty heavy PR. WDYT ?\r\n\r\nI would really like @ArthurZucker point of view on that! This is also the opportunity to ask @ArthurZucker for a review!\r\n\r\nEDIT: Rebased and modified on the new tokenizer PR (#23909)", "Hi @ydshieh! \r\n\r\nIt seems that the [documentation tests](https://github.com/huggingface/transformers/pull/26212) fail because it tries running `src/transformers/models/seamless_m4t/convert_fairseq2_to_hf.py`.\r\nI saw that you addressed a similar issue in #26212, and I was wondering if that was a normal behavior ! I believe that precedent PRs didn't try running the converting script.\r\n\r\nI'll update `utils/not_doctested.txt` if this is the expected fix!\r\n", "Hi\r\n\r\nUsually, if the script has `__main__`, it should be fine. But here is an import error, and put it under `utils/not_doctested.txt` is the way to go.", "Hi @ydshieh, thanks for your help. The script had `__main__` but was still throwing some errors, so I added it to `utils/not_doctested.txt`", "Following #26182 merging, I've updated the FE, thus removing dependency to torchaudio. \r\n\r\nBTW, this is also a gentle reminder to review this PR when you have time @ArthurZucker :hugs: ", "Hey @sanchit-gandhi , thanks for your review, I've corrected everything!", "Thanks for the quick review, I'll address most your comments this morning.\r\n\r\nI still have this pending questions if you have time:\r\n- https://github.com/huggingface/transformers/pull/25693#discussion_r1350681175\r\n- That one's for @ydshieh : https://github.com/huggingface/transformers/pull/25693#discussion_r1350682649\r\n\r\n", "Hey @ArthurZucker, your suggestions made it even cleaner! I've also added some copied from statements! Thanks for that!\r\n\r\nDo you think you can review a bit more thoroughly the modeling code soon ?", "on it tomorrow! ", "Thanks for the last review @ArthurZucker! \r\nTwo last things to address before merging:\r\n1.~~`SeamlessM4TConfig` formatting: https://github.com/huggingface/transformers/pull/25693#discussion_r1362184879 - cc @ydshieh , could you take a look at it?~~\r\n2. ~~Assisted generation test: I can't seem to pass `SeamlessM4TModelWithTextInputTest::test_assisted_decoding_sample` and this might be out of my scope. Could you take a look at it @gante ? It's basically testing with SeamlessM4TForTextToText which is a basic seq2seq model!~~ [UPDATE: ignore test after offline discusions]" ]
1,692
1,698
1,698
COLLABORATOR
null
# What does this PR do? Meta recently introduced [Seamless M4T](https://ai.meta.com/blog/seamless-m4t/), a collection of models designed to provide high-quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. SeamlessM4T supports multiple audio and/or translation tasks, namely S2TT, S2ST, T2TT, T2ST, where the last T stands for translation. In other words, this model _seamlessly_ supports audio|text to translated audio|text. SeamlessM4T weights are already available on the hub ([large](https://huggingface.co/facebook/seamless-m4t-large) and [medium](https://huggingface.co/facebook/seamless-m4t-medium)) and the code is available on the [seamless_communication git repo](https://github.com/facebookresearch/seamless_communication). In terms of architecture, and after having discussed with @sanchit-gandhi, I've come up with 4 different models for the 4 tasks and one model that can do each task. I've been working on the integration for a couple of days already. At the moment, the conversion script is more or less ready and the different models can generate. Here is a TODO of what's left to be done: - [x] Agree on the current architecture and some modeling details (for example what outputs) - [x] Integrate feature extraction (fbank) and tokenizer (similar to NLLB) - [x] Integrate their vocoder - [x] Write and format docstrings - [x] Do integration tests - there's probably some modeling discrepancy at the moment (except for the speech encoder with a one-to-one correspondence) - [x] Finish regular tests - [x] There's probably some work to be done for the optimal generation config cc @sanchit-gandhi and @ArthurZucker !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25693/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25693/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25693", "html_url": "https://github.com/huggingface/transformers/pull/25693", "diff_url": "https://github.com/huggingface/transformers/pull/25693.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25693.patch", "merged_at": 1698065388000 }
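For context on what the finished integration looks like, here is a hedged text-to-text usage sketch based on the merged API; the checkpoint name is an assumption and should be replaced by whichever converted checkpoint ends up on the Hub:

```py
from transformers import AutoProcessor, SeamlessM4TModel

checkpoint = "facebook/hf-seamless-m4t-medium"  # assumed Hub id
processor = AutoProcessor.from_pretrained(checkpoint)
model = SeamlessM4TModel.from_pretrained(checkpoint)

# T2TT: translate English text to French, skipping speech generation.
inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```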
https://api.github.com/repos/huggingface/transformers/issues/25692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25692/comments
https://api.github.com/repos/huggingface/transformers/issues/25692/events
https://github.com/huggingface/transformers/pull/25692
1,863,740,427
PR_kwDOCUB6oc5YnqlN
25,692
Generate: logits processors are doctested and fix broken doctests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's nice to see the red cross 🔥 ", "_The documentation is not available anymore as the PR was closed or merged._", "ooh boy 🤦 ", "Super glad to see the full power of the `tests_pr_documentation_tests` on display 🚀 ", "@ArthurZucker ready for a review! I've updated the PR header with a summary of the changes :)" ]
1,692
1,692
1,692
MEMBER
null
# What does this PR do? This PR: - Adds `logits_processors.py` to the doctests - Fixes the broken tests (while fixing the tests, also improved the examples with better practices and shorter examples) - Updates sample-related examples to much shorter examples (the examples were too long and not very representative of the processor)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25692/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25692", "html_url": "https://github.com/huggingface/transformers/pull/25692", "diff_url": "https://github.com/huggingface/transformers/pull/25692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25692.patch", "merged_at": 1692963726000 }
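In the spirit of the shorter examples this PR adds, here is a minimal logits-processor usage sketch. It is a generic illustration, not one of the PR's actual doctests:

```py
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A list of colors: red, blue", return_tensors="pt")

# Mask the EOS token until the sequence reaches at least 20 tokens.
processors = LogitsProcessorList(
    [MinLengthLogitsProcessor(20, eos_token_id=model.config.eos_token_id)]
)
outputs = model.generate(**inputs, logits_processor=processors, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```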
https://api.github.com/repos/huggingface/transformers/issues/25691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25691/comments
https://api.github.com/repos/huggingface/transformers/issues/25691/events
https://github.com/huggingface/transformers/pull/25691
1,863,736,396
PR_kwDOCUB6oc5YnptN
25,691
correct resume training steps number in progress bar
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,692
1,692
1,692
CONTRIBUTOR
null
Hi, in the step-based resume-training code, I see that completed_steps always starts counting from epoch 0 and never gets corrected (the progress_bar needs to start at the step stored in the saved checkpoint). So I moved this code so that the progress_bar is updated with the latest step. I would like to cc @sgugger to review it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25691/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25691", "html_url": "https://github.com/huggingface/transformers/pull/25691", "diff_url": "https://github.com/huggingface/transformers/pull/25691.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25691.patch", "merged_at": 1692814154000 }
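A hypothetical sketch of the fix being described; the variable names mirror the accelerate example scripts, with placeholder values so it runs standalone:

```py
from tqdm.auto import tqdm

max_train_steps = 1000  # placeholder; comes from the training arguments
resume_step = 120       # placeholder; recovered from the checkpoint name

progress_bar = tqdm(range(max_train_steps))
completed_steps = resume_step

# Advance the bar once by the restored step count, outside the epoch loop,
# so it does not restart from 0 on every resumed run.
progress_bar.update(completed_steps)
```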
https://api.github.com/repos/huggingface/transformers/issues/25690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25690/comments
https://api.github.com/repos/huggingface/transformers/issues/25690/events
https://github.com/huggingface/transformers/issues/25690
1,863,697,216
I_kwDOCUB6oc5vFcNA
25,690
Memory leak
{ "login": "yurkoff-mv", "id": 82467993, "node_id": "MDQ6VXNlcjgyNDY3OTkz", "avatar_url": "https://avatars.githubusercontent.com/u/82467993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurkoff-mv", "html_url": "https://github.com/yurkoff-mv", "followers_url": "https://api.github.com/users/yurkoff-mv/followers", "following_url": "https://api.github.com/users/yurkoff-mv/following{/other_user}", "gists_url": "https://api.github.com/users/yurkoff-mv/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurkoff-mv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurkoff-mv/subscriptions", "organizations_url": "https://api.github.com/users/yurkoff-mv/orgs", "repos_url": "https://api.github.com/users/yurkoff-mv/repos", "events_url": "https://api.github.com/users/yurkoff-mv/events{/privacy}", "received_events_url": "https://api.github.com/users/yurkoff-mv/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "You need to call the garbage collector and `torch.cuda.empty_cache()` at the very least to get a chance to get the memory back.", "After every inference? I didn't do this before and it was done automatically.", "If you do a new inference, you won't see new memory used, just the same one being re-used. You need to properly empty cache/garbage collect when doing memory measurements.", "I call \r\n```\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\nafter each inference and the problem is gone. Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,695
1,695
NONE
null
### System Info OS: Ubuntu 20.04 - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Python packages: torch==2.0.1+cu118; sys_platform == 'linux' torchvision==0.15.2+cu118; sys_platform == 'linux' torchtext==0.15.2; sys_platform == 'linux' torchaudio==2.0.2+cu118; sys_platform == 'linux' psutil==5.9.5 requests==2.31.0 captum==0.6.0 packaging==23.1 pynvml==11.4.1 pyyaml==6.0 nvgpu cython==0.29.34 wheel==0.40.0 pillow==9.3.0 numpy==1.24.3 torchtext==0.15.2 torchserve==0.7.1 torch-model-archiver==0.7.1 transformers==4.31.0 tokenizers==0.13.3 sentencepiece==0.1.99 bitsandbytes==0.41.1 accelerate==0.21.0 scipy==1.10.1 ### Who can help? @sgugger, @muellerzr, @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import LlamaTokenizerFast, LlamaForCausalLM # model_name = 'TheBloke/Llama-2-13B-fp16' model_name = 'lmsys/vicuna-13b-v1.5-16k' tokenizer = LlamaTokenizerFast.from_pretrained(model_name ) model = LlamaForCausalLM.from_pretrained(model_name , load_in_8bit=True, device_map='sequential', torch_dtype=torch.float16, low_cpu_mem_usage=True, ) ``` **prompt = "LARGE PROMPT"** ``` inputs = self.tokenizer(prompts) output_ids = self.model.generate(torch.as_tensor(inputs.input_ids).to(self.device), do_sample=True, temperature=0.8, max_new_tokens=512, top_p=0.95, # synced_gpus=True, ) results = self.tokenizer.batch_decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] ``` **BREAKPOINT** Model after loading ![image](https://github.com/huggingface/transformers/assets/82467993/29aea701-9e9d-4876-992a-05f8e05495bc) Model in inference ![image](https://github.com/huggingface/transformers/assets/82467993/18115e2b-a9cf-4577-bd68-9c3fbbe5352e) Model after inference ![image](https://github.com/huggingface/transformers/assets/82467993/48689f90-79c6-4a6e-963b-0387a24e6a89) After the calculations are completed, the model does not return the used memory to the pool. ### Expected behavior After the calculations are completed, the amount of memory occupied should return to the previous values as when loading the model (56.5 % or ~14GB).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25690/timeline
completed
null
null
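A minimal sketch of the resolution reached in the comments: run the garbage collector and empty the CUDA cache before taking memory measurements after inference. This is a general measurement helper, not code from the issue:

```py
import gc
import torch

def report_gpu_memory(tag: str) -> None:
    # Collect dropped Python references first, then return cached blocks to
    # the driver; only after this do nvidia-smi-style numbers reflect live
    # tensors rather than PyTorch's caching allocator.
    gc.collect()
    torch.cuda.empty_cache()
    allocated = torch.cuda.memory_allocated() / 2**30
    print(f"{tag}: {allocated:.2f} GiB allocated")
```

Called once after loading the model and once after `generate(...)`, the two readings should match as long as no references to the generation outputs are kept alive.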
https://api.github.com/repos/huggingface/transformers/issues/25689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25689/comments
https://api.github.com/repos/huggingface/transformers/issues/25689/events
https://github.com/huggingface/transformers/pull/25689
1,863,669,579
PR_kwDOCUB6oc5YnbYN
25,689
[`LlamaTokenizer`] make unk_token_length a property
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Small nit to make sure the `unk_token_length` is updated with the unk token.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25689/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25689", "html_url": "https://github.com/huggingface/transformers/pull/25689", "diff_url": "https://github.com/huggingface/transformers/pull/25689.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25689.patch", "merged_at": 1692857014000 }
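A hypothetical illustration of the pattern in the PR title, computing the length lazily so it tracks the current unk token; the class and attribute names are made up for the sketch:

```py
class SentencePieceTokenizerSketch:
    def __init__(self, sp_model, unk_token="<unk>"):
        self.sp_model = sp_model  # a loaded sentencepiece.SentencePieceProcessor
        self.unk_token = unk_token

    @property
    def unk_token_length(self):
        # Recomputed on access, so it stays correct if `unk_token` changes
        # after __init__ (the failure mode a cached attribute would have).
        return len(self.sp_model.encode(str(self.unk_token)))
```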
https://api.github.com/repos/huggingface/transformers/issues/25688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25688/comments
https://api.github.com/repos/huggingface/transformers/issues/25688/events
https://github.com/huggingface/transformers/pull/25688
1,863,626,555
PR_kwDOCUB6oc5YnR-7
25,688
ImageProcessor - check if input pixel values between 0-255
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@rafaelpadilla @ArthurZucker Thanks bot for your detailed reviews! I've added the suggested updated: making the function name consistent with others `_is_scaled_image` -> `is_scaled_image` and any missing doc updates." ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Adds a warning in the image processor if the user is passing in images that have already had their pixel values rescaled between 0 and 1 while `do_rescale=True`. Additionally adds information in the docstring. This has caused some confusion and error reports recently, when users have been passing in images that have already been partially transformed. Related issues: #25195, #24857, #23096 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25688/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25688", "html_url": "https://github.com/huggingface/transformers/pull/25688", "diff_url": "https://github.com/huggingface/transformers/pull/25688.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25688.patch", "merged_at": 1692894277000 }
https://api.github.com/repos/huggingface/transformers/issues/25687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25687/comments
https://api.github.com/repos/huggingface/transformers/issues/25687/events
https://github.com/huggingface/transformers/pull/25687
1,863,545,614
PR_kwDOCUB6oc5YnAXp
25,687
Generate: general test for decoder-only generation from `inputs_embeds`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh #25664 motivates this PR because it was a reaction to #25659, which had an uncaught bug. This uncaught bug would have been detected by the test added in this PR :)\r\n\r\nRegarding the other PR comments: going to elaborate in the test comments (we disable EOS to always have a test length of 20 tokens, and disable PAD because generate tries to infer the attention mask from `input_ids` when it exists [and it obviously fails from `inputs_embeds`])", "(not important question)\r\n\r\nI know it's from #25659, but #25664 only avoids putting `input_ids` into the `kwargs` (using `update`). I am not seeing why the test added in this PR can fail if it is run against #25659 😅 I must miss something.\r\n\r\n", "@ydshieh before the fix, it would crash when `inputs_embeds` and `inputs_ids` are passed to generate (because the forward pass gets the two, and it triggers an exception), but it should be allowed :)\r\n\r\nThis is a case that the new test checks ", "I see, thank you for explaining! You are really king of generate (and Arthur knows everything now!)" ]
1,692
1,692
1,692
MEMBER
null
# What does this PR do? As discussed in #25664, adds a test to detect whether generating from `inputs_embeds` has exactly the same output as from the corresponding `input_ids`. This ensures contributed implementations are correct 🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25687", "html_url": "https://github.com/huggingface/transformers/pull/25687", "diff_url": "https://github.com/huggingface/transformers/pull/25687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25687.patch", "merged_at": 1692814622000 }
https://api.github.com/repos/huggingface/transformers/issues/25686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25686/comments
https://api.github.com/repos/huggingface/transformers/issues/25686/events
https://github.com/huggingface/transformers/pull/25686
1,863,452,583
PR_kwDOCUB6oc5Ymr-t
25,686
fix ram efficient fsdp init
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? 1. Currently, when using the Trainer, if the model is loaded before creating the `TrainingArguments` object, the torch distributed process group won't be initialized, and as such, when FSDP is enabled via the accelerate config, it will end up initializing the model with random weights on all ranks, as the `is_fsdp_enabled_and_dist_rank_0` function will always return `False`. This results in NaN losses. Quite a journey to uncover this bug. This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25686", "html_url": "https://github.com/huggingface/transformers/pull/25686", "diff_url": "https://github.com/huggingface/transformers/pull/25686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25686.patch", "merged_at": 1692856843000 }
https://api.github.com/repos/huggingface/transformers/issues/25685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25685/comments
https://api.github.com/repos/huggingface/transformers/issues/25685/events
https://github.com/huggingface/transformers/pull/25685
1,863,428,223
PR_kwDOCUB6oc5Ymmps
25,685
Fix `pad_token` check condition
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Fix #25625
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25685/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25685", "html_url": "https://github.com/huggingface/transformers/pull/25685", "diff_url": "https://github.com/huggingface/transformers/pull/25685.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25685.patch", "merged_at": 1692801568000 }
https://api.github.com/repos/huggingface/transformers/issues/25684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25684/comments
https://api.github.com/repos/huggingface/transformers/issues/25684/events
https://github.com/huggingface/transformers/pull/25684
1,863,201,659
PR_kwDOCUB6oc5Yl06F
25,684
[`Sentencepiece`] make sure `legacy` do not require `protobuf`
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,693
1,692
COLLABORATOR
null
# What does this PR do? Fixes the `get_spm_processor()` function to make sure that if `protobuf` is not installed, we can still initialize the model when `legacy=False`. Just realized that `make fixup` fails without protobuf :sweat: fixes #25753 ```python >>> import tensorflow as tf --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 import tensorflow as tf File /opt/conda/envs/py39/lib/python3.9/site-packages/tensorflow/__init__.py:37 34 import sys as _sys 35 import typing as _typing ---> 37 from tensorflow.python.tools import module_util as _module_util 38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader 40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import. File /opt/conda/envs/py39/lib/python3.9/site-packages/tensorflow/python/__init__.py:37 29 # We aim to keep this file minimal and ideally remove completely. 30 # If you are adding a new file with @tf_export decorators, 31 # import it in modules_with_exports.py instead. 32 33 # go/tf-wildcard-import 34 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top 36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow ---> 37 from tensorflow.python.eager import context 39 # pylint: enable=wildcard-import 40 41 # Bring in subpackages. 42 from tensorflow.python import data File /opt/conda/envs/py39/lib/python3.9/site-packages/tensorflow/python/eager/context.py:28 25 from absl import logging 26 import numpy as np ---> 28 from tensorflow.core.framework import function_pb2 29 from tensorflow.core.protobuf import config_pb2 30 from tensorflow.core.protobuf import coordination_config_pb2 File /opt/conda/envs/py39/lib/python3.9/site-packages/tensorflow/core/framework/function_pb2.py:7 5 import sys 6 _b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1')) ----> 7 from google.protobuf import descriptor as _descriptor 8 from google.protobuf import message as _message 9 from google.protobuf import reflection as _reflection ModuleNotFoundError: No module named 'google.protobuf' ``` fixes #25667
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25684/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/25684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25684", "html_url": "https://github.com/huggingface/transformers/pull/25684", "diff_url": "https://github.com/huggingface/transformers/pull/25684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25684.patch", "merged_at": 1692967264000 }
https://api.github.com/repos/huggingface/transformers/issues/25683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25683/comments
https://api.github.com/repos/huggingface/transformers/issues/25683/events
https://github.com/huggingface/transformers/pull/25683
1,863,173,803
PR_kwDOCUB6oc5YluwB
25,683
[WIP] Sanity CI check for safetensors 0.3.3
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25683", "html_url": "https://github.com/huggingface/transformers/pull/25683", "diff_url": "https://github.com/huggingface/transformers/pull/25683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25683.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25682/comments
https://api.github.com/repos/huggingface/transformers/issues/25682/events
https://github.com/huggingface/transformers/pull/25682
1,863,164,418
PR_kwDOCUB6oc5Ylssw
25,682
⚠️ [CLAP] Fix dtype of logit scales in init
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Note that in the original repo, the model is always cast to float16 for all training / inference. Thus, they likely never used the model in it's default dtype, and always relied on explicitly casting to float16" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? The dtype of the CLAP logit scale parameters was always float64 by default (even if the rest of the model was initialised in float32). This PR fixes the logit scales, such that they respect the default dtype of the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25682/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25682", "html_url": "https://github.com/huggingface/transformers/pull/25682", "diff_url": "https://github.com/huggingface/transformers/pull/25682.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25682.patch", "merged_at": 1692793057000 }
https://api.github.com/repos/huggingface/transformers/issues/25681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25681/comments
https://api.github.com/repos/huggingface/transformers/issues/25681/events
https://github.com/huggingface/transformers/issues/25681
1,862,877,122
I_kwDOCUB6oc5vCT_C
25,681
LlamaRotaryEmbedding (wrong cache value when casting model to float16/bfloat16)
{ "login": "KeremTurgutlu", "id": 19826777, "node_id": "MDQ6VXNlcjE5ODI2Nzc3", "avatar_url": "https://avatars.githubusercontent.com/u/19826777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KeremTurgutlu", "html_url": "https://github.com/KeremTurgutlu", "followers_url": "https://api.github.com/users/KeremTurgutlu/followers", "following_url": "https://api.github.com/users/KeremTurgutlu/following{/other_user}", "gists_url": "https://api.github.com/users/KeremTurgutlu/gists{/gist_id}", "starred_url": "https://api.github.com/users/KeremTurgutlu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KeremTurgutlu/subscriptions", "organizations_url": "https://api.github.com/users/KeremTurgutlu/orgs", "repos_url": "https://api.github.com/users/KeremTurgutlu/repos", "events_url": "https://api.github.com/users/KeremTurgutlu/events{/privacy}", "received_events_url": "https://api.github.com/users/KeremTurgutlu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante you recently worked on the extension of the cache for RotaryEmbeddings! Might affect other (dynamic ones)", "Hey @KeremTurgutlu 👋 \r\n\r\nIt is known that, when casting to 16 bits for inference purposes, you should use the exact casting strategy as used with the model at train time. We try to store that in the `torch_dtype` config field, whenever we have access to that information (e.g. [here](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/config.json#L21)). \r\n\r\nIn this particular case, the issue is compounded by the fact that the RoPE layer has buffers, which mask the issue in some cases.\r\n\r\n@ArthurZucker should we emit a warning when the model gets converted to a 16-bit format different from the `torch_dtype` field? 🤔 ", "This is the same bug that's discussed here\r\n\r\nhttps://github.com/EleutherAI/gpt-neox/issues/1003\r\n\r\nThe fix is to calculate sin and cos values in init and ensure they're not stored in buffers. Or don't cast the model, but instead use autocast, which avoids this issue. Note that with deepspeed it will always cast, so you need the fix. ", "There's also this #24262 and if we can have a code fix would be awesome than having another warning ", "@KeremTurgutlu \r\nIs this just an inaccuracy problem of float16 precision?\r\nThe last value shown in your snippet may be calculated following way. \r\n\r\n```\r\n>>> import torch\r\n>>> e = torch.tensor(0.1032)\r\n>>> e.cos()\r\ntensor(0.9947)\r\n>>> e.cos().to(torch.bfloat16)\r\ntensor(0.9961, dtype=torch.bfloat16)\r\n>>> e.cos().to(torch.float16)\r\ntensor(0.9946, dtype=torch.float16)\r\n```\r\n\r\ninv_freq is always float32 since it's converted using [`.float()`](https://pytorch.org/docs/stable/generated/torch.Tensor.float.html). Hence, the variable `t` in `_set_cos_sin_cache` is also always float32.\r\n", "> inv_freq is always float32 since it's converted using [`.float()`](https://pytorch.org/docs/stable/generated/torch.Tensor.float.html). Hence, the variable `t` in `_set_cos_sin_cache` is also always float32.\r\n\r\nNo, it's stored as a buffer, so it gets cast in some situations. See the full description of the bug and code to fix it here: https://github.com/EleutherAI/gpt-neox/issues/1003", "@ArthurZucker @gante I don't think this issue should be closed AFAICT.", "Yep, it’s on my todo when I’ll deep dive on all the llama related issues", "Sorry I'll get to this soon 🤗 ", "cc @fxmarty related to your #26836 and why we have to be extra careful with ROPE and float16! ", "I don't know what took me so long but this is similar to #25306 and can be fixed by something close to #27033 (which slows down a lot) but this should be fixed for all ROPEs that copy from Llama / use dynamic scaling. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,703
1,703
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.22.0.dev0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: FSDP - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - fsdp_config: {'fsdp_auto_wrap_policy': 'SIZE_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_forward_prefetch': False, 'fsdp_min_num_params': 100000000, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 2, 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': True} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - dynamo_config: {'dynamo_backend': 'INDUCTOR'} - PyTorch version (GPU?): 2.1.0.dev20230809+cu121 (True) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker would be the best person to discuss this. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **TL;DR If a model with a `LlamaRotaryEmbedding` layer is cast to bfloat16/float16 after initialization, and if during the forward pass a sequence with a sequence length > `self.max_position_embeddings` is used, then the cached cos and sin buffer values will most probably be different from those of the trained model, giving unexpected results.** I came across this very subtle error doing the following, and I am not sure what the best solution for this might be. I finetuned the Llama-2 model using accelerate FSDP and a bfloat16 mixed precision policy. I used a slightly different config than the original one, in which `max_position_embeddings=2048` was set. FSDP + accelerate uses autocast under the hood, which ensures the ops inside `LlamaRotaryEmbedding` run in full precision, which is great. The problem happens when we feed a sequence with a greater sequence length and also cast the model to a lower precision instead of using autocast.
I loaded this trained model using ```python load_checkpoint_and_dispatch(custom_config_model, str(fn), device_map={ "model":torch.cuda.current_device(), "lm_head":torch.cuda.current_device(), }, dtype=torch.bfloat16); ``` My custom config looked like this, notice `"max_position_embeddings": 2048,`: ``` LlamaConfig { "block_size": 2960, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 2048, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 32, "packed_inputs": false, "pad_token_id": 0, "prefix_lm": false, "pretraining_tp": 1, "rms_norm_eps": 1e-06, "rope_scaling": null, "tie_word_embeddings": false, "transformers_version": "4.31.0", "use_cache": true, "vocab_size": 64008 } ``` During inference, when testing the trained model, my training/validation perplexity increased from ~2.5 to ~20.0; it took me 2 days to figure out that the exact issue was model casting + having sequence lengths > max_position_embeddings. ### Potential Fixes: - Add a warning about this, and suggest using autocast during inference. - Add a warning about this, and suggest initializing the model with a very high `self.max_position_embeddings` value so that the cos-sin caches won't be re-initialized with wrong values due to lower precision. Even using `self.max_position_embeddings=80k` should be fine given the relatively small size of the buffer compared to the total model size. - Modify `LlamaRotaryEmbedding` so that float32 is always used in the ops and the result is cast to `x.dtype` only at the very end. This is a bit difficult because if a model is cast to bfloat16/float16, it will still produce different cache values even if it's cast back to float32. I don't know if there is a way to disable model casting for certain layers - but I guess that would be autocast 😄 This modified version will produce closer but still wrong cache values: ```python class LlamaRotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): super().__init__() self.dim = dim self.max_position_embeddings = max_position_embeddings self.base = base inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) self.register_buffer("inv_freq", inv_freq, persistent=False) # Build here to make `torch.jit.trace` work.
self._set_cos_sin_cache( seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype() ) def _set_cos_sin_cache(self, seq_len, device, dtype): self.max_seq_len_cached = seq_len t = torch.arange(self.max_seq_len_cached, device=device, dtype=dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq.to(dtype)) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False) self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False) def forward(self, x, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] if seq_len > self.max_seq_len_cached: self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.get_default_dtype()) return ( self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype), ) ``` I personally will keep `self.max_position_embeddings` as high as my max intended sequence length and also will use autocast where possible. ### Reproduction ```python # from https://github.com/huggingface/transformers/blob/3d1edb6c5d36bf6426e72223f534266ff29c45c4/src/transformers/models/llama/modeling_llama.py#L92C1-L125C10 class LlamaRotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): super().__init__() self.dim = dim self.max_position_embeddings = max_position_embeddings self.base = base inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) self.register_buffer("inv_freq", inv_freq, persistent=False) # Build here to make `torch.jit.trace` work. self._set_cos_sin_cache( seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype() ) def _set_cos_sin_cache(self, seq_len, device, dtype): self.max_seq_len_cached = seq_len t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False) self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False) def forward(self, x, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] if seq_len > self.max_seq_len_cached: self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype) return ( self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype), ) ``` ```python # expected cache values rotary_emb = LlamaRotaryEmbedding(2048) rotary_emb.cos_cached[:,:,:1024] tensor([[[[ 1.0000, 1.0000, 1.0000, ..., 1.0000, 1.0000, 1.0000], [ 0.5403, 0.5478, 0.5552, ..., 1.0000, 1.0000, 1.0000], [-0.4161, -0.3998, -0.3835, ..., 1.0000, 1.0000, 1.0000], ..., [-0.9998, 0.9651, -0.8084, ..., 0.9945, 0.9946, 0.9947], [-0.5550, 0.3096, 0.0407, ..., 0.9945, 0.9946, 0.9947], [ 0.4001, -0.6259, 0.8536, ..., 0.9945, 0.9946, 0.9947]]]]) # Wrong cache values when cast to bfloat16 rotary_emb.to(torch.bfloat16); # create an input > 2048 x = torch.randn(2, 32, 4096, 128) _ = rotary_emb(x, seq_len=4096) rotary_emb.cos_cached[:,:,:1024] tensor([[[[ 1.0000, 1.0000, 1.0000, ..., 1.0000, 1.0000, 1.0000], [ 0.5391, 0.5469, 0.5547, ..., 1.0000, 1.0000, 1.0000], [-0.4160, 
-0.4023, -0.3809, ..., 1.0000, 1.0000, 1.0000], ..., [-0.5273, 0.9180, 0.5625, ..., 0.9961, 0.9961, 0.9961], [ 0.9883, -0.3008, 0.2578, ..., 0.9961, 0.9961, 0.9961], [ 0.9883, -0.3008, 0.2578, ..., 0.9961, 0.9961, 0.9961]]]]) # try with float16 this time rotary_emb = LlamaRotaryEmbedding(2048) # cast model to float16 rotary_emb.to(torch.float16); rotary_emb.cos_cached[:,:,:1024] # create an input > 2048 x = torch.randn(2, 32, 4096, 128) _ = rotary_emb(x, seq_len=4096) rotary_emb.cos_cached[:,:,:1024] tensor([[[[ 1.0000, 1.0000, 1.0000, ..., 1.0000, 1.0000, 1.0000], [ 0.5405, 0.5479, 0.5552, ..., 1.0000, 1.0000, 1.0000], [-0.4163, -0.4001, -0.3831, ..., 1.0000, 1.0000, 1.0000], ..., [-1.0000, 0.9185, -0.9453, ..., 0.9946, 0.9946, 0.9946], [-0.5552, 0.1628, -0.2366, ..., 0.9946, 0.9946, 0.9946], [ 0.4001, -0.7422, 0.6899, ..., 0.9946, 0.9946, 0.9946]]]]) ``` cc: @ArthurZucker ### Expected behavior Same cache values for rotary embeddings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25681/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/25681/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25680/comments
https://api.github.com/repos/huggingface/transformers/issues/25680/events
https://github.com/huggingface/transformers/pull/25680
1,862,843,547
PR_kwDOCUB6oc5Ykm_V
25,680
Remove `utils/documentation_tests.txt`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25680). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Let's remove `utils/documentation_tests.txt` and just keep `not_doctested.txt`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25680/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25680", "html_url": "https://github.com/huggingface/transformers/pull/25680", "diff_url": "https://github.com/huggingface/transformers/pull/25680.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25680.patch", "merged_at": 1692782085000 }
https://api.github.com/repos/huggingface/transformers/issues/25679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25679/comments
https://api.github.com/repos/huggingface/transformers/issues/25679/events
https://github.com/huggingface/transformers/pull/25679
1,862,835,386
PR_kwDOCUB6oc5YklQf
25,679
🌐 [i18n-KO] Translated `visual_question_answering.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "안녕하세요!\r\n\r\n🔥 번역에 참여해주신 모든 분들께 정말 감사드립니다. 이번 번역에서는 특정 이미지에 대해서 설명이 나와 보면서 번역이 알맞는지 확인해야했습니다. 오랜만에 번역하는 것이다보니 오류가 많을 것으로 예상되는데요. 어떤 부분이든 모두 주저없이 말씀해주세요 :)\r\n\r\n리뷰는 일방적인 수용이 아닌 서로간의 대화를 전제로 이루어집니다. 때문에 \"어라, 이건 내가 맞아!\"라고 주장해주시면 오히려 더 재밌는 리뷰 시간이 될 것입니다. 지금까지 최고의 리뷰를 꾸준히 올려주시는 @nuatmochoi 님 🥇 , @heuristicwave 님 🥈 , @sim-so 님 🥉 도 모두 리뷰 자유롭게 올려주세요. 감사합니다. 💯 😆", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25679). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
CONTRIBUTOR
null
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! --> # What does this PR do? Translated the `visual_question_answering.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar check - [x] Review or add new terms to the glossary - [x] Check inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Once all the checks above are complete, mention the team members below to request a review! --> Team OSSCA, could you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only after the team review is finished, reveal the comment below to request a review from the Hugging Face staff! --> Could you please review this PR? @sgugger, @ArthurZucker, @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25679/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25679", "html_url": "https://github.com/huggingface/transformers/pull/25679", "diff_url": "https://github.com/huggingface/transformers/pull/25679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25679.patch", "merged_at": 1692900898000 }
https://api.github.com/repos/huggingface/transformers/issues/25678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25678/comments
https://api.github.com/repos/huggingface/transformers/issues/25678/events
https://github.com/huggingface/transformers/pull/25678
1,862,765,245
PR_kwDOCUB6oc5YkWEU
25,678
Sets the stalebot to 10 AM CEST
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
MEMBER
null
This sets the stale bot trigger time to 10 AM CEST rather than 5 PM CEST, as all core maintainers on watch duty are now in the European timezone. cc @amyeroberts @ArthurZucker @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25678/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25678/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25678", "html_url": "https://github.com/huggingface/transformers/pull/25678", "diff_url": "https://github.com/huggingface/transformers/pull/25678.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25678.patch", "merged_at": 1692793268000 }
https://api.github.com/repos/huggingface/transformers/issues/25677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25677/comments
https://api.github.com/repos/huggingface/transformers/issues/25677/events
https://github.com/huggingface/transformers/issues/25677
1,862,624,951
I_kwDOCUB6oc5vBWa3
25,677
Transformers documentation translation to Macedonian
{ "login": "NinoRisteski", "id": 95188570, "node_id": "U_kgDOBax2Wg", "avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NinoRisteski", "html_url": "https://github.com/NinoRisteski", "followers_url": "https://api.github.com/users/NinoRisteski/followers", "following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}", "gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}", "starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions", "organizations_url": "https://api.github.com/users/NinoRisteski/orgs", "repos_url": "https://api.github.com/users/NinoRisteski/repos", "events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}", "received_events_url": "https://api.github.com/users/NinoRisteski/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Hi @sgugger, is it ok to work on this?" ]
1,692
1,692
null
CONTRIBUTOR
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the Macedonian-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `mk` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `mk/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25677/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/25676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25676/comments
https://api.github.com/repos/huggingface/transformers/issues/25676/events
https://github.com/huggingface/transformers/pull/25676
1,862,624,485
PR_kwDOCUB6oc5Yj30m
25,676
Fix typo in `configuration_gpt2.py`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25676). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25676/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25676", "html_url": "https://github.com/huggingface/transformers/pull/25676", "diff_url": "https://github.com/huggingface/transformers/pull/25676.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25676.patch", "merged_at": 1692816003000 }
https://api.github.com/repos/huggingface/transformers/issues/25675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25675/comments
https://api.github.com/repos/huggingface/transformers/issues/25675/events
https://github.com/huggingface/transformers/issues/25675
1,862,437,392
I_kwDOCUB6oc5vAooQ
25,675
`self.tokenizer` is nil in example code
{ "login": "adsr", "id": 315003, "node_id": "MDQ6VXNlcjMxNTAwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/315003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adsr", "html_url": "https://github.com/adsr", "followers_url": "https://api.github.com/users/adsr/followers", "following_url": "https://api.github.com/users/adsr/following{/other_user}", "gists_url": "https://api.github.com/users/adsr/gists{/gist_id}", "starred_url": "https://api.github.com/users/adsr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adsr/subscriptions", "organizations_url": "https://api.github.com/users/adsr/orgs", "repos_url": "https://api.github.com/users/adsr/repos", "events_url": "https://api.github.com/users/adsr/events{/privacy}", "received_events_url": "https://api.github.com/users/adsr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just noticed the example code on https://huggingface.co/cerebras/btlm-3b-8k-base differs from https://huggingface.co/tasks/text-generation. That's probably it." ]
1,692
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.33.0.dev0 - Platform: Linux-6.1.0-9-amd64-x86_64-with-glibc2.36 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: N - Using distributed or parallel set-up in script?: N ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (Apologies if I have something wrong in my setup.) I'm running into an error while following along with a simple example from https://huggingface.co/tasks/text-generation: ``` user@host:~/test$ venv/bin/python -c 'from transformers import pipeline; pipe = pipeline("text-generation", model="cerebras/btlm-3b-8k-base", trust_remote_code=True); print(pipe("hello world", max_length=1, num_return_sequences=1))' Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/test/test/venv/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 204, in __call__ return super().__call__(text_inputs, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/test/test/venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1129, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/test/test/venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1135, in run_single model_inputs = self.preprocess(inputs, **preprocess_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/test/test/venv/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 207, in preprocess inputs = self.tokenizer( ^^^^^^^^^^^^^^^ TypeError: 'NoneType' object is not callable user@host:~/test$ ``` For readability, here is the same Python code formatted: ```python from transformers import pipeline pipe = pipeline("text-generation", model="cerebras/btlm-3b-8k-base", trust_remote_code=True) print(pipe("hello world", max_length=1, num_return_sequences=1)) ``` ### Expected behavior No exception, or a friendlier one
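A common workaround until the pipeline resolves remote-code tokenizers correctly is to load the tokenizer explicitly and pass it in. A minimal sketch, assuming the checkpoint's tokenizer loads via `AutoTokenizer` (not verified against this exact repo):

```python
from transformers import AutoTokenizer, pipeline

# Load the tokenizer explicitly so the pipeline does not have to infer it
# from the remote-code repo, where it currently resolves to None.
tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")

pipe = pipeline(
    "text-generation",
    model="cerebras/btlm-3b-8k-base",
    tokenizer=tokenizer,
    trust_remote_code=True,
)
print(pipe("hello world", max_new_tokens=16, num_return_sequences=1))
```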
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25675/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25674/comments
https://api.github.com/repos/huggingface/transformers/issues/25674/events
https://github.com/huggingface/transformers/pull/25674
1,862,390,052
PR_kwDOCUB6oc5YjGL3
25,674
🌐 [i18n-KO] Translated `community.md` to Korean
{ "login": "sim-so", "id": 96299403, "node_id": "U_kgDOBb1piw", "avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sim-so", "html_url": "https://github.com/sim-so", "followers_url": "https://api.github.com/users/sim-so/followers", "following_url": "https://api.github.com/users/sim-so/following{/other_user}", "gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}", "starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sim-so/subscriptions", "organizations_url": "https://api.github.com/users/sim-so/orgs", "repos_url": "https://api.github.com/users/sim-so/repos", "events_url": "https://api.github.com/users/sim-so/events{/privacy}", "received_events_url": "https://api.github.com/users/sim-so/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "리뷰 하실 때 참고하실 점 공유 드립니다.\r\n - [`Nlp`](https://huggingface.co/docs/datasets/v0.3.0/installation.html)는 `Datasets` 이전에 사용된 라이브러리 이름입니다. v1.0.0부터 `Datasets`로 바뀌었습니다. (현재 가장 최신 Datasets 버전은 v2.14입니다!) 모든 노트북에서 잘 실행되지는 않는 것 같지만 여전히 `Nlp` 이름으로 모듈을 다운로드 받을 수 있고, `Datasets`로 바꿀 때 관련 노트북도 수정이 필요할 것 같아 원문의 이름을 유지했습니다. \r\n - 모델 이름인 `transformer`는 한글로, Hugging Face 라이브러리 이름인 `Transformers`는 영어로 옮겼습니다.\r\n - Hugging Face 라이브러리 앞에 쓰는 🤗 이모지는 원문에서 쓴 그대로 옮겼습니다. 원문에서도 일관되지 않게 붙이고 있는데요, 전부 붙일지 고민하다 일단 그대로 두었습니다.\r\n\r\n위 내용 관련하여 의견 있으시면 편히 남겨주세요! 리뷰 잘 부탁 드립니다 😊", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25674). All of your documentation changes will be reflected on that endpoint.", "@sgugger, @ArthurZucker, @stevhliu \r\nMay you please review this PR? 😊" ]
1,692
1,693
1,693
CONTRIBUTOR
null
# What does this PR do? Translated the `community.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar check - [x] Review or add new terms to glossary - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas ## Who can review? (Initial) Team OSSCA, could you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) @sgugger, @ArthurZucker, @stevhliu Could you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25674/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25674/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25674", "html_url": "https://github.com/huggingface/transformers/pull/25674", "diff_url": "https://github.com/huggingface/transformers/pull/25674.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25674.patch", "merged_at": 1693324045000 }
https://api.github.com/repos/huggingface/transformers/issues/25673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25673/comments
https://api.github.com/repos/huggingface/transformers/issues/25673/events
https://github.com/huggingface/transformers/issues/25673
1,862,337,256
I_kwDOCUB6oc5vAQLo
25,673
Seems to be a bug in documentation with Speech Encoder Decoder Model Training
{ "login": "greeshmasmenon", "id": 102393140, "node_id": "U_kgDOBhplNA", "avatar_url": "https://avatars.githubusercontent.com/u/102393140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/greeshmasmenon", "html_url": "https://github.com/greeshmasmenon", "followers_url": "https://api.github.com/users/greeshmasmenon/followers", "following_url": "https://api.github.com/users/greeshmasmenon/following{/other_user}", "gists_url": "https://api.github.com/users/greeshmasmenon/gists{/gist_id}", "starred_url": "https://api.github.com/users/greeshmasmenon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/greeshmasmenon/subscriptions", "organizations_url": "https://api.github.com/users/greeshmasmenon/orgs", "repos_url": "https://api.github.com/users/greeshmasmenon/repos", "events_url": "https://api.github.com/users/greeshmasmenon/events{/privacy}", "received_events_url": "https://api.github.com/users/greeshmasmenon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The line where the model takes in the input features should actually be:\r\n\r\n`loss = model(input_values=input_values, labels=labels).loss`", "Hey! Thanks for noticing 🤗 Would you like to open a PR and fixe this? You could also try running the doctests to find if there are other parts that are not up to date ", "Sure, I will submit a PR by the end of the week.\r\n\r\nOn Wed, Aug 23, 2023 at 7:59 AM Arthur ***@***.***> wrote:\r\n\r\n> Hey! Thanks for noticing 🤗 Would you like to open a PR and fixe this? You\r\n> could also try running the doctests to find if there are other parts that\r\n> are not up to date\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25673#issuecomment-1689327003>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AYNGKNFMG5WEYMM2TSEORUTXWWL3TANCNFSM6AAAAAA32UADQA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n\r\n\r\n-- \r\nThanks and Regards,\r\nGreeshma S Menon\r\n", "Awesome, thanks @greeshmasmenon!" ]
1,692
1,692
1,692
NONE
null
### System Info @sanchit-gandhi - Can you please take a look at the below? The documentation [here](https://huggingface.co/docs/transformers/main/model_doc/speech-encoder-decoder) doesn't seem to execute. It looks like there is a call being made to `loss = model(**input_features).loss` when input_features has not been initialized yet. ``` from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel from datasets import load_dataset encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder decoder_id = "bert-base-uncased" # text decoder feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) tokenizer = AutoTokenizer.from_pretrained(decoder_id) # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id) model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id # load an audio input and pre-process (normalise mean/std to 0/1) ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # load its corresponding transcription and tokenize to generate labels labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids # the forward function automatically creates the correct decoder_input_ids loss = model(**input_features).loss loss.backward() ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Execute the code anywhere. It is an official example. ### Expected behavior To execute and show the backward loss. Instead, you get the error - `NameError: name 'input_features' is not defined `
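For reference, here is the corrected end of the snippet, applying the fix quoted in the comments above (it reuses the `input_values` and `labels` tensors the example actually builds):

```python
from datasets import load_dataset
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h", "bert-base-uncased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids

# The fix: pass the tensors defined above instead of the undefined `input_features`
loss = model(input_values=input_values, labels=labels).loss
loss.backward()
```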
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25673/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25672/comments
https://api.github.com/repos/huggingface/transformers/issues/25672/events
https://github.com/huggingface/transformers/issues/25672
1,862,326,751
I_kwDOCUB6oc5vANnf
25,672
Llama2-13b-chat: Output contains part of prompt (including [/INST] tag)
{ "login": "mukundt", "id": 5006978, "node_id": "MDQ6VXNlcjUwMDY5Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/5006978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mukundt", "html_url": "https://github.com/mukundt", "followers_url": "https://api.github.com/users/mukundt/followers", "following_url": "https://api.github.com/users/mukundt/following{/other_user}", "gists_url": "https://api.github.com/users/mukundt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mukundt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mukundt/subscriptions", "organizations_url": "https://api.github.com/users/mukundt/orgs", "repos_url": "https://api.github.com/users/mukundt/repos", "events_url": "https://api.github.com/users/mukundt/events{/privacy}", "received_events_url": "https://api.github.com/users/mukundt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you provided a reproducer? And share the transformers version you are using? ", "@ArthurZucker unfortunately I can't provide a specific reproducer because this model was fine-tuned on a customer's data set, but here is how I produced the problematic output:\r\n```\r\nUser: {msg 1}\r\nAssistant: {msg 2}\r\nUser: {msg 3}\r\nAssistant: {msg 4}\r\nUser: {msg 2}\r\nAssistant: {outputs part of msg 3, including [/INST]}\r\n```\r\n\r\nI used [this script for training](https://github.com/philschmid/huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb), which installs transformers==4.31.0.", "@mukundt looking at your example, it may be as simple as the model knowing that it can copy-paste parts of `msg 3`, as it has the history as context and it has seen `msg 3` following `msg 2`.\r\n\r\nIf you are using `.generate()`, you can set this token as impossible to generate through `bad_word_ids` ([reference](https://huggingface.co/docs/transformers/v4.32.0/en/internal/generation_utils#transformers.NoBadWordsLogitsProcessor), which has an example)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
### System Info SageMaker + Llama2-13b-chat-hf (finetuned) ### Who can help? @ArthurZucker @younesbelkada @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction If I copy the agent response back to the prompt (as a user input), it regurgitates the previous output including the [/INST] tag, which is very strange. **Prompt** ``` <s>[INST] <<SYS>> {{ system prompt }} <</SYS>> xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] {{ this is where I copy the agent's response and pass it in as user input, without any special tags }} [/INST] ``` **Generated Response verbatim** ``` No, we didn't receive your payment... [/INST] I'm sorry to hear that. Would you like to make a payment now? ``` ### Expected behavior Output should not contain [/INST]
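Building on the `bad_words_ids` suggestion in the comments above, one way to ban the `[/INST]` sequence at generation time is sketched below; the model id is a placeholder for the actual fine-tuned checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-chat-hf"  # placeholder: substitute the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Token ids that make up "[/INST]"; add_special_tokens=False keeps BOS out of the list
bad_words_ids = [tokenizer("[/INST]", add_special_tokens=False).input_ids]

prompt = "<s>[INST] hello [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, bad_words_ids=bad_words_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note this bans those ids anywhere in the continuation, so it is a blunt instrument; cleaning the fine-tuning data so completions never contain `[/INST]` is the more durable fix.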
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25672/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25671/comments
https://api.github.com/repos/huggingface/transformers/issues/25671/events
https://github.com/huggingface/transformers/issues/25671
1,862,270,396
I_kwDOCUB6oc5u__28
25,671
Dino V2 pre-training
{ "login": "schmidt-ai", "id": 132930658, "node_id": "U_kgDOB-xcYg", "avatar_url": "https://avatars.githubusercontent.com/u/132930658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/schmidt-ai", "html_url": "https://github.com/schmidt-ai", "followers_url": "https://api.github.com/users/schmidt-ai/followers", "following_url": "https://api.github.com/users/schmidt-ai/following{/other_user}", "gists_url": "https://api.github.com/users/schmidt-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/schmidt-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schmidt-ai/subscriptions", "organizations_url": "https://api.github.com/users/schmidt-ai/orgs", "repos_url": "https://api.github.com/users/schmidt-ai/repos", "events_url": "https://api.github.com/users/schmidt-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/schmidt-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed it's a bit out of scope for `transformers`, but can very well be shared on [the forum ](https://discuss.huggingface.co/)! 🤗 ", "Awesome, thanks!", "I apologize for responding to a closed issue, but for anybody who needs this feature I want to briefly mention that I have modified facebook's code to train a huggingface ViT using the DINO v1 Training method here: https://github.com/PaulKMandal/huggingface-dino" ]
1,692
1,702
1,692
NONE
null
### Feature request I'm a newcomer to `transformers`; I found it for its implementation of Dino V2. I take it that the scope of the `models` is mainly inference and (supervised) fine-tuning; so maybe this request is fundamentally out of scope. Would it be feasible to implement the core pre-training loop (self-distillation w/ student and teacher) and loss functions (token loss, iBot loss, KoLeo) of Dino V2 in `transformers`? ### Motivation This would enable users to end-to-end train Dino V2 on their own pre-training dataset. ### Your contribution Happy to help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25671/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25670/comments
https://api.github.com/repos/huggingface/transformers/issues/25670/events
https://github.com/huggingface/transformers/pull/25670
1,862,169,881
PR_kwDOCUB6oc5YiWhp
25,670
Use Version instead of version.parse to compare versions
{ "login": "riteshghorse", "id": 25881114, "node_id": "MDQ6VXNlcjI1ODgxMTE0", "avatar_url": "https://avatars.githubusercontent.com/u/25881114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riteshghorse", "html_url": "https://github.com/riteshghorse", "followers_url": "https://api.github.com/users/riteshghorse/followers", "following_url": "https://api.github.com/users/riteshghorse/following{/other_user}", "gists_url": "https://api.github.com/users/riteshghorse/gists{/gist_id}", "starred_url": "https://api.github.com/users/riteshghorse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riteshghorse/subscriptions", "organizations_url": "https://api.github.com/users/riteshghorse/orgs", "repos_url": "https://api.github.com/users/riteshghorse/repos", "events_url": "https://api.github.com/users/riteshghorse/events{/privacy}", "received_events_url": "https://api.github.com/users/riteshghorse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,692
1,694
1,694
CONTRIBUTOR
null
# What does this PR do? Use `packaging.version.Version` to compare torch version instead of `packaging.version.parse`. Fixes #25669 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25670/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25670", "html_url": "https://github.com/huggingface/transformers/pull/25670", "diff_url": "https://github.com/huggingface/transformers/pull/25670.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25670.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25669/comments
https://api.github.com/repos/huggingface/transformers/issues/25669/events
https://github.com/huggingface/transformers/issues/25669
1,862,169,008
I_kwDOCUB6oc5u_nGw
25,669
version.parse(string) throws an error while comparing versions
{ "login": "riteshghorse", "id": 25881114, "node_id": "MDQ6VXNlcjI1ODgxMTE0", "avatar_url": "https://avatars.githubusercontent.com/u/25881114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riteshghorse", "html_url": "https://github.com/riteshghorse", "followers_url": "https://api.github.com/users/riteshghorse/followers", "following_url": "https://api.github.com/users/riteshghorse/following{/other_user}", "gists_url": "https://api.github.com/users/riteshghorse/gists{/gist_id}", "starred_url": "https://api.github.com/users/riteshghorse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riteshghorse/subscriptions", "organizations_url": "https://api.github.com/users/riteshghorse/orgs", "repos_url": "https://api.github.com/users/riteshghorse/repos", "events_url": "https://api.github.com/users/riteshghorse/events{/privacy}", "received_events_url": "https://api.github.com/users/riteshghorse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting. Could you share a reproducer as well? \r\n```python \r\nfrom transformers import TextToAudioPipeline\r\n```\r\ndoes not throw an error so curious as how you got this? ", "I got it in our PyDocs Precommit check\r\n```\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/sphinx/ext/autodoc/importer.py\", line 154, in import_module\r\n __import__(modname)\r\n File \"/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/apache_beam/ml/inference/huggingface_inference.py\", line 39, in <module>\r\n from transformers import Pipeline\r\n File \"<frozen importlib._bootstrap>\", line 1039, in _handle_fromlist\r\n File \"/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/utils/import_utils.py\", line 1120, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/utils/import_utils.py\", line 1132, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\nexpected string or bytes-like object\r\n```\r\n\r\nFull link here: https://ci-beam.apache.org/job/beam_PreCommit_PythonDocs_Phrase/108/consoleText", "actually `version.parse` will work just as fine but won't solve the issue here", "The problem was on our end. We were mocking the torch import in autodoc which led to `torch.__version__` interpreted as None and threw the error `RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\nexpected string or bytes-like object`" ]
1,692
1,694
1,694
CONTRIBUTOR
null
### System Info In the [activations.py](https://github.com/huggingface/transformers/blob/977b2f05d5697f33e51111e4834a127a9a76349f/src/transformers/activations.py#L161), it uses version.parse to compare the torch version which can lead to subtle error as mentioned below. When using `version.parse(str)`, it returns a `Version` object which is then sent to regex matching [here](https://github.com/pypa/packaging/blob/7e68d828f265ef05cf4cd3b5def9baffef8c2968/src/packaging/version.py#L198). This throws an error stating ``` match = self._regex.search(version) TypeError: expected string or bytes-like object ``` This was discovered when importing the latest pipeline of text to audio pipeline ``` File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 73, in <module> from .text_to_audio import TextToAudioPipeline File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/pipelines/text_to_audio.py", line 22, in <module> from ..models.speecht5.modeling_speecht5 import SpeechT5HifiGan File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/models/speecht5/modeling_speecht5.py", line 27, in <module> from ...activations import ACT2FN File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/activations.py", line 250, in <module> mish = get_activation("mish") File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/activations.py", line 238, in get_activation return ACT2FN[activation_string] File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/activations.py", line 210, in __getitem__ return cls(**kwargs) File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/transformers/activations.py", line 161, in __init__ if version.parse(torch.__version__) < version.parse("1.9.0"): File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/packaging/version.py", line 52, in parse return Version(version) File "/home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/packaging/version.py", line 196, in __init__ match = self._regex.search(version) TypeError: expected string or bytes-like object ``` ### Who can help? 
@narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from packaging import version version.parse(torch.__version__) ``` ### Expected behavior Instead, we should do ``` from packaging.version import Version Version(torch.__version__) ```
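As the comments below conclude, `version.parse` itself is not the culprit; the crash appears when `torch.__version__` is not actually a string (here, `None` because the torch import was mocked during autodoc). A small sketch reproducing and guarding against that case:

```python
from packaging import version

torch_version = None  # what a mocked `torch.__version__` effectively becomes

try:
    version.parse(torch_version) < version.parse("1.9.0")
except TypeError as exc:
    print(f"version.parse(None) fails: {exc}")  # expected string or bytes-like object

# Guarding the comparison avoids the crash when the import is mocked
if isinstance(torch_version, str) and version.parse(torch_version) < version.parse("1.9.0"):
    print("torch older than 1.9.0")
```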
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25669/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25668/comments
https://api.github.com/repos/huggingface/transformers/issues/25668/events
https://github.com/huggingface/transformers/pull/25668
1,862,158,688
PR_kwDOCUB6oc5YiUFM
25,668
Add tf donut
{ "login": "FrancescoPinto", "id": 18230373, "node_id": "MDQ6VXNlcjE4MjMwMzcz", "avatar_url": "https://avatars.githubusercontent.com/u/18230373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancescoPinto", "html_url": "https://github.com/FrancescoPinto", "followers_url": "https://api.github.com/users/FrancescoPinto/followers", "following_url": "https://api.github.com/users/FrancescoPinto/following{/other_user}", "gists_url": "https://api.github.com/users/FrancescoPinto/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancescoPinto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancescoPinto/subscriptions", "organizations_url": "https://api.github.com/users/FrancescoPinto/orgs", "repos_url": "https://api.github.com/users/FrancescoPinto/repos", "events_url": "https://api.github.com/users/FrancescoPinto/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancescoPinto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "Hi @FrancescoPinto - thanks for opening this PR! Let us know when it's ready for review \r\n\r\ncc @Rocketknight1 @rafaelpadilla ", "Will be happy to review whenever you're ready as well!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,697
1,697
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25668/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25668", "html_url": "https://github.com/huggingface/transformers/pull/25668", "diff_url": "https://github.com/huggingface/transformers/pull/25668.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25668.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25667/comments
https://api.github.com/repos/huggingface/transformers/issues/25667/events
https://github.com/huggingface/transformers/issues/25667
1,862,157,355
I_kwDOCUB6oc5u_kQr
25,667
Loading Flan-T5 tokenizer throwing `UnboundLocalError` for variable `sentencepiece_model_pb2`
{ "login": "PyroGenesis", "id": 17806916, "node_id": "MDQ6VXNlcjE3ODA2OTE2", "avatar_url": "https://avatars.githubusercontent.com/u/17806916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PyroGenesis", "html_url": "https://github.com/PyroGenesis", "followers_url": "https://api.github.com/users/PyroGenesis/followers", "following_url": "https://api.github.com/users/PyroGenesis/following{/other_user}", "gists_url": "https://api.github.com/users/PyroGenesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/PyroGenesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PyroGenesis/subscriptions", "organizations_url": "https://api.github.com/users/PyroGenesis/orgs", "repos_url": "https://api.github.com/users/PyroGenesis/repos", "events_url": "https://api.github.com/users/PyroGenesis/events{/privacy}", "received_events_url": "https://api.github.com/users/PyroGenesis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "**Update:**\r\n\r\nI ran `pip install protobuf` and the tokenizer works now. \r\n\r\nIs this requirement listed anywhere? I don't recall doing this the last time I set up this tokenizer.", "It is a dependency listed in the `setup.py` see [here](https://github.com/huggingface/transformers/blob/main/setup.py#L143), but it is not a hard dep. The error is indeed a bug on our side. Opening a PR to raise an error if `protobuf` is not installed and if people use `legacy = False`!. ", "thx", "everyone~" ]
1,692
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.32.0 - Platform: Windows-10-10.0.20348-SP0 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes (but haven't loaded model yet) - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code: ```py from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base") ``` Error: ``` You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=True`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 --------------------------------------------------------------------------- UnboundLocalError Traceback (most recent call last) Cell In[1], line 2 1 from transformers import T5Tokenizer, T5ForConditionalGeneration ----> 2 tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base") File ~\Documents\flan-t5\lib\site-packages\transformers\tokenization_utils_base.py:1854, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs) 1851 else: 1852 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") -> 1854 return cls._from_pretrained( 1855 resolved_vocab_files, 1856 pretrained_model_name_or_path, 1857 init_configuration, 1858 *init_inputs, 1859 token=token, 1860 cache_dir=cache_dir, 1861 local_files_only=local_files_only, 1862 _commit_hash=commit_hash, 1863 _is_local=is_local, 1864 **kwargs, 1865 ) File ~\Documents\flan-t5\lib\site-packages\transformers\tokenization_utils_base.py:2017, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs) 2015 # Instantiate tokenizer. 2016 try: -> 2017 tokenizer = cls(*init_inputs, **init_kwargs) 2018 except OSError: 2019 raise OSError( 2020 "Unable to load vocabulary from file. " 2021 "Please check that the provided vocabulary is accessible and not corrupted." 
2022 ) File ~\Documents\flan-t5\lib\site-packages\transformers\models\t5\tokenization_t5.py:194, in T5Tokenizer.__init__(self, vocab_file, eos_token, unk_token, pad_token, extra_ids, additional_special_tokens, sp_model_kwargs, legacy, **kwargs) 191 self.vocab_file = vocab_file 192 self._extra_ids = extra_ids --> 194 self.sp_model = self.get_spm_processor() File ~\Documents\flan-t5\lib\site-packages\transformers\models\t5\tokenization_t5.py:200, in T5Tokenizer.get_spm_processor(self) 198 with open(self.vocab_file, "rb") as f: 199 sp_model = f.read() --> 200 model_pb2 = import_protobuf() 201 model = model_pb2.ModelProto.FromString(sp_model) 202 if not self.legacy: File ~\Documents\flan-t5\lib\site-packages\transformers\convert_slow_tokenizer.py:40, in import_protobuf() 38 else: 39 from transformers.utils import sentencepiece_model_pb2_new as sentencepiece_model_pb2 ---> 40 return sentencepiece_model_pb2 UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment ``` ### Expected behavior It's supposed to simply load the default tokenizer. This code was working fine earlier (on a different machine).
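The maintainer's follow-up mentions opening a PR to raise a proper error when `protobuf` is missing and `legacy=False` is used. Roughly, the guard in `import_protobuf` could look like the sketch below (the actual upstream fix may differ in wording):

```python
from packaging import version
from transformers.utils import is_protobuf_available


def import_protobuf():
    if is_protobuf_available():
        import google.protobuf

        if version.parse(google.protobuf.__version__) < version.parse("4.0.0"):
            from transformers.utils import sentencepiece_model_pb2
        else:
            from transformers.utils import sentencepiece_model_pb2_new as sentencepiece_model_pb2
        return sentencepiece_model_pb2
    # Previously this fell through and hit an UnboundLocalError; raising here
    # points users at the real problem: the missing protobuf dependency.
    raise ImportError("You need to `pip install protobuf` to load this tokenizer.")
```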
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25667/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25667/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25666/comments
https://api.github.com/repos/huggingface/transformers/issues/25666/events
https://github.com/huggingface/transformers/issues/25666
1,862,118,375
I_kwDOCUB6oc5u_avn
25,666
Unable to follow Object Detection Task Example due to ImageProcessor error
{ "login": "govindrai", "id": 13859249, "node_id": "MDQ6VXNlcjEzODU5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/govindrai", "html_url": "https://github.com/govindrai", "followers_url": "https://api.github.com/users/govindrai/followers", "following_url": "https://api.github.com/users/govindrai/following{/other_user}", "gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}", "starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/govindrai/subscriptions", "organizations_url": "https://api.github.com/users/govindrai/orgs", "repos_url": "https://api.github.com/users/govindrai/repos", "events_url": "https://api.github.com/users/govindrai/events{/privacy}", "received_events_url": "https://api.github.com/users/govindrai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts seems to come from #25464", "Hi @govindrai, thanks for raising this issue! \r\n\r\nYes, this was a bug introduced in #25464. A fix was merged in yesterday in #25643 on main. Installing from source should resolve the issue. ", "Thank you, all! That fixes it :). \r\n\r\nFor others, run `!pip install git+https://github.com/huggingface/transformers` to install from source. \r\n ", "@govindrai The fix has now been included as part of a [patch release](https://github.com/huggingface/transformers/releases/tag/v4.32.1) and can be directly installed from pypi with `pip install transformers`" ]
1,692
1,693
1,692
NONE
null
### System Info transformers 4.32.0 python 3.10 using google colab ### Who can help? @amyeroberts, @sgugger, @stevhliu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Follow the [official tutorial](https://huggingface.co/docs/transformers/v4.32.0/en/tasks/object_detection) instructions. After applying a transform function to the CPPE5 dataset, trying to load a single element fails with the following error: ``` TypeError Traceback (most recent call last) [<ipython-input-75-e2a7222a96ab>](https://localhost:8080/#) in <cell line: 22>() 20 21 cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann) ---> 22 cppe5["train"][15] 9 frames [/usr/local/lib/python3.10/dist-packages/transformers/models/detr/image_processing_detr.py](https://localhost:8080/#) in <listcomp>(.0) 1285 if annotations is not None: 1286 annotations = [ -> 1287 self.normalize_annotation( 1288 annotation, get_image_size(image, input_data_format), input_data_format=input_data_format 1289 ) TypeError: DetrImageProcessor.normalize_annotation() got an unexpected keyword argument 'input_data_format' ``` Here's a colab notebook so you can test/run it easily. Go to the last step to view the error: https://colab.research.google.com/drive/1WtwFcW7_N5N7VLsoswx2U4O8CBkCRNc4?usp=sharing ### Expected behavior I expect to not encounter a failure and continue the tutorial.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25665/comments
https://api.github.com/repos/huggingface/transformers/issues/25665/events
https://github.com/huggingface/transformers/pull/25665
1,862,019,174
PR_kwDOCUB6oc5Yh1Jo
25,665
fix gpt_bigcode HCCL issue occurring in DDP
{ "login": "anindya-saha", "id": 3349535, "node_id": "MDQ6VXNlcjMzNDk1MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/3349535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anindya-saha", "html_url": "https://github.com/anindya-saha", "followers_url": "https://api.github.com/users/anindya-saha/followers", "following_url": "https://api.github.com/users/anindya-saha/following{/other_user}", "gists_url": "https://api.github.com/users/anindya-saha/gists{/gist_id}", "starred_url": "https://api.github.com/users/anindya-saha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anindya-saha/subscriptions", "organizations_url": "https://api.github.com/users/anindya-saha/orgs", "repos_url": "https://api.github.com/users/anindya-saha/repos", "events_url": "https://api.github.com/users/anindya-saha/events{/privacy}", "received_events_url": "https://api.github.com/users/anindya-saha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@regisss please help review this. thank you.", "@anindya-saha Closing this PR as this change should be made in optimum-habana only (HCCL doesn't manage booleans while NCCL does). You can follow up my comment there: https://github.com/huggingface/optimum-habana/issues/350#issuecomment-1688878200" ]
1,692
1,692
1,692
NONE
null
Fixes # [Error: Getting size for given data type is not supported while fine tuning starcoder model on optimum-habana](https://github.com/huggingface/optimum-habana/issues/350) https://github.com/huggingface/transformers/blob/6a314ea7cd01a78a58403bc83e7c637ef83e6b26/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py#L517 this line causes HCCL to fail with the below errors. Similar changes were discussed in the issue https://github.com/huggingface/optimum-habana/issues/350 ```bash Training... Training... Training... terminate called after throwing an instance of 'c10::Error' what(): Getting size for given data type is not supported: 0 Exception raised from getHCCLDataSize at /npu-stack/pytorch-integration/python_packages/habana_frameworks/torch/distributed/hccl/ProcessGroupHCCL.cpp:128 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7ff0b09bd53c in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7ff0b098310c in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/torch/lib/libc10.so) frame #2: <unknown function> + 0x544ea (0x7ff0b02f84ea in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/habana_frameworks/torch/distributed/_hccl_C.so) frame #3: habana_helpers::JobThread::threadFunction() + 0x128 (0x7ff020da6ae8 in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/habana_frameworks/torch/lib/libhabana_pytorch_plugin.so) frame #4: <unknown function> + 0xd6df4 (0x7ff0b47dedf4 in /lib/x86_64-linux-gnu/libstdc++.so.6) frame #5: <unknown function> + 0x8609 (0x7ff0b4ab5609 in /lib/x86_64-linux-gnu/libpthread.so.0) frame #6: clone + 0x43 (0x7ff0b4bef133 in /lib/x86_64-linux-gnu/libc.so.6) Internal Error: Received signal - Aborted terminate called after throwing an instance of 'c10::Error' what(): Getting size for given data type is not supported: 0 Exception raised from getHCCLDataSize at /npu-stack/pytorch-integration/python_packages/habana_frameworks/torch/distributed/hccl/ProcessGroupHCCL.cpp:128 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f881daf453c in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f881daba10c in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/torch/lib/libc10.so) frame #2: <unknown function> + 0x544ea (0x7f88161634ea in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/habana_frameworks/torch/distributed/_hccl_C.so) frame #3: habana_helpers::JobThread::threadFunction() + 0x128 (0x7f8816ee3ae8 in /home/devcloud/habanalabs-venv/lib/python3.8/site-packages/habana_frameworks/torch/lib/libhabana_pytorch_plugin.so) frame #4: <unknown function> + 0xd6df4 (0x7f8822904df4 in /lib/x86_64-linux-gnu/libstdc++.so.6) frame #5: <unknown function> + 0x8609 (0x7f8822bdb609 in /lib/x86_64-linux-gnu/libpthread.so.0) frame #6: clone + 0x43 (0x7f8822d15133 in /lib/x86_64-linux-gnu/libc.so.6) Internal Error: Received signal - Aborted terminate called after throwing an instance of 'c10::Error' 
what(): Getting size for given data type is not supported: 0 Exception raised from getHCCLDataSize at /npu-stack/pytorch-integration/python_packages/habana_frameworks/torch/distributed/hccl/ProcessGroupHCCL.cpp:128 (most recent call first): ... Internal Error: Received signal - Aborted -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun noticed that process rank 2 with PID 0 on node idc382 exited on signal 6 (Aborted). ```
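As resolved in the comments, the root cause is that HCCL, unlike NCCL, does not handle boolean tensors, so the change belongs in optimum-habana rather than transformers. The general workaround pattern is to cast any `bool` tensor to a supported dtype around the collective; the following is an illustrative sketch of that pattern only, not the actual optimum-habana patch:

```python
import torch

# HCCL cannot size bool tensors, while NCCL can. Cast to uint8 before the
# collective and restore afterwards. The broadcast call is commented out
# because it needs an initialized distributed process group to actually run.
mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))
buf = mask.to(torch.uint8)
# torch.distributed.broadcast(buf, src=0)
mask = buf.to(torch.bool)
```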
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25665", "html_url": "https://github.com/huggingface/transformers/pull/25665", "diff_url": "https://github.com/huggingface/transformers/pull/25665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25665.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25664/comments
https://api.github.com/repos/huggingface/transformers/issues/25664/events
https://github.com/huggingface/transformers/pull/25664
1,861,988,454
PR_kwDOCUB6oc5YhuWv
25,664
[`GPTNeo`] Add input_embeds functionality to gpt_neo Causal LM
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> yeah, a test would be helpful, at least on the models that have `inputs_embeds` in the signature of this fn! Sloppy reviewers (me!) miss issues like this one\r\n> \r\n> cc @ydshieh -- lmk if you'd like to add it, or if you'd prefer me to add it :)\r\n\r\nI would appreciate you do it (you have super clear of the context already) - happy to review the PR !" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Follow-up to PR #25659, where model inputs were updated with the input ids, meaning both input ids and input embeds were passed.
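For context, the usual `prepare_inputs_for_generation` pattern this aligns with: feed `inputs_embeds` only on the first generation step, then switch to `input_ids` once a cache exists, so the two are never passed together. A sketch of the pattern, not the exact diff:

```python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs):
    # inputs_embeds can only seed the first forward pass; after that the
    # freshly generated ids (plus the KV cache) drive the model. Updating
    # model_inputs with input_ids unconditionally passed both at once.
    if inputs_embeds is not None and past_key_values is None:
        model_inputs = {"inputs_embeds": inputs_embeds}
    else:
        model_inputs = {"input_ids": input_ids}
    model_inputs["past_key_values"] = past_key_values
    return model_inputs
```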
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25664", "html_url": "https://github.com/huggingface/transformers/pull/25664", "diff_url": "https://github.com/huggingface/transformers/pull/25664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25664.patch", "merged_at": 1692769760000 }
https://api.github.com/repos/huggingface/transformers/issues/25663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25663/comments
https://api.github.com/repos/huggingface/transformers/issues/25663/events
https://github.com/huggingface/transformers/issues/25663
1,861,910,383
I_kwDOCUB6oc5u-n9v
25,663
Llama2 vocabulary duplicates
{ "login": "don-tpanic", "id": 32969920, "node_id": "MDQ6VXNlcjMyOTY5OTIw", "avatar_url": "https://avatars.githubusercontent.com/u/32969920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/don-tpanic", "html_url": "https://github.com/don-tpanic", "followers_url": "https://api.github.com/users/don-tpanic/followers", "following_url": "https://api.github.com/users/don-tpanic/following{/other_user}", "gists_url": "https://api.github.com/users/don-tpanic/gists{/gist_id}", "starred_url": "https://api.github.com/users/don-tpanic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/don-tpanic/subscriptions", "organizations_url": "https://api.github.com/users/don-tpanic/orgs", "repos_url": "https://api.github.com/users/don-tpanic/repos", "events_url": "https://api.github.com/users/don-tpanic/events{/privacy}", "received_events_url": "https://api.github.com/users/don-tpanic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should try using:\r\n```python \r\nIn [4]: tokenizer.convert_ids_to_tokens(29909)\r\nOut[4]: 'A'\r\n\r\nIn [5]: tokenizer.convert_ids_to_tokens(319)\r\nOut[5]: '▁A'\r\n```\r\nThe decode function remove the extra `'▁'`", "> You should try using:\r\n> \r\n> ```python\r\n> In [4]: tokenizer.convert_ids_to_tokens(29909)\r\n> Out[4]: 'A'\r\n> \r\n> In [5]: tokenizer.convert_ids_to_tokens(319)\r\n> Out[5]: '▁A'\r\n> ```\r\n> \r\n> The decode function remove the extra `'▁'`\r\n\r\nThanks for your prompt reply @ArthurZucker!! Can I ask a follow-up question on this?\r\n\r\nI also experimented the following but get unexpected results - \r\n```\r\ntoken_id_A1 = tokenizer.convert_tokens_to_ids('A')\r\ntoken_id_A2 = tokenizer.convert_tokens_to_ids('_A')\r\nprint('token_id_A1: ', token_id_A1)\r\nprint('token_id_A2: ', token_id_A2)\r\n```\r\nWhile `A` is mapped to 29909, `_A` is mapped to `0` which is unknown token, but shouldn't it be `319`?\r\n\r\nI also tried \r\n```\r\ntoken_id_A1 = tokenizer(\"A\", return_tensors='pt').input_ids\r\n```\r\nfrom which I get \r\n```\r\ntensor([[ 1, 319]])\r\n```\r\nWhile `1` is mapped to `<s>`, however `319` as your example shows should be mapped to `_A` not `A`\r\n\r\nThanks again I hope my question makes sense!", "The reason is because `'_A'` and `'▁A'` are not the same tokens. `_` is the under score (first case) while `'▁'` is the special SPIECE_UNDERLINE token. ", "> The reason is because `'_A'` and `'▁A'` are not the same tokens. `_` is the under score (first case) while `'▁'` is the special SPIECE_UNDERLINE token.\r\n\r\nThanks for your clarification. Very helpful! Closing the issue for now." ]
1,692
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.16.4 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: GTX 3090ti - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import transformers llm = 'meta-llama/Llama-2-7b-chat-hf' tokenizer = transformers.AutoTokenizer.from_pretrained( llm, use_auth_token=True, ) print('id 29909:', tokenizer.decode([29909])) print('id 319:', tokenizer.decode([319])) ``` I get the output ``` id 29909: A id 319: A ``` In fact, it appears the vocab has many duplicates: ``` counter = {} for id in range(32000): counter[tokenizer.decode([id])] = counter.get(tokenizer.decode([id]), 0) + 1 print(len(counter)) ``` I get ``` 26519 ``` ### Expected behavior I was expecting all ids (32k of them) to map to a unique subword, which is not the case in my little example. Did I miss something here? Thanks in advance! (An illustrative sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25663/timeline
completed
null
null
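A hedged illustration of the tokenizer behaviour discussed in issue #25663 above: `decode()` strips the SentencePiece `'▁'` prefix, so several ids can decode to the same string, while `convert_ids_to_tokens()` keeps them distinct. A minimal sketch, assuming access to the gated `meta-llama/Llama-2-7b-chat-hf` checkpoint:

```python
# Sketch only: shows why decode() collapses ids 29909 and 319 to the same
# string while convert_ids_to_tokens() does not. Requires an access token
# for the gated Llama-2 repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", use_auth_token=True
)

print(tokenizer.decode([29909]))               # 'A'
print(tokenizer.decode([319]))                 # 'A'  -- the leading '▁' is stripped
print(tokenizer.convert_ids_to_tokens(29909))  # 'A'
print(tokenizer.convert_ids_to_tokens(319))    # '▁A' -- distinct token, same decoded text
```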
https://api.github.com/repos/huggingface/transformers/issues/25662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25662/comments
https://api.github.com/repos/huggingface/transformers/issues/25662/events
https://github.com/huggingface/transformers/issues/25662
1,861,810,652
I_kwDOCUB6oc5u-Pnc
25,662
Confusing example in Object Detection Task
{ "login": "govindrai", "id": 13859249, "node_id": "MDQ6VXNlcjEzODU5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/govindrai", "html_url": "https://github.com/govindrai", "followers_url": "https://api.github.com/users/govindrai/followers", "following_url": "https://api.github.com/users/govindrai/following{/other_user}", "gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}", "starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/govindrai/subscriptions", "organizations_url": "https://api.github.com/users/govindrai/orgs", "repos_url": "https://api.github.com/users/govindrai/repos", "events_url": "https://api.github.com/users/govindrai/events{/privacy}", "received_events_url": "https://api.github.com/users/govindrai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ " @MKhalusova\r\n\r\nCould you take a look 🙏? I see you added (or modified) this block." ]
1,692
1,695
1,695
NONE
null
### System Info Python 3.10, latest datasets/transformers ### Who can help? @sgugger, @stevhliu and @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Small nit: In https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/object_detection.md#load-the-cppe-5-dataset, there is a snippet that draws bboxes onto an image: ``` for i in range(len(annotations["id"])): box = annotations["bbox"][i - 1] class_idx = annotations["category"][i - 1] x, y, w, h = tuple(box) draw.rectangle((x, y, x + w, y + h), outline="red", width=1) draw.text((x, y), id2label[class_idx], fill="white") ``` ### Expected behavior Wherever the example queries for the index `i - 1`, I believe it should query at index `i`. The answer does come out to be the same since Python arrays work with negative indices, but it makes the example confusing, because it's not clear why the indexing starts at the last element and then walks through the first, second, and third. (An illustrative sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25662/timeline
completed
null
null
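A minimal sketch of the clarified indexing requested in issue #25662 above, with `i` used directly instead of `i - 1`; the `image`, `annotations`, and `id2label` objects are assumed to come from the CPPE-5 example in the task guide:

```python
# Sketch only: draws each bbox at index i (rather than i - 1, which only
# gives the same result by accident via Python's negative indexing).
from PIL import Image, ImageDraw

def draw_annotations(image: Image.Image, annotations: dict, id2label: dict) -> Image.Image:
    draw = ImageDraw.Draw(image)
    for i in range(len(annotations["id"])):
        x, y, w, h = tuple(annotations["bbox"][i])
        draw.rectangle((x, y, x + w, y + h), outline="red", width=1)
        draw.text((x, y), id2label[annotations["category"][i]], fill="white")
    return image
```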
https://api.github.com/repos/huggingface/transformers/issues/25661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25661/comments
https://api.github.com/repos/huggingface/transformers/issues/25661/events
https://github.com/huggingface/transformers/pull/25661
1,861,789,624
PR_kwDOCUB6oc5YhCyu
25,661
Update doc toctree
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? As discussed offline, it looks like the auto-fix has not been triggered for some (long) time.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25661", "html_url": "https://github.com/huggingface/transformers/pull/25661", "diff_url": "https://github.com/huggingface/transformers/pull/25661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25661.patch", "merged_at": 1692737936000 }
https://api.github.com/repos/huggingface/transformers/issues/25660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25660/comments
https://api.github.com/repos/huggingface/transformers/issues/25660/events
https://github.com/huggingface/transformers/pull/25660
1,861,774,001
PR_kwDOCUB6oc5Yg_Qf
25,660
[WIP] Add HTDemucs
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25660). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Adds HTDemucs, required for the MusicGen melody model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25660/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25660", "html_url": "https://github.com/huggingface/transformers/pull/25660", "diff_url": "https://github.com/huggingface/transformers/pull/25660.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25660.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25659/comments
https://api.github.com/repos/huggingface/transformers/issues/25659/events
https://github.com/huggingface/transformers/pull/25659
1,861,760,251
PR_kwDOCUB6oc5Yg8Oy
25,659
Add input_embeds functionality to gpt_neo Causal LM
{ "login": "gaasher", "id": 85761680, "node_id": "MDQ6VXNlcjg1NzYxNjgw", "avatar_url": "https://avatars.githubusercontent.com/u/85761680?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaasher", "html_url": "https://github.com/gaasher", "followers_url": "https://api.github.com/users/gaasher/followers", "following_url": "https://api.github.com/users/gaasher/following{/other_user}", "gists_url": "https://api.github.com/users/gaasher/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaasher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaasher/subscriptions", "organizations_url": "https://api.github.com/users/gaasher/orgs", "repos_url": "https://api.github.com/users/gaasher/repos", "events_url": "https://api.github.com/users/gaasher/events{/privacy}", "received_events_url": "https://api.github.com/users/gaasher/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "(CI can be fixed by running `make fixup` on transformers root, then committing the changes)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25659). All of your documentation changes will be reflected on that endpoint.", "> (CI can be fixed by running `make fixup` on transformers root, then committing the changes)\r\n\r\nJust did this!" ]
1,692
1,692
1,692
CONTRIBUTOR
null
This PR extends https://github.com/huggingface/transformers/pull/21405, https://github.com/huggingface/transformers/pull/21889, and https://github.com/huggingface/transformers/pull/22916 by @gante to GPTNeoCausalLM models, allowing these models to accept inputs_embeds when generating (an illustrative sketch follows this record). ## Who can review? @gante @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25659", "html_url": "https://github.com/huggingface/transformers/pull/25659", "diff_url": "https://github.com/huggingface/transformers/pull/25659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25659.patch", "merged_at": 1692728918000 }
https://api.github.com/repos/huggingface/transformers/issues/25658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25658/comments
https://api.github.com/repos/huggingface/transformers/issues/25658/events
https://github.com/huggingface/transformers/pull/25658
1,861,712,779
PR_kwDOCUB6oc5Ygx40
25,658
fix wrong path in some doc
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pong Sir Arthur 😄 ", "can give it a try!", "It passes, but doctest doesn't really run any test from that doc file. (nothing is collected)\r\n\r\nSee https://app.circleci.com/pipelines/github/huggingface/transformers/71009/workflows/d047f9a7-21ee-457b-b921-05eefbeedd88/jobs/892220/steps", "_The documentation is not available anymore as the PR was closed or merged._", "Most of them are `bash`. Some are `python`, but as `python no-style`. There is a single block with `python`, but that is not really to be tested as a doc string example.\r\n\r\nI didn't check the regular expression to see why those python-related blocks are not tested.", "It's probably that file has `cuda`, and for `md` file, if a `cuda` shows up, all the tests in that file won't be collected (by the new logic of doc test you and me implemented)" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? The examples use a path that was (likely) moved to its current location some time ago. Let's update those.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25658", "html_url": "https://github.com/huggingface/transformers/pull/25658", "diff_url": "https://github.com/huggingface/transformers/pull/25658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25658.patch", "merged_at": 1692772470000 }
https://api.github.com/repos/huggingface/transformers/issues/25657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25657/comments
https://api.github.com/repos/huggingface/transformers/issues/25657/events
https://github.com/huggingface/transformers/issues/25657
1,861,486,793
I_kwDOCUB6oc5u9AjJ
25,657
Multi-objective HP Optimization Not Possible
{ "login": "marcusinthesky", "id": 26429489, "node_id": "MDQ6VXNlcjI2NDI5NDg5", "avatar_url": "https://avatars.githubusercontent.com/u/26429489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcusinthesky", "html_url": "https://github.com/marcusinthesky", "followers_url": "https://api.github.com/users/marcusinthesky/followers", "following_url": "https://api.github.com/users/marcusinthesky/following{/other_user}", "gists_url": "https://api.github.com/users/marcusinthesky/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcusinthesky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcusinthesky/subscriptions", "organizations_url": "https://api.github.com/users/marcusinthesky/orgs", "repos_url": "https://api.github.com/users/marcusinthesky/repos", "events_url": "https://api.github.com/users/marcusinthesky/events{/privacy}", "received_events_url": "https://api.github.com/users/marcusinthesky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sywangyi 😉 ", "could we set direction value to None if kwargs contains directions to solve the issue? see https://github.com/optuna/optuna/blob/7b3824aaab38843fdfbfc67ea93d6da446dc1548/optuna/study/study.py#L1125\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2514\r\nif direction is default value, and directions is passed through kwargs. directions will be used.", "Feel free to open a PR and ping me ! 🤗 ", "@ArthurZucker do we decide to support multi-objectives HPO in transformers? I check the code, not only the input to optuna create_study need to be addressed, but also output in https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/integration_utils.py#L211 should be changed as well. Currently for single objective, only one best trial will be returned. However, for multi-objectives, the best trails in pareto front need to be all returned, something like the list of BestRun. Also the value in BestRun should be switched to the values to indicate the values in multi-directions.", "You can do pretty much whatever you want there, since you are the author, if you think this is a good addition I'll trust you 😉 ", "I am not the author, I just add HPO DDP feature, I think author is from https://github.com/huggingface/transformers/pull/6576", "Oups, but anyway as a main contributor, feel free to add it if you feel like it! 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing as #25969 fixed it." ]
1,692
1,695
1,695
NONE
null
https://github.com/huggingface/transformers/blob/41aef33758ae166291d72bc381477f2db84159cf/src/transformers/integrations.py#L208C10-L208C10 Because the current implementation always passes in a direction, an error is thrown when a user tries to pass in Optuna's `directions` argument for multi-objective optimization. To solve this issue, either: 1. HuggingFace's `direction` argument must allow a list to be passed, or 2. HuggingFace's default direction must not be passed to Optuna if a `directions` kwarg is passed. (An illustrative sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25657/timeline
completed
null
null
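A minimal sketch of the fix proposed in issue #25657 above (not the code that was eventually merged in #25969): only forward the default `direction` to Optuna when the caller has not already supplied `directions` for multi-objective optimization:

```python
# Sketch only: optuna rejects studies that set both `direction` and
# `directions`, so the default direction must be dropped when `directions`
# is passed through kwargs.
import optuna

def run_hp_search(direction: str = "minimize", **kwargs) -> optuna.Study:
    if "directions" in kwargs:
        return optuna.create_study(**kwargs)  # multi-objective: let `directions` win
    return optuna.create_study(direction=direction, **kwargs)

study = run_hp_search(directions=["minimize", "maximize"])  # no ValueError
```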
https://api.github.com/repos/huggingface/transformers/issues/25656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25656/comments
https://api.github.com/repos/huggingface/transformers/issues/25656/events
https://github.com/huggingface/transformers/pull/25656
1,861,233,817
PR_kwDOCUB6oc5YfJIA
25,656
[`SPM`] Patch `spm` Llama and T5
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? An edge case was found, so we need to find a more reliable way to make sure we don't strip the first token, but also have the correct length. Previously, we would strip `>` from a tokenization if `">>"` exists in the vocab (and `unk_token = <unk>`). Before: ```python tokenizer.tokenize("Hey <s>>") ['▁Hey', '▁', '<s>'] ``` After: ```python tokenizer.tokenize("Hey <s>>") ['▁Hey', '▁', '<s>', '>'] ``` The new logic makes more sense too. # Why was this not caught before? Because the common spm tests only use the `sample_vocab`, and since it is an edge case, it did not appear in the different integration tests. # TODO Add more and more edge cases.... Let's make sure this is the last one. Will add a PR for tests, cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25656/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25656", "html_url": "https://github.com/huggingface/transformers/pull/25656", "diff_url": "https://github.com/huggingface/transformers/pull/25656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25656.patch", "merged_at": 1692767804000 }
https://api.github.com/repos/huggingface/transformers/issues/25655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25655/comments
https://api.github.com/repos/huggingface/transformers/issues/25655/events
https://github.com/huggingface/transformers/pull/25655
1,861,204,605
PR_kwDOCUB6oc5YfCr8
25,655
Adds `TRANSFORMERS_TEST_BACKEND`
{ "login": "vvvm23", "id": 44398246, "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvvm23", "html_url": "https://github.com/vvvm23", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "repos_url": "https://api.github.com/users/vvvm23/repos", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ydshieh ", "> I would like to see a real example of usage, please.\r\n\r\nI have now provided example usage with the `torch_npu` backend. This is actually already in upstream so wouldn't be needed, but usage would be the same for backends not in upstream.\r\n\r\n> I am also wondering if it's always only one backend needed. What happens if we need multiple ones?\r\n\r\nI feel just having one is fine for now, I am struggling to imagine a use-case for multiple, unless a single worker somehow had CPU, GPU, and other backends all on one physical machine.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25655). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Allows specifying an arbitrary additional import to run after the first `import torch`. This is useful for some custom backends that require additional imports to trigger backend registration with upstream torch. See https://github.com/pytorch/benchmark/pull/1805 for a similar change in `torchbench`. If the specified backend does not exist, we throw a helpful error. I have updated the docs to include this new variable. (An illustrative sketch follows this record.) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger would you mind taking a look at this? It relates to my previous PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25655", "html_url": "https://github.com/huggingface/transformers/pull/25655", "diff_url": "https://github.com/huggingface/transformers/pull/25655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25655.patch", "merged_at": 1692716893000 }
https://api.github.com/repos/huggingface/transformers/issues/25654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25654/comments
https://api.github.com/repos/huggingface/transformers/issues/25654/events
https://github.com/huggingface/transformers/issues/25654
1,861,164,829
I_kwDOCUB6oc5u7x8d
25,654
Make 🤗Transformers tests device agnostic
{ "login": "vvvm23", "id": 44398246, "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvvm23", "html_url": "https://github.com/vvvm23", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "repos_url": "https://api.github.com/users/vvvm23/repos", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "See https://github.com/huggingface/transformers/pull/25655 https://github.com/huggingface/transformers/pull/25571https://github.com/huggingface/transformers/pull/25506 https://github.com/huggingface/diffusers/pull/4673 for further context and existing changes to help make testing device agnostic.", "cc @ydshieh ", "Hi @vvvm23 Thank you for this proposal! This sound an impactful thing to do 🚀 !\r\n\r\nFor a draft PR, it would be very nice to keep the change as minimal as possible, especially not to change all the tests but just a few tests so we can see how things work and how them are applied to those tests. 🙏 ", "That sounds reasonable to start with. I'll pick a few tests that cover all the proposed features (certain models are simpler than others, and won't highlight all the changes required). I'll try and get a draft PR by end of this week.", "Hi @ydshieh, can I get your thoughts on a couple design choices?\r\n\r\nFirst, certain device agnostic checks are as simple as simply trying to use the specified feature. For example, to test whether a specific device can use fp16, try and execute an op in half precision, and catch the exception. Does this approach work for you? A similar thing is already done to check if the device exists (attempt to create device, catch the exception) and could work here too.\r\n\r\nFor more complex device agnostic functions (such as clearing cache, setting PRNG) we will need a way to define new devices without having to upstream the device itself. I was thinking of going for the following approach:\r\n1. Introduce a new environment variable `TRANSFORMERS_TEST_DEVICE_SPEC` which points a Python file.\r\n2. Within the file, have an importable dictionary mapping function key names to actual backend callables.\r\n3. If such a variable exists, `testing_utils` will add the mappings to its own internal register of devices to backend functions, which will be used by the device agnostic functions.\r\n - this will only be `cpu` and `gpu` by default.\r\n\r\nIf a specific entry does not exist in `TRANSFORMERS_TEST_DEVICE_SPEC` we can use some sane default functions, or a no-op, or simply throw an error when attempting to use the device agnostic function associated with the missing entry.\r\n\r\nHow does that approach sound to you? Any suggestions that would help make the PR fit into the current HF testing structure? Thanks~", "Hi @vvvm23 Before I take a deeper read, could you provide the comment along with some links to the code base, so I can understand easier.\r\n\r\nFor example:\r\n\r\n- a link to the lines you mentioned `A similar thing is already done to check if the device exists`\r\n- a link to a test that `test whether a specific device can use fp16\r\n- a link to a (or some) more complex device agnostic functions (such as clearing cache, setting PRNG) \r\n - and elaborate a bit more about `we will need a way to define new devices without having to upstream the device itself`\r\n\r\nAlthough I am the one within the team focus on the testing, many of the tests are written before I joined. 
So a bit more detailed description would be easier for me to give my thoughts 🙏 please (I know it will takes you some more time to write).\r\n\r\nThank you in advance!\r\n", "No worries, I wrote my previous comment in a bit of a rush, so in retrospect it wasn't too clear.\r\n\r\n> a link to the lines you mentioned A similar thing is already done to check if the device exists\r\n\r\nPlease see these PRs which make this change -> https://github.com/huggingface/transformers/pull/25506 https://github.com/huggingface/diffusers/pull/4673\r\n\r\n> a link to a test that `test whether a specific device can use fp16\r\n\r\nFor example, in `test_modeling_opt.py` https://github.com/huggingface/transformers/blob/35c570c80edb9f56aa8339c03d3975847a85cb9d/tests/models/opt/test_modeling_opt.py#L294 this test will only use half precision if CUDA is the target device, even if there is half precision capabilities on `torch_device`.\r\n\r\nAnother example is here https://github.com/huggingface/transformers/blob/35c570c80edb9f56aa8339c03d3975847a85cb9d/tests/models/reformer/test_modeling_reformer.py#L571 which would still run on a device that _can't_ do half precision.\r\n\r\n> a link to a (or some) more complex device agnostic functions (such as clearing cache, setting PRNG) \r\n\r\nOne example can be found here https://github.com/huggingface/transformers/blob/35c570c80edb9f56aa8339c03d3975847a85cb9d/tests/models/codegen/test_modeling_codegen.py#L501 where we set the seed for the CUDA device, but in our desired device-agnostic context, we would rather have a generic \"accelerate set seed\" function.\r\n\r\n> and elaborate a bit more about we will need a way to define new devices without having to upstream the device itself\r\n\r\nSuppose we implement the function `accelerator_manual_seed`. When called, this will look up the current test device in use and dispatch to the correct function (if `torch_device == cuda` dispatch to `torch.cuda.manual_seed`.\r\n\r\nHowever, if we try a custom device, the function won't know which function to dispatch to. This isn't the responsibility of Huggingface to solve (as there could be countless custom devices) but rather the user to _register_ their device and the function to use when we call `accelerator_manual_seed`. Currently I feel the best way to do this is to specify an environment variable that points to a python file that defines the backend functions. This way, a user can test their new device on the library without upstreaming changes to Huggingface.\r\n\r\nI have begun work on this [here](https://github.com/graphcore/transformers-fork/blob/bdc952838479d249882f11296e84c9b502a20ade/src/transformers/testing_utils.py#L2118) but it isn't very robust yet.\r\n\r\nLet me know if you need me to clear anything else up~", "Hi, Thanks for the writing-up!\r\n\r\nWhen I read the `fp16` part, I have a feeling that why not just pre-determine which device can (or can't) use fp16 explicitly in `src/transformers/testing_utils.py`, just like what you mentioned in the description `testing_utils.accelerator_is_fp16_available(torch_device)`.\r\n\r\nNote the design of `accelerator_is_fp16_available` should not try to create/use the device in each call, rather we should cache the result and reuse it. I was originally thinking a `if/elif` statements for such method, but you said `here could be countless custom devices` afterward ...\r\n\r\nRegarding `accelerator_manual_seed`, I am not sure we really will have so many different devices. 
If this is the case, your proposal of having a external file and let user to specify makes sense.\r\n\r\nBut maybe we can start the task easily and not using external file. Just put everything inside `testing_utils`. WDYT?", "> Note the design of accelerator_is_fp16_available should not try to create/use the device in each call, rather we should cache the result and reuse it.\r\n\r\nThis is a fair point. I noticed in some other functions that check for device availability, they are wrapped in `@lru_cache` which should match this behaviour. I will add this to the function calls.\r\n\r\n> I was originally thinking a if/elif statements for such method, but you said here could be countless custom devices afterward\r\n\r\nTo clarify, I am only intending _one_ device to be used for a single set of tests, so there won't be countless devices in use in a single session.\r\n\r\n> But maybe we can start the task easily and not using external file. Just put everything inside testing_utils. WDYT?\r\n\r\nFor our purposes (and I am assuming others too) the main point is to be able to specifiy a new backend or device _without_ having to upstream anything into HF. So if we put everything inside `testing_utils` this defeats the point, even if this is only to start. So, I think for the initial PR we should support this. Let me know what you think~", "> For our purposes (and I am assuming others too) the main point is to be able to specifiy a new backend or device without having to upstream anything into HF. So if we put everything inside testing_utils this defeats the point, even if this is only to start. So, I think for the initial PR we should support this. Let me know what you think~\r\n\r\nYeah, I agree. I would say something defined in HF (only for `cuda` and `cpu`), and for 3rd party, we use the definitions from external file(s) as you suggested. The main point of this is `cuda` and `cpu` is known to everyone and let's not put the definitions related to them outside `transformers`, so it's easier to find the info.", "Yep! That was my original plan, we don't want to put any burden on HF into maintaining additional devices that are only known to a small set of people 👍 We will have the definitions for `cpu` and `cuda` within `transformers` but have support for external files.", "@ydshieh please see the above draft PR 🙂 \r\n\r\nI am hoping the CI passes without issue as there should be no effect on Huggingface CI runners with these changes.", "The proof-of-concept for device-agnostic testing has been merged into the master branch :tada:. And I'd like to use this issue as a centralized place to list and track work on making the rest of the testing suites device agnostic.\r\nBelow is the list of test suites we would work on:\r\n\r\n- [x] `examples` https://github.com/huggingface/transformers/pull/27081\r\n- [x] `trainer` https://github.com/huggingface/transformers/pull/27131\r\n- [x] `pipelines` https://github.com/huggingface/transformers/pull/27129\r\n- [x] `deepspeed` https://github.com/huggingface/transformers/pull/27342\r\n- [x] `extended` https://github.com/huggingface/transformers/pull/27131\r\n- [x] `fsdp` https://github.com/huggingface/transformers/pull/27120\r\n- [x] `generation` https://github.com/huggingface/transformers/pull/27146\r\n- [x] `peft_integration` (Very compatible with third-party accelerators 🤗)\r\n- [x] `models` https://github.com/huggingface/transformers/pull/27146\r\n\r\n\r\n" ]
1,692
1,700
null
CONTRIBUTOR
null
### Feature request We would like to make the testing suite in this repository more device agnostic. It seems there has already been some work towards this; however, the majority of tests will still only run on either GPU or CPU. This would require a number of changes to all tests present in the library; however, it would not alter the behaviour of Huggingface's CI runners. A non-exhaustive list of changes would be: - Add a new test decorator `@require_torch_with_accelerator` that largely supersedes (but does not replace) `@require_torch_gpu`. This new decorator can be used for any device-agnostic test that we would like to accelerate. We would keep `@require_torch_gpu` for tests that truly require CUDA features, such as ones that check device memory utilisation (such as in model parallelism or lower precision tests) or use custom CUDA kernels (such as Flash Attention). - Certain tests could be made device agnostic quite easily, such as tests that only check for CUDA devices to enable fp16, tests that use backend specific PRNG initialisation, or tests that clear cache before executing. This could be done by adding device agnostic variants to `testing_utils.py` that compare the current device in use and dispatch to the appropriate backend specific function if available. - For example, rather than the comparison `torch_device == 'cuda'` to check if we can run with fp16, we could call a function `testing_utils.accelerator_is_fp16_available(torch_device)` or similar. Similar functions already exist to check for tf32 or bf16 support. - Crucially, in upstream we would only have settings for CUDA and CPU devices – as well as any other of your supported backends. However, we would expose functions to register your own device in user code so third parties can test custom backends without upstreaming changes. ### Motivation As Huggingface libraries and models make up a significant part of the current ML community, it makes sense when developing custom PyTorch backends to test against these model libraries as they cover a large proportion of most users' use cases. However, the current testing suite does not easily allow for custom devices – not without maintaining a custom private fork that needs to be continuously kept up to date with the upstream repository. This reason, and because the number of changes required is not especially significant, is why we are making this proposal. ### Your contribution We would write and submit a PR to implement these changes following discussion and approval with 🤗Transformers maintainers. I am collaborating with @joshlk and @arsalanu (An illustrative sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25654/timeline
null
null
null
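A hedged sketch of the dispatch pattern discussed in issue #25654 above; the registry layout and function names are illustrative, not the identifiers that were eventually merged:

```python
# Sketch only: per-device function registry plus a cached fp16 probe, as
# discussed in the thread. Third-party backends would extend the registry
# from an external spec file instead of patching transformers.
from functools import lru_cache

import torch

BACKEND_MANUAL_SEED = {
    "cuda": torch.cuda.manual_seed,
    "cpu": torch.manual_seed,
}

def accelerator_manual_seed(device: str, seed: int) -> None:
    BACKEND_MANUAL_SEED[device](seed)  # dispatch on the current test device

@lru_cache(maxsize=None)
def accelerator_is_fp16_available(device: str) -> bool:
    # Probe once and cache the result: try a half-precision op on the device.
    try:
        x = torch.zeros(2, 2, dtype=torch.float16, device=device)
        _ = x @ x
        return True
    except Exception:
        return False
```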
https://api.github.com/repos/huggingface/transformers/issues/25653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25653/comments
https://api.github.com/repos/huggingface/transformers/issues/25653/events
https://github.com/huggingface/transformers/pull/25653
1,861,125,733
PR_kwDOCUB6oc5YexVs
25,653
Generate: add missing logits processors docs
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I've confirmed that the previously missing docs are now properly rendered in the doc preview 👍 ", "@ArthurZucker re tests -- fully on board. If you're okay with it, I'd like to do it on a separate PR :) Ditto for the side bar on the left, it is in need of an update!", "@ArthurZucker ready for a re-review! \r\n\r\nTo recap, the previously reviewed version:\r\n- Added missing logits processor docs\r\n\r\nThis version also:\r\n- Updates the left-handed TOC for this page (splits the classes by framework, so it's easier to navigate; I've decided against enumerating the logits processors in the docs since the list is quite big)\r\n- Sorts the logits processors in the docs by alphabetical order\r\n- Adds missing output classes to the output class docs section\r\n\r\nIn parallel, other concerns raised in the PR are being tackled in another places:\r\n- #25692 Adds logits processors to the doctests", "(preview of the left-handed side TOC)\r\n<img width=\"471\" alt=\"Screenshot 2023-08-24 at 14 21 45\" src=\"https://github.com/huggingface/transformers/assets/12240844/997a2a9c-38ee-467f-a8b4-13b42a73f1c6\">\r\n" ]
1,692
1,692
1,692
MEMBER
null
# What does this PR do? Some logits processor classes were missing from the docs. This PR corrects it by: 1 - Adding all missing logits processors to the docs 2 - Adding missing top-level imports (to stay consistent with previously existing logits processors)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25653/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25653", "html_url": "https://github.com/huggingface/transformers/pull/25653", "diff_url": "https://github.com/huggingface/transformers/pull/25653.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25653.patch", "merged_at": 1692960977000 }
https://api.github.com/repos/huggingface/transformers/issues/25652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25652/comments
https://api.github.com/repos/huggingface/transformers/issues/25652/events
https://github.com/huggingface/transformers/pull/25652
1,861,116,354
PR_kwDOCUB6oc5YevSQ
25,652
Fix bloom add prefix space
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Fixes a typo from #25563 and adds a test
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25652", "html_url": "https://github.com/huggingface/transformers/pull/25652", "diff_url": "https://github.com/huggingface/transformers/pull/25652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25652.patch", "merged_at": 1692708613000 }
https://api.github.com/repos/huggingface/transformers/issues/25651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25651/comments
https://api.github.com/repos/huggingface/transformers/issues/25651/events
https://github.com/huggingface/transformers/issues/25651
1,860,937,910
I_kwDOCUB6oc5u66i2
25,651
Bug: RuntimeError: Tensors must be contiguous
{ "login": "yorhaha", "id": 42489990, "node_id": "MDQ6VXNlcjQyNDg5OTkw", "avatar_url": "https://avatars.githubusercontent.com/u/42489990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yorhaha", "html_url": "https://github.com/yorhaha", "followers_url": "https://api.github.com/users/yorhaha/followers", "following_url": "https://api.github.com/users/yorhaha/following{/other_user}", "gists_url": "https://api.github.com/users/yorhaha/gists{/gist_id}", "starred_url": "https://api.github.com/users/yorhaha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yorhaha/subscriptions", "organizations_url": "https://api.github.com/users/yorhaha/orgs", "repos_url": "https://api.github.com/users/yorhaha/repos", "events_url": "https://api.github.com/users/yorhaha/events{/privacy}", "received_events_url": "https://api.github.com/users/yorhaha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has been fixed in Accelerate, please install that library from source." ]
1,692
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: bf16 - use_cpu: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 4, 'gradient_clipping': 0.5, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @pacman ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I used my own dataset to train with a customized trainer class (inherited from `transformers.Seq2SeqTrainer`). It reported the error `RuntimeError: Tensors must be contiguous`. If I change the `_gpu_gather` function of `/venv/lib/python3.8/site-packages/accelerate/utils/operations.py` as follows, the error goes away: ``` def _gpu_gather(tensor): def _gpu_gather_one(tensor): if tensor.ndim == 0: tensor = tensor.clone()[None] ################################################### make tensors to be contiguous output_tensors = [torch.empty_like(tensor).contiguous() for _ in range(torch.distributed.get_world_size())] torch.distributed.all_gather(output_tensors, tensor.contiguous()) ################################################### return torch.cat(output_tensors, dim=0) return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True) ``` ### Expected behavior ``` File "/venv/lib/python3.8/site-packages/transformers/trainer.py", line 3147, in evaluation_loop logits = self.accelerator.gather_for_metrics((logits)) File "/venv/lib/python3.8/site-packages/accelerate/accelerator.py", line 2012, in gather_for_metrics tensor = self.gather(tensor) File "/venv/lib/python3.8/site-packages/accelerate/accelerator.py", line 1985, in gather return gather(tensor) File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 289, in gather return _gpu_gather(tensor) File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 269, in _gpu_gather return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True) File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 109, in recursively_apply return honor_type( File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 83, in honor_type return type(obj)(generator) File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 112, in <genexpr> recursively_apply( File "/venv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 128, in recursively_apply return func(data, *args, **kwargs) File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper return func(*args, **kwargs) File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2448, in all_gather work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be contiguous ``` (An illustrative sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25651/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/25650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25650/comments
https://api.github.com/repos/huggingface/transformers/issues/25650/events
https://github.com/huggingface/transformers/pull/25650
1,860,880,444
PR_kwDOCUB6oc5Yd64n
25,650
Put IDEFICS in the right section of the doc
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Put IDEFICS in the right section of the doc; it's not an RL model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25650/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25650/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25650", "html_url": "https://github.com/huggingface/transformers/pull/25650", "diff_url": "https://github.com/huggingface/transformers/pull/25650.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25650.patch", "merged_at": 1692693551000 }
https://api.github.com/repos/huggingface/transformers/issues/25649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25649/comments
https://api.github.com/repos/huggingface/transformers/issues/25649/events
https://github.com/huggingface/transformers/pull/25649
1,860,832,239
PR_kwDOCUB6oc5YdwJf
25,649
Pass the proper token to PEFT integration in auto classes
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? This fixes the token passed along to the PEFT integration in the auto classes, which resulted in errors for all users with private models.
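For context, the bug concerns how an authentication token travels from `from_pretrained` into the PEFT loading path. A minimal usage sketch; the repo id and token are placeholders, and on releases around 4.31 the argument may be spelled `use_auth_token` instead of `token`:

```python
from transformers import AutoModelForCausalLM

# Hypothetical private repo; requires a Hub token with read access.
model = AutoModelForCausalLM.from_pretrained(
    "my-org/private-peft-model",  # placeholder repo id
    token="hf_xxx",               # placeholder token, forwarded to the Hub
)
```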
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25649/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25649", "html_url": "https://github.com/huggingface/transformers/pull/25649", "diff_url": "https://github.com/huggingface/transformers/pull/25649.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25649.patch", "merged_at": 1692692036000 }
https://api.github.com/repos/huggingface/transformers/issues/25648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25648/comments
https://api.github.com/repos/huggingface/transformers/issues/25648/events
https://github.com/huggingface/transformers/issues/25648
1,860,818,854
I_kwDOCUB6oc5u6dem
25,648
Bug: dataclasses.FrozenInstanceError: cannot assign to field generation_config
{ "login": "vasuems", "id": 1922015, "node_id": "MDQ6VXNlcjE5MjIwMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1922015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vasuems", "html_url": "https://github.com/vasuems", "followers_url": "https://api.github.com/users/vasuems/followers", "following_url": "https://api.github.com/users/vasuems/following{/other_user}", "gists_url": "https://api.github.com/users/vasuems/gists{/gist_id}", "starred_url": "https://api.github.com/users/vasuems/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vasuems/subscriptions", "organizations_url": "https://api.github.com/users/vasuems/orgs", "repos_url": "https://api.github.com/users/vasuems/repos", "events_url": "https://api.github.com/users/vasuems/events{/privacy}", "received_events_url": "https://api.github.com/users/vasuems/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should report the issue on that repo. It is not possible to change the training arguments once instantiated, so the script you are using needs fixing :-)", "Thanks.", "Closing the issue." ]
1,692
1,692
1,692
NONE
null
### System Info Hi @julien-c I am getting the error dataclasses.FrozenInstanceError: cannot assign to field generation_config. Transformers version: latest, built from source. CUDA: 11.5 Python: 3.10 When I execute the script /scripts/finetune_llama2_guanaco_7b.sh from the qlora repository, I get the error below. qlora.py", line 841, in <module> train() qlora.py", line 694, in train training_args.generation_config = transformers.GenerationConfig(**vars(generation_args)) File "qlora/.venv/lib/python3.10/site-packages/transformers/training_args.py", line 1714, in __setattr__ raise FrozenInstanceError(f"cannot assign to field {name}") dataclasses.FrozenInstanceError: cannot assign to field generatio Can you please suggest a solution to fix this issue? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Clone the qlora repo https://github.com/artidoro/qlora.git. Execute the script under the scripts folder: finetune_llama2_guanaco_7b.sh. ### Expected behavior The script should train the default built-in model, but it throws the above error.
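For background, this failure is ordinary frozen-dataclass behaviour rather than anything qlora-specific. A minimal self-contained sketch of the error and the usual workaround (set the field at construction time, or build a new instance with `dataclasses.replace`); the `Args` class here is illustrative, not the real `TrainingArguments`:

```python
from dataclasses import dataclass, replace, FrozenInstanceError
from typing import Optional

@dataclass(frozen=True)
class Args:
    output_dir: str
    generation_config: Optional[dict] = None

args = Args(output_dir="out")
try:
    args.generation_config = {"max_new_tokens": 64}
except FrozenInstanceError as err:
    print(err)  # cannot assign to field 'generation_config'

# Workaround: build a new instance instead of mutating the frozen one.
args = replace(args, generation_config={"max_new_tokens": 64})
```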
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25648/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25647/comments
https://api.github.com/repos/huggingface/transformers/issues/25647/events
https://github.com/huggingface/transformers/pull/25647
1,860,808,734
PR_kwDOCUB6oc5YdrAX
25,647
docs: ko: llm_tutorial.md
{ "login": "harheem", "id": 49297157, "node_id": "MDQ6VXNlcjQ5Mjk3MTU3", "avatar_url": "https://avatars.githubusercontent.com/u/49297157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harheem", "html_url": "https://github.com/harheem", "followers_url": "https://api.github.com/users/harheem/followers", "following_url": "https://api.github.com/users/harheem/following{/other_user}", "gists_url": "https://api.github.com/users/harheem/gists{/gist_id}", "starred_url": "https://api.github.com/users/harheem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harheem/subscriptions", "organizations_url": "https://api.github.com/users/harheem/orgs", "repos_url": "https://api.github.com/users/harheem/repos", "events_url": "https://api.github.com/users/harheem/events{/privacy}", "received_events_url": "https://api.github.com/users/harheem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25647/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25647", "html_url": "https://github.com/huggingface/transformers/pull/25647", "diff_url": "https://github.com/huggingface/transformers/pull/25647.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25647.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25646/comments
https://api.github.com/repos/huggingface/transformers/issues/25646/events
https://github.com/huggingface/transformers/pull/25646
1,860,804,877
PR_kwDOCUB6oc5YdqK9
25,646
[MINOR:TYPO]
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25646/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25646", "html_url": "https://github.com/huggingface/transformers/pull/25646", "diff_url": "https://github.com/huggingface/transformers/pull/25646.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25646.patch", "merged_at": 1692690885000 }
https://api.github.com/repos/huggingface/transformers/issues/25645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25645/comments
https://api.github.com/repos/huggingface/transformers/issues/25645/events
https://github.com/huggingface/transformers/issues/25645
1,860,462,892
I_kwDOCUB6oc5u5Gks
25,645
Why `attention_mask` in Bert is a 1D tensor doing 1D masking for keys, rather than 2D masking for both queries and keys
{ "login": "KatarinaYuan", "id": 43512683, "node_id": "MDQ6VXNlcjQzNTEyNjgz", "avatar_url": "https://avatars.githubusercontent.com/u/43512683?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KatarinaYuan", "html_url": "https://github.com/KatarinaYuan", "followers_url": "https://api.github.com/users/KatarinaYuan/followers", "following_url": "https://api.github.com/users/KatarinaYuan/following{/other_user}", "gists_url": "https://api.github.com/users/KatarinaYuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/KatarinaYuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KatarinaYuan/subscriptions", "organizations_url": "https://api.github.com/users/KatarinaYuan/orgs", "repos_url": "https://api.github.com/users/KatarinaYuan/repos", "events_url": "https://api.github.com/users/KatarinaYuan/events{/privacy}", "received_events_url": "https://api.github.com/users/KatarinaYuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! You should check in the modelling code, the attention mask is processed as such:\r\n```python \r\n # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]\r\n # ourselves in which case we just need to make it broadcastable to all heads.\r\n extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)\r\n\r\n # If a 2D or 3D attention mask is provided for the cross-attention\r\n # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]\r\n if self.config.is_decoder and encoder_hidden_states is not None:\r\n encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()\r\n encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)\r\n if encoder_attention_mask is None:\r\n encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)\r\n encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)\r\n else:\r\n encoder_extended_attention_mask = None\r\n```\r\nThis kind of question should be asked on [the forum](https://discuss.huggingface.co/) 😉 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,695
1,695
NONE
null
### System Info In Bert, as this tutorial (https://huggingface.co/docs/transformers/glossary) suggests, we should have an `attention_mask`, which masks out padding tokens. Intuitively, since the multiplication of queries and keys results in the attention probabilities (`attention_probs`), we should mask out the padding tokens for both queries and keys. But as shown in the following link, `attention_mask` is constructed as a 1D tensor. https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/tokenization_utils_base.py#L3450 And as shown in this link, **the 1D tensor `attention_mask` is added directly to the 2D tensor `attention_probs`**. https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/models/bert/modeling_bert.py#L352 The addition runs without errors thanks to **broadcasting**. But what `attention_mask` does is **only mask out keys, not queries**. I'm not sure whether my understanding is correct and feel quite confused. Thank you in advance for helping! ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is an intuitive question. There is no need for reproduction. ### Expected behavior N/A
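To make the broadcasting concrete, here is a minimal sketch (not the actual BERT code) of how a `[batch, 1, 1, key_len]` additive mask zeroes attention *to* padded keys in every query row, while rows belonging to padded queries still exist but are simply ignored downstream:

```python
import torch

batch, heads, seq_len = 1, 2, 4
scores = torch.zeros(batch, heads, seq_len, seq_len)   # stand-in for q @ k^T
attention_mask = torch.tensor([[1, 1, 1, 0]])          # last token is padding

# Same transform as get_extended_attention_mask: 1 -> 0.0, 0 -> large negative.
extended = (1.0 - attention_mask[:, None, None, :]) * torch.finfo(scores.dtype).min
probs = torch.softmax(scores + extended, dim=-1)

print(probs[0, 0])
# Every query row puts ~0 probability on key index 3 (the padded key).
# The row for the padded query (index 3) still sums to 1, but its output
# vector is discarded by downstream losses, so masking queries is unnecessary.
```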
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25645/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25644/comments
https://api.github.com/repos/huggingface/transformers/issues/25644/events
https://github.com/huggingface/transformers/pull/25644
1,860,399,164
PR_kwDOCUB6oc5YcRph
25,644
Adding EGT Model for Graph Classification
{ "login": "rudongyu", "id": 16982108, "node_id": "MDQ6VXNlcjE2OTgyMTA4", "avatar_url": "https://avatars.githubusercontent.com/u/16982108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rudongyu", "html_url": "https://github.com/rudongyu", "followers_url": "https://api.github.com/users/rudongyu/followers", "following_url": "https://api.github.com/users/rudongyu/following{/other_user}", "gists_url": "https://api.github.com/users/rudongyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/rudongyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rudongyu/subscriptions", "organizations_url": "https://api.github.com/users/rudongyu/orgs", "repos_url": "https://api.github.com/users/rudongyu/repos", "events_url": "https://api.github.com/users/rudongyu/events{/privacy}", "received_events_url": "https://api.github.com/users/rudongyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks a lot for wanting to contribute! I would recommend pushing your model to the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models) as it will be a lot easier and doesn't require you to go through all the CI! Feel free to then share the huggingface repository where you added it! ", "> Hey! Thanks a lot for wanting to contribute! I would recommend pushing your model to the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models) as it will be a lot easier and doesn't require you to go through all the CI! Feel free to then share the huggingface repository where you added it!\r\n\r\nThanks for your suggestions! I have pushed our model on the hub and [this](https://huggingface.co/Zhiteng/dgl-egt/tree/main) is the huggingface repository where I added it.", "> Hey! Thanks a lot for wanting to contribute! I would recommend pushing your model to the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models) as it will be a lot easier and doesn't require you to go through all the CI! Feel free to then share the huggingface repository where you added it!\r\n\r\nThanks for your kind reminder! I went through the tutorial and found two potential barriers to directly sharing models in that manner:\r\n1. HF transformers doesn't have a definition for AutoModelForGraphClassification.\r\n2. The tutorial doesn't include a way to share the data collating method. However, it could be a critical part for users conveniently applying the model for graph classification.\r\n\r\nSo, maybe providing the model in the library is still a better way in this case? @clefourrier for awareness.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
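For readers following the tutorial referenced in the discussion above, the custom-model route boils down to defining config/model subclasses and registering them for the auto classes before pushing to the Hub. A heavily simplified sketch; every name and the forward pass are placeholders, not the real EGT:

```python
import torch
from transformers import PretrainedConfig, PreTrainedModel

class EgtConfig(PretrainedConfig):
    model_type = "egt"

    def __init__(self, hidden_size=64, num_classes=2, **kwargs):
        self.hidden_size = hidden_size
        self.num_classes = num_classes
        super().__init__(**kwargs)

class EgtForGraphClassification(PreTrainedModel):
    config_class = EgtConfig

    def __init__(self, config):
        super().__init__(config)
        self.classifier = torch.nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, node_features):
        # Placeholder mean pooling; the real EGT uses edge-augmented attention.
        return self.classifier(node_features.mean(dim=1))

# Make the classes loadable via the auto API with trust_remote_code=True.
EgtConfig.register_for_auto_class()
EgtForGraphClassification.register_for_auto_class("AutoModel")
```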
1,692
1,697
1,697
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the graph transformer model EGT for the graph classification task. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25644", "html_url": "https://github.com/huggingface/transformers/pull/25644", "diff_url": "https://github.com/huggingface/transformers/pull/25644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25644.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25643/comments
https://api.github.com/repos/huggingface/transformers/issues/25643/events
https://github.com/huggingface/transformers/pull/25643
1,860,347,198
PR_kwDOCUB6oc5YcGzD
25,643
removing unnecessary extra parameter
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fix errors raised when the `self.normalize_annotation` function is called in 3 models. There are 3 models (`conditional detr`, `deformable detr` and `detr`) whose `self.normalize_annotation` functions do not accept the `input_data_format` parameter. The `input_data_format` parameter was introduced in [PR #25464](https://github.com/huggingface/transformers/pull/25464) to allow images with an unusual number of channels to be used. However, functions like `self.normalize_annotation` deal with annotations only, and don't need this parameter. In fact, passing this parameter raises an error, as it is not defined in the signature of `self.normalize_annotation`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
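The underlying failure mode is plain Python: calling a function with a keyword argument its signature does not declare raises a `TypeError`. A minimal illustration with placeholder names, not the actual image-processor code:

```python
def normalize_annotation(annotation, image_size):
    # Operates on annotations only; has no notion of channel layout.
    return annotation

try:
    normalize_annotation({}, (480, 640), input_data_format="channels_first")
except TypeError as err:
    print(err)  # ... got an unexpected keyword argument 'input_data_format'
```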
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25643/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25643", "html_url": "https://github.com/huggingface/transformers/pull/25643", "diff_url": "https://github.com/huggingface/transformers/pull/25643.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25643.patch", "merged_at": 1692713431000 }
https://api.github.com/repos/huggingface/transformers/issues/25642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25642/comments
https://api.github.com/repos/huggingface/transformers/issues/25642/events
https://github.com/huggingface/transformers/pull/25642
1,860,343,776
PR_kwDOCUB6oc5YcGCt
25,642
Add descriptive docstring to WhisperTimeStampLogitsProcessor
{ "login": "jprivera44", "id": 9093934, "node_id": "MDQ6VXNlcjkwOTM5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9093934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jprivera44", "html_url": "https://github.com/jprivera44", "followers_url": "https://api.github.com/users/jprivera44/followers", "following_url": "https://api.github.com/users/jprivera44/following{/other_user}", "gists_url": "https://api.github.com/users/jprivera44/gists{/gist_id}", "starred_url": "https://api.github.com/users/jprivera44/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jprivera44/subscriptions", "organizations_url": "https://api.github.com/users/jprivera44/orgs", "repos_url": "https://api.github.com/users/jprivera44/repos", "events_url": "https://api.github.com/users/jprivera44/events{/privacy}", "received_events_url": "https://api.github.com/users/jprivera44/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25642). All of your documentation changes will be reflected on that endpoint.", "Hey! Thanks for opening a PR! \r\nThis looks a bit too specific / long so I will not be accepting it. Also you should run `pytest --doctest src/transformers/generation/logits_process.py` / rebase to main to check if the docstring examples actually work 😉 \r\n", "Hi @gante thank you for your feedback :) I've made the suggested changes. However, I'm a little bit confused about the returning of the timestamps. The WhisperTimeStampLogitsProcessor only returns the scores, which then get transcribed into words. Would outputting the token timestamps be sufficient? Since these are directly accessible within modeling_whisper.py.\r\n\r\n@ArthurZucker , I ran pytest --doctest-modules and all tests within the WhisperLogits passed. I noticed the recommendation for --doctest – does the transformers repo use a custom configuration that requires this argument?", "@jprivera44 Since the last reviews, we've added this file to the list of files to be doctested in our PR CI. If you rebase with `main`, your examples will be tested :) \r\n\r\nRegarding the examples themselves... something is odd. We should be getting a timestamp at the start of the transcription, right @ArthurZucker?", "@jprivera44 double-checking: you are still planning on iterating on this PR, correct? :)", "Hello @gante, apologies I was having some issues getting the time stamps to show up, but it's all fixed now. I've added in the updated code we discussed with two examples, one with timestamps and one without. This includes running pytest as @ArthurZucker mentioned. Please let me know what else is needed, and thank you again for the help!", "Hello @gante hope you're well, is there anything else needed on this from my end? I just want to make sure I've completed the requirements.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, sorry this went stale; @ArthurZucker got an update in on this! Thank you again.", "No thank you guys! Apologies this took a while!" ]
1,692
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds docstrings that explain the usage of the arguments that can be passed to the Whisper logits processor. Fixes #24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
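For users who just want timestamped transcriptions, which is the behaviour this processor enables under the hood, a minimal pipeline sketch; the checkpoint name and audio path are placeholders:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
result = asr("audio_sample.wav", return_timestamps=True)
print(result["text"])
print(result["chunks"])  # e.g. [{'timestamp': (0.0, 2.4), 'text': ' ...'}, ...]
```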
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25642/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25642", "html_url": "https://github.com/huggingface/transformers/pull/25642", "diff_url": "https://github.com/huggingface/transformers/pull/25642.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25642.patch", "merged_at": 1698141727000 }
https://api.github.com/repos/huggingface/transformers/issues/25641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25641/comments
https://api.github.com/repos/huggingface/transformers/issues/25641/events
https://github.com/huggingface/transformers/issues/25641
1,860,342,594
I_kwDOCUB6oc5u4pNC
25,641
Eval Accumulation Steps Silently Fails w/ Accelerate >= 0.20.3?
{ "login": "sam-scale", "id": 106690182, "node_id": "U_kgDOBlv2hg", "avatar_url": "https://avatars.githubusercontent.com/u/106690182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-scale", "html_url": "https://github.com/sam-scale", "followers_url": "https://api.github.com/users/sam-scale/followers", "following_url": "https://api.github.com/users/sam-scale/following{/other_user}", "gists_url": "https://api.github.com/users/sam-scale/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-scale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-scale/subscriptions", "organizations_url": "https://api.github.com/users/sam-scale/orgs", "repos_url": "https://api.github.com/users/sam-scale/repos", "events_url": "https://api.github.com/users/sam-scale/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-scale/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "@muellerzr just so I know, is there any estimate on this issue?", "@muellerzr sent out a PR to fix here: https://github.com/huggingface/transformers/pull/26060", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I believe this was closed by https://github.com/huggingface/transformers/pull/26060\r\n\r\nClosing this!" ]
1,692
1,697
1,697
CONTRIBUTOR
null
### System Info ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.10 - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction * saw that nothing was getting offloaded to GPU here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3196 * Needed `self.accelerator.sync_gradients` to be `True` * It's set at this line: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1863-L1866 Requires accelerate <= 0.20.3 ### Expected behavior Allow `eval_accumulation_steps` to actually do something even if `accelerate > 0.20.3`
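For reference, this is the setting being discussed; when it works as documented, accumulated prediction tensors are moved off the accelerator every N evaluation steps instead of piling up in device memory. A minimal configuration sketch with placeholder values:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",            # placeholder
    per_device_eval_batch_size=8,
    eval_accumulation_steps=4,   # flush logits/labels to CPU every 4 eval steps
)
```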
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25641/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/25641/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25639/comments
https://api.github.com/repos/huggingface/transformers/issues/25639/events
https://github.com/huggingface/transformers/issues/25639
1,860,134,694
I_kwDOCUB6oc5u32cm
25,639
hidden_dropout_prob and attention_probs_dropout_prob values in the documentation don't match those in the code
{ "login": "SwapnanilMukherjee", "id": 63775342, "node_id": "MDQ6VXNlcjYzNzc1MzQy", "avatar_url": "https://avatars.githubusercontent.com/u/63775342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SwapnanilMukherjee", "html_url": "https://github.com/SwapnanilMukherjee", "followers_url": "https://api.github.com/users/SwapnanilMukherjee/followers", "following_url": "https://api.github.com/users/SwapnanilMukherjee/following{/other_user}", "gists_url": "https://api.github.com/users/SwapnanilMukherjee/gists{/gist_id}", "starred_url": "https://api.github.com/users/SwapnanilMukherjee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SwapnanilMukherjee/subscriptions", "organizations_url": "https://api.github.com/users/SwapnanilMukherjee/orgs", "repos_url": "https://api.github.com/users/SwapnanilMukherjee/repos", "events_url": "https://api.github.com/users/SwapnanilMukherjee/events{/privacy}", "received_events_url": "https://api.github.com/users/SwapnanilMukherjee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for catching the typo, would you like to open a PR to change the values to `0.0`? 🤗 ", "Yes, I'll do that right away. But should the value in the documentation or the model init be changed? ", "I think the value should be changed in the docstring to reflect what's in the `init` (pinging @ArthurZucker for confirmation):\r\n\r\nhttps://github.com/huggingface/transformers/blob/41aef33758ae166291d72bc381477f2db84159cf/src/transformers/models/vilt/configuration_vilt.py#L112", "The values in both the documentation and the default init should match those of https://huggingface.co/dandelin/vilt-b32-mlm 😉 ", "The value of dropout in the original implementation by dandelin is 0.1. Since the value in the docstring is already that, I'm changing the `init` to that and creating a PR." ]
1,692
1,694
1,694
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX A5000 - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am training ViLT on e-SNLI-VE. This is basically what I am doing. ``` from transformers import ViltConfig, ViltProcessor, ViltForQuestionAnswering label2id = {'contradiction':0, 'entailment':1, 'neutral':2} id2label = {0:'contradiction', 1:'entailment', 2:'neutral'} vilt_config = ViltConfig(label2id=label2id, id2label=id2label, max_position_embeddings=100) vilt_config ``` This gives the following output. ![Screenshot from 2023-08-22 02-21-17](https://github.com/huggingface/transformers/assets/63775342/22d449f5-a69c-4ad8-9a85-6e337a3edd83) The values for both variables should be 0.1 as indicated by the documentation. ![Screenshot from 2023-08-22 02-23-18](https://github.com/huggingface/transformers/assets/63775342/77fb4fbd-a041-4d16-aa5f-a27537864720) ### Expected behavior The values of the variables should have been zero.
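A quicker way to see the mismatch than comparing screenshots is to print the defaults from a fresh config; the values in the comments are those reported in this issue, not guarantees across versions:

```python
from transformers import ViltConfig

config = ViltConfig()
print(config.hidden_dropout_prob)           # 0.0 in the code at the time ...
print(config.attention_probs_dropout_prob)  # ... while the docstring said 0.1
```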
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25639/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25638/comments
https://api.github.com/repos/huggingface/transformers/issues/25638/events
https://github.com/huggingface/transformers/pull/25638
1,859,923,127
PR_kwDOCUB6oc5YapTw
25,638
Add missing Maskformer dataclass decorator, add dataclass check in ModelOutput for subclasses
{ "login": "rachthree", "id": 46288912, "node_id": "MDQ6VXNlcjQ2Mjg4OTEy", "avatar_url": "https://avatars.githubusercontent.com/u/46288912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rachthree", "html_url": "https://github.com/rachthree", "followers_url": "https://api.github.com/users/rachthree/followers", "following_url": "https://api.github.com/users/rachthree/following{/other_user}", "gists_url": "https://api.github.com/users/rachthree/gists{/gist_id}", "starred_url": "https://api.github.com/users/rachthree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rachthree/subscriptions", "organizations_url": "https://api.github.com/users/rachthree/orgs", "repos_url": "https://api.github.com/users/rachthree/repos", "events_url": "https://api.github.com/users/rachthree/events{/privacy}", "received_events_url": "https://api.github.com/users/rachthree/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25638). All of your documentation changes will be reflected on that endpoint.", "Thank you for reviewing, @amyeroberts! How do you feel about my 2nd bullet point under the additional notes? Should I open another issue?", "@rachthree For the second point, is it necessary that the checks in `__post_init__` are run within the pipeline code? Providing our model output classes inherit from `ModelOutput`, the checks will be performed when the classes are used, which I think is sufficient. We don't want to add an additional output class in this case - pipelines are essentially just glue which holds together the preprocessing, inference and postprocessing steps, so should handle the same objects. \r\n\r\nI'm not sure I got all the implications of your question however. If you think I've missed the point, let me know.", "@amyeroberts My apologies for my late response, I was traveling for a bit. Could the workflow for my PR be approved? CI got triggered again once I pushed the commit. Thanks!\r\n\r\nRegarding dataclasses, the checks are not necessary if those checks are specifically for a model's subclassed `ModelOutput` with the `@dataclass` decorator, but the reason why I'm suggesting an additional output class is that `output = ModelOutput(foo_inputs)` by itself is not a true `dataclass`. A new class that is a true `dataclass` can be created using `dataclasses.make_dataclass`, perhaps in some new classmethod for `ModelOutput`. I originally was thinking something like `PipelineOutput.from_dict` if using a new class.\r\n\r\nShould anyone try to use `dataclasses` utilities such as `dataclasses.fields` on a Pipeline's `ModelOutput`, those utilities will not function properly:\r\n\r\n```python\r\nfrom transformers.utils.generic import ModelOutput\r\nimport dataclasses\r\n\r\noutput = ModelOutput({"a": 1, "b": 2})\r\n\r\nfields = dataclasses.fields(output)\r\n```\r\n\r\nResults in `TypeError: must be called with a dataclass type or instance`", "@rachthree Let's just stick to the changes here in this PR for now. If pipelines consistently use `ModelOutput` directly, then it shouldn't cause an issue. If it does, then we can open a new, separate PR to address it. ", "@rachthree Let me know if there are any other changes you'd like to push before I merge", "@amyeroberts Totally fair, and didn't mean to add more dataclass-related changes to this PR. Leaving it for now sounds good to me. 
No more changes on my end, thanks!", "@amyeroberts \r\nHi, I found the same problem in the model: BiomedVLP-CXR-BERT-specialized.\r\ntransformers version: transformers-4.35.2\r\n\r\nreproduce (code found in https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized):\r\n\r\n> import torch\r\n> from transformers import AutoModel, AutoTokenizer\r\n> \r\n> url = "microsoft/BiomedVLP-CXR-BERT-specialized"\r\n> tokenizer = AutoTokenizer.from_pretrained(url, trust_remote_code=True)\r\n> model = AutoModel.from_pretrained(url, trust_remote_code=True)\r\n> \r\n> text_prompts = ["There is no pneumothorax or pleural effusion",\r\n> "No pleural effusion or pneumothorax is seen",\r\n> "The extent of the pleural effusion is constant."]\r\n> \r\n> tokenizer_output = tokenizer.batch_encode_plus(batch_text_or_text_pairs=text_prompts,\r\n> add_special_tokens=True,\r\n> padding='longest',\r\n> return_tensors='pt')\r\n> embeddings = model.get_projected_text_embeddings(input_ids=tokenizer_output.input_ids,\r\n> attention_mask=tokenizer_output.attention_mask)\r\n> \r\n> sim = torch.mm(embeddings, embeddings.t())\r\n> \r\n\r\nError that I encountered:\r\nTypeError: transformers_modules.microsoft.BiomedVLP-CXR-BERT-specialized.b59c09e51ab2410b24f4be214bbb49043fe63fc2.modeling_cxrbert.CXRBertOutput is not a dataclasss. This is a subclass of ModelOutput and so must use the @dataclass decorator.\r\n\r\n\r\nCould you add the missing dataclass decorator to this model as well?\r\nThank you so much!\r\n", "Hi @lixiaoqingnnz, the model output is defined in [code on the hub](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized/blob/b59c09e51ab2410b24f4be214bbb49043fe63fc2/modeling_cxrbert.py#L19). I suggest opening a PR on the repo to add the decorator there or opening a discussion asking for the decorator to be added by the repo maintainers ", "@amyeroberts Thanks for your reply, I already added the decorator and the problem has been solved. Sorry for bothering you; I thought you were also the maintainer of this repo. \r\nThanks anyway again!", "@lixiaoqingnnz No worries - it wasn't a bother :) " ]
1,692
1,702
1,694
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/25504 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts Some additional notes: * I looked for a way to find all subclasses of `ModelOutput` that could be tested in one unittest to check whether they were a `dataclass`, but this may not be possible without finding all files that use `ModelOutput` and importing each subclass. I instead opted for a check inside `ModelOutput` that runs when a subclass is instantiated. 
* One problem I ran into was that `Pipeline` uses `ModelOutput` directly: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L933. This is not a true `dataclass`, so `dataclasses` utilities such as `is_dataclass` cannot be reliably used, and the checks done in `ModelOutput.__post_init__` would not be executed because `__post_init__` is a `dataclass`-specific dunder. This makes me think there should be a separate `PipelineOutput` class that can be created from a `ModelOutput` as a true `dataclass`, or a classmethod in `ModelOutput` that creates a true `dataclass` given arguments similar to the ones for `OrderedDict`. However, I don't know what the potential impacts could be, since I don't know all the inner workings of `transformers`.
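A rough sketch of the instantiation-time check described in the notes above (simplified, and not necessarily the exact code that was merged) might look like this:

```python
from collections import OrderedDict
from dataclasses import dataclass, is_dataclass


class ModelOutput(OrderedDict):
    """Simplified stand-in for transformers.utils.generic.ModelOutput."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # A bare ModelOutput (as used by pipelines) stays legal; only proper
        # subclasses must carry the @dataclass decorator. Decorated subclasses
        # never reach this __init__ (the @dataclass decorator generates their
        # own), so the check only fires for undecorated subclasses.
        if self.__class__ is not ModelOutput and not is_dataclass(self):
            raise TypeError(
                f"{self.__class__.__name__} is a subclass of ModelOutput "
                "and so must use the @dataclass decorator."
            )


@dataclass
class GoodOutput(ModelOutput):
    logits: int = 0


class BadOutput(ModelOutput):  # missing @dataclass
    pass


GoodOutput()  # fine
try:
    BadOutput()
except TypeError as err:
    print(err)
```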
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25638/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25638", "html_url": "https://github.com/huggingface/transformers/pull/25638", "diff_url": "https://github.com/huggingface/transformers/pull/25638.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25638.patch", "merged_at": 1694683850000 }
https://api.github.com/repos/huggingface/transformers/issues/25637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25637/comments
https://api.github.com/repos/huggingface/transformers/issues/25637/events
https://github.com/huggingface/transformers/pull/25637
1,859,877,698
PR_kwDOCUB6oc5YafUZ
25,637
stringify config
{ "login": "AleksanderWWW", "id": 58885668, "node_id": "MDQ6VXNlcjU4ODg1NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/58885668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AleksanderWWW", "html_url": "https://github.com/AleksanderWWW", "followers_url": "https://api.github.com/users/AleksanderWWW/followers", "following_url": "https://api.github.com/users/AleksanderWWW/following{/other_user}", "gists_url": "https://api.github.com/users/AleksanderWWW/gists{/gist_id}", "starred_url": "https://api.github.com/users/AleksanderWWW/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AleksanderWWW/subscriptions", "organizations_url": "https://api.github.com/users/AleksanderWWW/orgs", "repos_url": "https://api.github.com/users/AleksanderWWW/repos", "events_url": "https://api.github.com/users/AleksanderWWW/events{/privacy}", "received_events_url": "https://api.github.com/users/AleksanderWWW/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25637). All of your documentation changes will be reflected on that endpoint.", "LGTM 🚀 ", "@AleksanderWWW can you put your PR out of draft mode so we can merge it?", "@sgugger Done :)" ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25637", "html_url": "https://github.com/huggingface/transformers/pull/25637", "diff_url": "https://github.com/huggingface/transformers/pull/25637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25637.patch", "merged_at": 1692717661000 }
https://api.github.com/repos/huggingface/transformers/issues/25636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25636/comments
https://api.github.com/repos/huggingface/transformers/issues/25636/events
https://github.com/huggingface/transformers/pull/25636
1,859,849,311
PR_kwDOCUB6oc5YaY9d
25,636
Correct attention mask dtype for Flax GPT2
{ "login": "liutianlin0121", "id": 10226549, "node_id": "MDQ6VXNlcjEwMjI2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10226549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liutianlin0121", "html_url": "https://github.com/liutianlin0121", "followers_url": "https://api.github.com/users/liutianlin0121/followers", "following_url": "https://api.github.com/users/liutianlin0121/following{/other_user}", "gists_url": "https://api.github.com/users/liutianlin0121/gists{/gist_id}", "starred_url": "https://api.github.com/users/liutianlin0121/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liutianlin0121/subscriptions", "organizations_url": "https://api.github.com/users/liutianlin0121/orgs", "repos_url": "https://api.github.com/users/liutianlin0121/repos", "events_url": "https://api.github.com/users/liutianlin0121/events{/privacy}", "received_events_url": "https://api.github.com/users/liutianlin0121/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25636). All of your documentation changes will be reflected on that endpoint.", "@ArthurZucker Sure! I added a test :-)\r\n", ">would be great to make the test a fast test by defining a tester function in the model tester, and then executing it in the model test\r\n\r\n@sanchit-gandhi Good point! Done. Let me know if you have further suggestions. :-)", "cc @sanchit-gandhi feel free to merge if it's alright with you!", "@sanchit-gandhi Hey thanks! I change to assertTrue.", "No problem! Feel free to merge it (it seems that I can't)." ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fixes #25634 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Link: https://github.com/huggingface/transformers/issues/25634 - [N/A] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25636/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25636", "html_url": "https://github.com/huggingface/transformers/pull/25636", "diff_url": "https://github.com/huggingface/transformers/pull/25636.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25636.patch", "merged_at": 1692977797000 }
https://api.github.com/repos/huggingface/transformers/issues/25635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25635/comments
https://api.github.com/repos/huggingface/transformers/issues/25635/events
https://github.com/huggingface/transformers/pull/25635
1,859,587,535
PR_kwDOCUB6oc5YZfOE
25,635
fix documentation for CustomTrainer
{ "login": "minhtriet", "id": 2603847, "node_id": "MDQ6VXNlcjI2MDM4NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2603847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minhtriet", "html_url": "https://github.com/minhtriet", "followers_url": "https://api.github.com/users/minhtriet/followers", "following_url": "https://api.github.com/users/minhtriet/following{/other_user}", "gists_url": "https://api.github.com/users/minhtriet/gists{/gist_id}", "starred_url": "https://api.github.com/users/minhtriet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minhtriet/subscriptions", "organizations_url": "https://api.github.com/users/minhtriet/orgs", "repos_url": "https://api.github.com/users/minhtriet/repos", "events_url": "https://api.github.com/users/minhtriet/events{/privacy}", "received_events_url": "https://api.github.com/users/minhtriet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25635). All of your documentation changes will be reflected on that endpoint." ]
1,692
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fixes #25542 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25635/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25635", "html_url": "https://github.com/huggingface/transformers/pull/25635", "diff_url": "https://github.com/huggingface/transformers/pull/25635.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25635.patch", "merged_at": 1692631397000 }
https://api.github.com/repos/huggingface/transformers/issues/25634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25634/comments
https://api.github.com/repos/huggingface/transformers/issues/25634/events
https://github.com/huggingface/transformers/issues/25634
1,859,500,264
I_kwDOCUB6oc5u1bjo
25,634
Problem caused by boolean attention mask in `pretrained_model.generate` of Flax GPT2
{ "login": "liutianlin0121", "id": 10226549, "node_id": "MDQ6VXNlcjEwMjI2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10226549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liutianlin0121", "html_url": "https://github.com/liutianlin0121", "followers_url": "https://api.github.com/users/liutianlin0121/followers", "following_url": "https://api.github.com/users/liutianlin0121/following{/other_user}", "gists_url": "https://api.github.com/users/liutianlin0121/gists{/gist_id}", "starred_url": "https://api.github.com/users/liutianlin0121/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liutianlin0121/subscriptions", "organizations_url": "https://api.github.com/users/liutianlin0121/orgs", "repos_url": "https://api.github.com/users/liutianlin0121/repos", "events_url": "https://api.github.com/users/liutianlin0121/events{/privacy}", "received_events_url": "https://api.github.com/users/liutianlin0121/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @liutianlin0121! Thanks for the comprehensive issue description! That's a good spot - we actually covert the `attention_mask` to `\"i4\"` dtype under-the-hood when we call the Flax module:\r\nhttps://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/models/gpt2/modeling_flax_gpt2.py#L510\r\n\r\nBut this happens **after** the `prepare_inputs_for_generation` method. So at the point you've mentioned, we could have multiple dtypes for the attention mask (bool or int)\r\n\r\nGiven we automatically convert the attention mask to `\"i4\"` when we call the Flax module, I think it's safe to assume we can also do so in the `prepare_inputs_for_generation` method. This won't be surprising for the user - there's no change to behaviour here since ultimately the attention mask will be `\"i4\"` anyway\r\n\r\nFeel free to open a PR to make this change and I can get you a quick approval!", "Thank you! @ArthurZucker @sanchit-gandhi I've submitted a PR." ]
1,692
1,692
1,692
CONTRIBUTOR
null
Hi! I noticed that using a boolean attention mask in `pretrained_model.generate` of Flax GPT2 can cause an error. Here is a short, self-contained code block to showcase the problem; I also prepared a [colab notebook here](https://colab.research.google.com/drive/1fIfOr0AFfWlAho1dwuk8zqxKxlKmzd7i?usp=sharing): ``` python import transformers import jax import jax.numpy as jnp tokenizer = transformers.AutoTokenizer.from_pretrained( "gpt2", padding_side="right") tokenizer.pad_token = tokenizer.eos_token query = jnp.array([ [tokenizer.pad_token_id, tokenizer.pad_token_id, 23073], ]) response_length = 4 # temperature = 0.7 pretrained_model = transformers.FlaxAutoModelForCausalLM.from_pretrained("gpt2") generation_config = transformers.GenerationConfig( max_new_tokens=response_length, min_new_tokens=response_length, do_sample=True, ) generation_config.pad_token_id = tokenizer.pad_token_id context_length = query.shape[1] attention_mask = query != tokenizer.pad_token_id input_ids = query.clone() # set padding tokens to 0 input_ids = jnp.where(attention_mask, input_ids, 0) output = pretrained_model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, ) # TypeError: lax.dynamic_update_slice requires arguments to have the same dtypes, got int32, bool. ``` The type error occurs because the `attention_mask` in our example above is a boolean array, while the `extended_attention_mask` used internally for response generation [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_flax_gpt2.py#L753) has an integer type. This leads to an error in the `lax.dynamic_update_slice` [line here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_flax_gpt2.py#L756), as it can't handle inputs with different data types (integer and boolean). I think this is a bug, because a boolean attention mask should be permitted. To fix it, one can simply update [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_flax_gpt2.py#L756) in `transformers.models.gpt2.modeling_flax_gpt2`, which currently reads `extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0))`, to the following: `extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask.astype("i4"), (0, 0))` This will correct the mismatch in dtypes. Happy to submit a PR for that! ### Who can help? @sanchit-gandhi, @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction Here is a short, self-contained code block to showcase the problem; I also prepared a [colab notebook here](https://colab.research.google.com/drive/1fIfOr0AFfWlAho1dwuk8zqxKxlKmzd7i?usp=sharing): ``` python import torch import transformers import jax import jax.numpy as jnp tokenizer = transformers.AutoTokenizer.from_pretrained( "gpt2", padding_side="right") tokenizer.pad_token = tokenizer.eos_token query = jnp.array([ [tokenizer.pad_token_id, tokenizer.pad_token_id, 23073], ]) response_length = 4 # temperature = 0.7 pretrained_model = transformers.FlaxAutoModelForCausalLM.from_pretrained("gpt2") generation_config = transformers.GenerationConfig( max_new_tokens=response_length, min_new_tokens=response_length, do_sample=True, ) generation_config.pad_token_id = tokenizer.pad_token_id context_length = query.shape[1] attention_mask = query != tokenizer.pad_token_id input_ids = query.clone() # set padding tokens to 0 input_ids = jnp.where(attention_mask, input_ids, 0) output = pretrained_model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, ) # TypeError: lax.dynamic_update_slice requires arguments to have the same dtypes, got int32, bool. ``` ### Expected behavior I expected the call `output = pretrained_model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, )` in the above example to run successfully when `attention_mask` is a boolean mask.
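For anyone on an affected version, a user-side workaround is to cast the mask before calling `generate`; a minimal sketch reusing the inputs from the report (the internal fix does the equivalent cast in `prepare_inputs_for_generation`):

```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")

query = jnp.array([[tokenizer.pad_token_id, tokenizer.pad_token_id, 23073]])
bool_mask = query != tokenizer.pad_token_id  # boolean mask, as in the report

# Cast the boolean mask to int32 ("i4") before generation, so that
# lax.dynamic_update_slice sees matching dtypes.
output = model.generate(
    input_ids=jnp.where(bool_mask, query, 0),
    attention_mask=bool_mask.astype("i4"),
    max_new_tokens=4,
)
print(output.sequences)
```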
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25634/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25633/comments
https://api.github.com/repos/huggingface/transformers/issues/25633/events
https://github.com/huggingface/transformers/pull/25633
1,859,437,909
PR_kwDOCUB6oc5YY-Ib
25,633
Support loading base64 images in pipelines
{ "login": "InventivetalentDev", "id": 6525296, "node_id": "MDQ6VXNlcjY1MjUyOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/6525296?v=4", "gravatar_id": "", "url": "https://api.github.com/users/InventivetalentDev", "html_url": "https://github.com/InventivetalentDev", "followers_url": "https://api.github.com/users/InventivetalentDev/followers", "following_url": "https://api.github.com/users/InventivetalentDev/following{/other_user}", "gists_url": "https://api.github.com/users/InventivetalentDev/gists{/gist_id}", "starred_url": "https://api.github.com/users/InventivetalentDev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/InventivetalentDev/subscriptions", "organizations_url": "https://api.github.com/users/InventivetalentDev/orgs", "repos_url": "https://api.github.com/users/InventivetalentDev/repos", "events_url": "https://api.github.com/users/InventivetalentDev/events{/privacy}", "received_events_url": "https://api.github.com/users/InventivetalentDev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25633). All of your documentation changes will be reflected on that endpoint.", "The PR looks good.\r\n\r\nI'm uneasy about the actual format suggested, because some stuff are base64 decodable and yet real strings.\r\nI feel like this should be in user code not really in `transformers`.\r\n\r\nBut I have the same feeling about loading images from URL so..", "@Narsil Understood! As passing in strings is already supported for loading from url, I think this is an acceptable addition. Is there anything you'd like us to add to this PR to handle that? ", "No not really.\r\n\r\nIt's choice. I'm not in favor, but I'm not super strongly opposed to it either.\r\nPlease choose which route you prefer.", "@Narsil I'm inclined to agree. As we already accept strings, I'm going to merge. If continue support of this ends up being a headache (lots of code / lots of if/else logic) then we can think about ways to deprecate the support or reduce its scope. " ]
1,692
1,693
1,693
CONTRIBUTOR
null
# What does this PR do? Adds support for loading base64-encoded images in pipelines. I primarily added this so it can be used by `transformers-cli serve`, since I found having to use a URL or a reference to a local file a bit inconvenient for my use case. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil
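For illustration, usage could look roughly like the sketch below once this is merged; the model id and file name are placeholders, and depending on the final implementation a `data:` URI prefix may or may not be needed:

```python
import base64

from transformers import pipeline

pipe = pipeline("image-classification", model="google/vit-base-patch16-224")

# Read a local image and encode it; with this PR the pipeline's image loader
# is expected to decode a base64 string much like it handles a URL or a path.
with open("cat.jpg", "rb") as f:  # "cat.jpg" is a placeholder path
    b64_image = base64.b64encode(f.read()).decode("utf-8")

print(pipe(b64_image))
```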
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25633/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25633", "html_url": "https://github.com/huggingface/transformers/pull/25633", "diff_url": "https://github.com/huggingface/transformers/pull/25633.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25633.patch", "merged_at": 1693333465000 }
https://api.github.com/repos/huggingface/transformers/issues/25632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25632/comments
https://api.github.com/repos/huggingface/transformers/issues/25632/events
https://github.com/huggingface/transformers/issues/25632
1,859,410,081
I_kwDOCUB6oc5u1Fih
25,632
Batching behaviour of Pipelines with datasets.Dataset as input could be clarified/improved
{ "login": "jack89roberts", "id": 16308271, "node_id": "MDQ6VXNlcjE2MzA4Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/16308271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jack89roberts", "html_url": "https://github.com/jack89roberts", "followers_url": "https://api.github.com/users/jack89roberts/followers", "following_url": "https://api.github.com/users/jack89roberts/following{/other_user}", "gists_url": "https://api.github.com/users/jack89roberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/jack89roberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jack89roberts/subscriptions", "organizations_url": "https://api.github.com/users/jack89roberts/orgs", "repos_url": "https://api.github.com/users/jack89roberts/repos", "events_url": "https://api.github.com/users/jack89roberts/events{/privacy}", "received_events_url": "https://api.github.com/users/jack89roberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil", "I'm pretty sure `datasets.Dataset` will not work out of the box, because `datasets.Dataset` uses different keys for different items (Same as `torch.Dataset` actually, but you're in control of those.)\r\n\r\nActually the easiest/most general way would be to use a generator for everything (you both have full control and works with any dataset) https://huggingface.co/docs/transformers/v4.32.0/en/main_classes/pipelines#pipeline-batching\r\n\r\nThe *only* reason for `KeyDataset` to exist, is because it can do the adaptation layer over `datasets.Dataset` and be sizeable (which generators aren't) and therefore it works better with tools like `tqdm`.\r\n\r\n@ArthurZucker Should we just remove `KeyDataset` from the example and purely promote the generator one ?", "Yes, I think it makes sense 👍🏻 ", "> I'm pretty sure `datasets.Dataset` will not work out of the box, because `datasets.Dataset` uses different keys for different items (Same as `torch.Dataset` actually, but you're in control of those.)\r\n\r\nThanks for having a look at this! I'm not familiar with the internals of the various dataset classes, but for reference here are a couple of examples where creating a data loader from a `datasets.Dataset` seems to work (this is what the pipeline class does for other input types currently, but not for `datasets.Dataset`):\r\n\r\n```python\r\nfrom datasets import Dataset, load_dataset\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers.pipelines.base import pad_collate_fn\r\n\r\ntest_collate_fn = pad_collate_fn(None, \"TEST\")\r\n\r\ndataset = Dataset.from_dict({\"idx\": range(16)})\r\ndataloader = DataLoader(dataset, batch_size=4, collate_fn=test_collate_fn)\r\ndata_iter = iter(dataloader)\r\nprint(next(data_iter))\r\n\r\ndataset = load_dataset(\"pcuenq/oxford-pets\", split=\"train\")\r\ndataloader = DataLoader(dataset, batch_size=4, collate_fn=test_collate_fn)\r\ndata_iter = iter(dataloader)\r\nprint(next(data_iter))\r\n```", "@jack89roberts \r\n\r\nPipeline expects a raw `string` not a dict of ` {\"input\": \"myinput}` which all dataset contain.\r\nAnd since there's no standard key naming for dataset, we cannot infer those.", "Many users seem to have this problem, so I think we should do something about it.\r\n\r\nWe could consider making the `Dataset` class a subclass of `torch.utils.data.Dataset` (as we do with the iterable version) to pass the type check, but this change is not cheap as it requires `import torch`.\r\n\r\nAlternatively, the pipeline code could check for the presence of `__getitem__` and `__len__` instead of `isinstance(inputs, torch.utils.data.Dataset)`. 
This is what we suggested to fix a DeepSpeed limitation [here](https://github.com/huggingface/datasets/issues/2165) (the rationale was that `DataLoader` doesn't require map-style datasets to be an instance of Torch Dataset).\r\n\r\nSo, even though one can bypass this limitation by providing an HF dataset as a generator (`iter(dataset)`), allowing `pipeline(dataset)` would still be nice (for new users).\r\n\r\n(This assumes the `dataset`'s columns match input names that the `pipeline` expects, e.g., `text` for `TextClassificationPipeline`.)\r\n\r\n", "I'd be happy to see a proof of concept working.\r\n\r\nIn my experience, the name matching just doesn't work on most datasets (and is even more confusing when it doesn't).", "Yes, it doesn't always work, so [here](https://colab.research.google.com/drive/1r65SbdJWIZAeAadtRKTVhtj0_f9Zicnk?usp=sharing) is a notebook that suggests a solution.\r\n\r\nWDYT?\r\n\r\nPS: One more reason for these changes is that the `Trainer` is well-integrated with HF datasets, so it makes sense to do the same for the `Pipeline`.", "I read a little bit; the working code in the notebook (`iter(dataset)`) is already pretty clear, I think.\r\n\r\nMy main issue with \"magic\" is just that when it doesn't work (which should unfortunately happen quite a lot of the time, since the names don't have a lot of reasons to match in general), users will be confused.\r\n\"It works in the example, but it doesn't work on my dataset.\" And I'm not sure how we could convey easy, simple steps for users to update their code to make it work.\r\n\r\n```\r\nValueError: Incorrect format used for image. Should be an url linking to an image, a base64 string, a local path, or a PIL image.\r\n```\r\nThat, for instance, is pretty confusing imo (I did send a URL linking to an image).\r\n\r\nAgain, if you have some working PR I'm happy to take a look, but this is what I'll be looking for (how easy it is to know why the dataset is failing, and how easy it is to update from non-working to working code).", "The names can be aligned with `dataset.{rename_columns, remove_columns, select_columns}` (`KeyError` is raised if they don't match, so it should be clear what the issue is). \r\n\r\n> That, for instance, is pretty confusing imo (I did send a URL linking to an image).\r\n\r\nI've updated the notebook with a call that works but does not scale.\r\n\r\nTo make my proposed solution clear, this is what the usage would look like:\r\n* For the \"single-input\" pipelines (e.g., classification pipelines): `pipe(dataset[\"input_col\"])` (no need for `KeyDataset`)\r\n* For the \"multi-input\" pipelines (e.g., question answering pipelines): `pipe(dataset)` with the `dataset` columns required to match the input arguments' names (e.g., \"image\" and \"question\" for `VisualQuestionAnsweringPipeline`)\r\n\r\nI think it shouldn't be too hard to understand with some basic examples added to the `Pipeline` docs (e.g., replacing `KeyDataset` with `dataset[\"input_col\"]` where possible).\r\n\r\nI'll open a PR (should be pretty small) here as soon as `dataset[\"column\"]` in `datasets` is optimized so that it doesn't load everything in memory :).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
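The generator pattern promoted in the thread above, as a minimal sketch (the model id is just an example):

```python
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)


def data():
    # Any generator of raw inputs works; there is no key naming to infer.
    for i in range(64):
        yield f"example sentence number {i}"


# batch_size is honored for generators (and torch-style Datasets), so the
# forward passes run in chunks of 8 under the hood.
for out in pipe(data(), batch_size=8):
    print(out["label"], round(out["score"], 3))
```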
1,692
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: macOS-13.4.1-arm64-arm-64bit - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I wasn't sure whether to classify this as a bug or feature request so apologies if I've put it in the wrong place. The wording of the documentation made me think batching should work if the input to a `Pipeline` is a `datasets.Dataset` instance, but this is not the case. It is the case for an `IterableDataset` instance. Here's a script to show what I mean: ```python from datasets import Dataset from transformers import ( Pipeline, ViTConfig, ViTForImageClassification, ViTImageProcessor, ) # dummy custom pipeline that just passes through the input and prints what was passed to `_forward` class MyPipeline(Pipeline): def _sanitize_parameters(self): return {}, {}, {} def preprocess(self, inputs): return inputs def _forward(self, model_inputs): print("_forward", model_inputs["idx"]) return model_inputs def postprocess(self, model_outputs): return model_outputs # make a small dummy model and processor to initialise the pipeline with model = ViTForImageClassification( ViTConfig( hidden_size=1, num_hidden_layers=1, num_attention_heads=1, intermediate_size=1, image_size=1, patch_size=1, num_channels=1, encoder_stride=1, ) ) extractor = ViTImageProcessor() pipeline = MyPipeline(model=model, image_processor=extractor) # this will call _forward on the whole dataset in one go (despite setting a batch size) print("Using datasets.Dataset as input:") dataset = Dataset.from_dict({"idx": range(16)}) for i in pipeline(dataset, batch_size=4): continue # output: # _forward [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] # if dataset converted to IterableDataset the calls to _forward are batched as expected print("Using datasets.IterableDataset as input:") dataset = dataset.to_iterable_dataset() for i in pipeline(dataset, batch_size=4): continue # output: # _forward [0, 1, 2, 3] # _forward [4, 5, 6, 7] # _forward [8, 9, 10, 11] # _forward [12, 13, 14, 15] ``` ### Expected behavior I am happy to make a PR for any of these but unsure which would be most appropriate or if there are implementation details I'm missing. #### Docs I think the documentation could be clarified. I found this confusing: https://github.com/huggingface/transformers/blob/2f8acfea1ca11fe3479fb379ccbded516d0cff57/docs/source/en/main_classes/pipelines.md?plain=1#L110-L113 Batching will only be used if the `Dataset` that is passed is one that has `isinstance(inputs, torch.utils.data.Dataset) == True`. This is not the case for `datasets.arrow_dataset.Dataset`, which generally seems to be the default meaning of `Dataset` within the `datasets` library. Both `datasets.iterable_dataset.IterableDataset` and `transformers.pipelines.pt_utils.KeyDataset` do inherit from `torch.utils.data.Dataset`, so the pipeline will use batching if inputs is one of those. 
A minimal change could be something like changing this: > (so when passing lists or `Dataset` or `generator`) to this: > (so when passing lists or `torch.utils.data.Dataset` or `generator` or `datasets.IterableDataset` or `KeyDataset`). Maybe the pipeline documentation should also explicitly say batching will not work with `datasets.Dataset` if this is the expected behaviour. #### Pipeline base class Adjustments that could be made to the base Pipeline class to clarify/change this: ##### 1) Raise a warning/error if a `batch_size` has been set but the type of the inputs means the inference will not actually be batched. ##### 2) A one-line change to the base class should make pipelines compatible with batching over `datasets.Dataset`. If this line: https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/pipelines/base.py#L1089 also included a check for `isinstance(inputs, datasets.Dataset)` (as well as a torch Dataset, which is what the above check is against), then the input dataset would later be converted to a `DataLoader` here: https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/pipelines/base.py#L1054 This worked fine for my custom pipeline, but I'm unsure if there are other consequences or reasons not to pass `datasets.Dataset` instances into pipelines.
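For completeness, a minimal sketch of the `KeyDataset` adapter that already satisfies the torch `Dataset` isinstance check today (the model id is illustrative):

```python
from datasets import Dataset

from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
dataset = Dataset.from_dict({"text": ["I love this.", "I hate this."] * 8})

# KeyDataset subclasses torch.utils.data.Dataset, so the isinstance check in
# Pipeline.__call__ passes and inference is genuinely batched.
for out in pipe(KeyDataset(dataset, "text"), batch_size=4):
    print(out)
```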
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25632/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25631/comments
https://api.github.com/repos/huggingface/transformers/issues/25631/events
https://github.com/huggingface/transformers/pull/25631
1,859,324,113
PR_kwDOCUB6oc5YYk78
25,631
Skip doctest for some recent files
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Skip doctests for `Idefics` and some `peft` stuff. They are not in `documentation_tests.txt` (to be removed soon) anyway. `Idefics` causes a runner crash --> we need to see why this happens (and mark it as a slow doctest manually if necessary).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25631/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25631", "html_url": "https://github.com/huggingface/transformers/pull/25631", "diff_url": "https://github.com/huggingface/transformers/pull/25631.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25631.patch", "merged_at": 1692624044000 }
https://api.github.com/repos/huggingface/transformers/issues/25630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25630/comments
https://api.github.com/repos/huggingface/transformers/issues/25630/events
https://github.com/huggingface/transformers/pull/25630
1,859,234,117
PR_kwDOCUB6oc5YYQ9q
25,630
TF 2.14 compatibility
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Amazingly, except the Hub tests, this seems to just work! I did some manual testing with 2.14-rc0 on my local machine and all tests seem to pass.", "cc @ArthurZucker for core maintainer review" ]
1,692
1,692
1,692
MEMBER
null
This PR adds compatibility with TF 2.14. Right now it just updates the pin to see what breaks, but hopefully I'll also fix the things that have broken, if the mood takes me.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25630/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25630", "html_url": "https://github.com/huggingface/transformers/pull/25630", "diff_url": "https://github.com/huggingface/transformers/pull/25630.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25630.patch", "merged_at": 1692706418000 }
https://api.github.com/repos/huggingface/transformers/issues/25629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25629/comments
https://api.github.com/repos/huggingface/transformers/issues/25629/events
https://github.com/huggingface/transformers/issues/25629
1,859,229,877
I_kwDOCUB6oc5u0Zi1
25,629
blip itm task implementation
{ "login": "MasKong", "id": 23203109, "node_id": "MDQ6VXNlcjIzMjAzMTA5", "avatar_url": "https://avatars.githubusercontent.com/u/23203109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MasKong", "html_url": "https://github.com/MasKong", "followers_url": "https://api.github.com/users/MasKong/followers", "following_url": "https://api.github.com/users/MasKong/following{/other_user}", "gists_url": "https://api.github.com/users/MasKong/gists{/gist_id}", "starred_url": "https://api.github.com/users/MasKong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MasKong/subscriptions", "organizations_url": "https://api.github.com/users/MasKong/orgs", "repos_url": "https://api.github.com/users/MasKong/repos", "events_url": "https://api.github.com/users/MasKong/events{/privacy}", "received_events_url": "https://api.github.com/users/MasKong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a duplicate of #22732. cc @younesbelkada as it seems to be confusing ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,692
1,695
1,695
NONE
null
### System Info In the official implementation of BLIP, as well as in the paper, the multimodal embedding used to perform the ITM task comes from a special token named `enc_token_id`. But the Transformers implementation still uses the [CLS] token. ``` text.input_ids[:,0] = self.tokenizer.enc_token_id output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, encoder_hidden_states = image_embeds, encoder_attention_mask = image_atts, return_dict = True, ) return output.last_hidden_state ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/v4.31.0/src/transformers/models/blip/modeling_blip.py#L1418 ### Expected behavior https://github.com/salesforce/BLIP/blob/main/models/blip.py#L67
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25629/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25628/comments
https://api.github.com/repos/huggingface/transformers/issues/25628/events
https://github.com/huggingface/transformers/pull/25628
1,859,181,611
PR_kwDOCUB6oc5YYFVz
25,628
Add docstrings and fix VIVIT examples
{ "login": "Geometrein", "id": 65066173, "node_id": "MDQ6VXNlcjY1MDY2MTcz", "avatar_url": "https://avatars.githubusercontent.com/u/65066173?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Geometrein", "html_url": "https://github.com/Geometrein", "followers_url": "https://api.github.com/users/Geometrein/followers", "following_url": "https://api.github.com/users/Geometrein/following{/other_user}", "gists_url": "https://api.github.com/users/Geometrein/gists{/gist_id}", "starred_url": "https://api.github.com/users/Geometrein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Geometrein/subscriptions", "organizations_url": "https://api.github.com/users/Geometrein/orgs", "repos_url": "https://api.github.com/users/Geometrein/repos", "events_url": "https://api.github.com/users/Geometrein/events{/privacy}", "received_events_url": "https://api.github.com/users/Geometrein/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25628). All of your documentation changes will be reflected on that endpoint.", "cc @Rocketknight1 ", "@Geometrein Thanks for the PR - ping me whenever you're happy for me to merge!", "Thanks for your review @Rocketknight1!\r\nIt's good to go, feel free to merge :)" ]
1,692
1,693
1,693
CONTRIBUTOR
null
# What does this PR do? - Add docstring for the `sample_frame_indices` method - Fix broken examples for VIVIT - Add missing `PyTorch` & `VivitForVideoClassification` imports - Resolve the "referenced before assignment" error caused by an undeclared `videoreader` variable. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
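For reference, the helper being documented, roughly as it appears in the fixed example (the exact docstring wording in the docs may differ):

```python
import numpy as np


def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    """
    Sample `clip_len` frame indices from a video.

    Args:
        clip_len: number of frames to sample.
        frame_sample_rate: sample one frame every `frame_sample_rate` frames.
        seg_len: total number of frames in the video.

    Returns:
        np.ndarray of `clip_len` frame indices.
    """
    converted_len = int(clip_len * frame_sample_rate)
    # Pick a random window of `converted_len` frames ending before seg_len.
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices


print(sample_frame_indices(clip_len=32, frame_sample_rate=4, seg_len=300))
```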
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25628/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25628", "html_url": "https://github.com/huggingface/transformers/pull/25628", "diff_url": "https://github.com/huggingface/transformers/pull/25628.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25628.patch", "merged_at": 1693076927000 }
https://api.github.com/repos/huggingface/transformers/issues/25627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25627/comments
https://api.github.com/repos/huggingface/transformers/issues/25627/events
https://github.com/huggingface/transformers/pull/25627
1,859,134,368
PR_kwDOCUB6oc5YX69C
25,627
fix ACT_FN
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,692
1,692
COLLABORATOR
null
# What does this PR do? Fixes #24821 Supersedes #24823
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25627/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25627", "html_url": "https://github.com/huggingface/transformers/pull/25627", "diff_url": "https://github.com/huggingface/transformers/pull/25627.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25627.patch", "merged_at": 1692621224000 }
https://api.github.com/repos/huggingface/transformers/issues/25626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25626/comments
https://api.github.com/repos/huggingface/transformers/issues/25626/events
https://github.com/huggingface/transformers/pull/25626
1,859,111,862
PR_kwDOCUB6oc5YX17z
25,626
[`TokenizerFast`] `can_save_slow_tokenizer` as a property for when `vocab_file`'s folder was removed
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,692
1,693
1,693
COLLABORATOR
null
# What does this PR do? Fixes #25602, making `can_save_slow_tokenizer` a property rather than an attribute, as we need to check whether the `vocab_file` still exists!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25626/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25626", "html_url": "https://github.com/huggingface/transformers/pull/25626", "diff_url": "https://github.com/huggingface/transformers/pull/25626.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25626.patch", "merged_at": 1693484246000 }
https://api.github.com/repos/huggingface/transformers/issues/25625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25625/comments
https://api.github.com/repos/huggingface/transformers/issues/25625/events
https://github.com/huggingface/transformers/issues/25625
1,859,034,823
I_kwDOCUB6oc5uzp7H
25,625
`self.pad_token` check should use `is None` rather than `not self.pad_token`
{ "login": "aohan237", "id": 3992281, "node_id": "MDQ6VXNlcjM5OTIyODE=", "avatar_url": "https://avatars.githubusercontent.com/u/3992281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aohan237", "html_url": "https://github.com/aohan237", "followers_url": "https://api.github.com/users/aohan237/followers", "following_url": "https://api.github.com/users/aohan237/following{/other_user}", "gists_url": "https://api.github.com/users/aohan237/gists{/gist_id}", "starred_url": "https://api.github.com/users/aohan237/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aohan237/subscriptions", "organizations_url": "https://api.github.com/users/aohan237/orgs", "repos_url": "https://api.github.com/users/aohan237/repos", "events_url": "https://api.github.com/users/aohan237/events{/privacy}", "received_events_url": "https://api.github.com/users/aohan237/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks! I agree.\r\n\r\n@ArthurZucker If you agree, I can open a PR to update this tiny place.", "Sure!" ]
1,692
1,692
1,692
NONE
null
https://github.com/huggingface/transformers/blob/2f8acfea1ca11fe3479fb379ccbded516d0cff57/src/transformers/tokenization_utils_base.py#L2506C71-L2506C71
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25625/timeline
completed
null
null
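The issue above argues that a truthiness check wrongly treats a falsy-but-set pad token (such as the empty string) as missing. A minimal sketch, assuming the empty-string case is the failure mode of interest; this is not the actual `tokenization_utils_base` code:

```python
# Minimal sketch of why the check should be `is None` rather than truthiness.
def needs_pad_token_truthy(pad_token):
    # Buggy: "" is falsy, so an empty-string token is treated as unset.
    return not pad_token

def needs_pad_token_is_none(pad_token):
    # Correct: only a genuinely missing token counts as unset.
    return pad_token is None

print(needs_pad_token_truthy(""))   # True  (wrong)
print(needs_pad_token_is_none(""))  # False (right)
```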