Dataset schema (per-column dtype and observed lengths / distinct values):

| Column | Dtype | Lengths / values |
|---|---|---|
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/24819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24819/comments
https://api.github.com/repos/huggingface/transformers/issues/24819/events
https://github.com/huggingface/transformers/issues/24819
1,804,382,140
I_kwDOCUB6oc5rjK-8
24,819
Model training with torch_dtype=torch.bfloat16 is possible?
{ "login": "cnut1648", "id": 37067883, "node_id": "MDQ6VXNlcjM3MDY3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/37067883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnut1648", "html_url": "https://github.com/cnut1648", "followers_url": "https://api.github.com/users/cnut1648/followers", "following_url": "https://api.github.com/users/cnut1648/following{/other_user}", "gists_url": "https://api.github.com/users/cnut1648/gists{/gist_id}", "starred_url": "https://api.github.com/users/cnut1648/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cnut1648/subscriptions", "organizations_url": "https://api.github.com/users/cnut1648/orgs", "repos_url": "https://api.github.com/users/cnut1648/repos", "events_url": "https://api.github.com/users/cnut1648/events{/privacy}", "received_events_url": "https://api.github.com/users/cnut1648/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can try to train in full bfloat16 but it's not as stable as mixed precision bfloat16 training. When running `run_clm` with `--torch_dtype=bfloat16` you train in full bfloat16 so the flag `--bf16` (mixed precision training) is not really useful.", "@sgugger thank you for the info!", "@sgugger Hi, recently I use run_clm.py to train my LM. I am a little confused about what you said, why `--bf` did nothing when passed `--torch_dtype=bfloat16` ? `torch_dtype` seems to be used just when load a pretrained model, `--bf ` is a training arg to mixed precision training . what is the connection?\r\nThanks \r\n " ]
1,689
1,697
1,689
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.10.184-174.730.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction From #23165 and specifically this comment https://github.com/huggingface/transformers/issues/23165#issuecomment-1536439098 of @sgugger , it seems that we should not set `torch_dtype` during training. However I think this is possible. See for example the following script ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "gpt2" input = "Hello world!" tokenizer = AutoTokenizer.from_pretrained(model_name) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device) optimizer = torch.optim.Adam(model.parameters(), lr=5e-5) input_ids = tokenizer.encode(input, return_tensors="pt").to(device) output = model(input_ids, labels=input_ids) output.loss.backward() optimizer.step() print(model.get_input_embeddings().weight.grad) ``` I think gpt2 is trained using fp32 but I can load it in bfloat16 and train it (or at least get a gradient with bf16). Thus I wonder if I misunderstood something. I have another project running to train LLaMA using bfloat16 (essentially using [`run_clm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) from the official repo with `--torch_dtype=bfloat16 --bf16` command line flag), so if it turns out that I should not use `torch_dtype` for training then it means that I need to stop the experiments and rerun a lot of things :( Thank you. ### Expected behavior I am not sure if we should use `torch_dtype` for training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24819/timeline
completed
null
null
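The thread above contrasts full-bfloat16 training (weights loaded in bf16 via `torch_dtype`) with mixed-precision bf16 training (the `--bf16` Trainer flag). A minimal sketch of the two setups using the public `transformers` API, assuming bf16-capable hardware; the output directory is a placeholder, not part of the original issue:

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Full bfloat16: the weights themselves are loaded and kept in bf16.
# This is what passing --torch_dtype=bfloat16 to run_clm.py does.
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)

# Mixed precision: weights stay in fp32 and autocast runs the forward/backward in bf16.
# This is what the --bf16 flag maps to.
args = TrainingArguments(output_dir="out", bf16=True)

# Combining both means training already happens fully in bf16, so the mixed-precision
# flag no longer adds much, which is the point made in the comments above.
```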
https://api.github.com/repos/huggingface/transformers/issues/24818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24818/comments
https://api.github.com/repos/huggingface/transformers/issues/24818/events
https://github.com/huggingface/transformers/issues/24818
1,804,297,034
I_kwDOCUB6oc5ri2NK
24,818
Add a resume_from_checkpoint implementation for PEFT models
{ "login": "shell-nlp", "id": 39985245, "node_id": "MDQ6VXNlcjM5OTg1MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shell-nlp", "html_url": "https://github.com/shell-nlp", "followers_url": "https://api.github.com/users/shell-nlp/followers", "following_url": "https://api.github.com/users/shell-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions", "organizations_url": "https://api.github.com/users/shell-nlp/orgs", "repos_url": "https://api.github.com/users/shell-nlp/repos", "events_url": "https://api.github.com/users/shell-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/shell-nlp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @506610466, thanks for raising an issue. \r\n\r\nCould you please follow the issue template and provide all the information requested, including the running environment, a minimal code snippet to reproduce the error, full traceback and expected behaviour?" ]
1,689
1,689
1,689
NONE
null
### Feature request At present, trainer.train(resume_from_checkpoint=resume_from_checkpoint) cannot load adapter weights to resume training ### Motivation At present, trainer.train(resume_from_checkpoint=resume_from_checkpoint) cannot load adapter weights to resume training ### Your contribution None
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24818/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24818/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24817/comments
https://api.github.com/repos/huggingface/transformers/issues/24817/events
https://github.com/huggingface/transformers/pull/24817
1,804,155,448
PR_kwDOCUB6oc5Veziq
24,817
Fix Dropout Implementation in Graphormer
{ "login": "alexanderkrauck", "id": 17174445, "node_id": "MDQ6VXNlcjE3MTc0NDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17174445?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexanderkrauck", "html_url": "https://github.com/alexanderkrauck", "followers_url": "https://api.github.com/users/alexanderkrauck/followers", "following_url": "https://api.github.com/users/alexanderkrauck/following{/other_user}", "gists_url": "https://api.github.com/users/alexanderkrauck/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexanderkrauck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexanderkrauck/subscriptions", "organizations_url": "https://api.github.com/users/alexanderkrauck/orgs", "repos_url": "https://api.github.com/users/alexanderkrauck/repos", "events_url": "https://api.github.com/users/alexanderkrauck/events{/privacy}", "received_events_url": "https://api.github.com/users/alexanderkrauck/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @clefourrier and others,\r\nthis is the first time contributing to Huggingface for me. In the course of my thesis I found some improvements/fixes to your current Graphormer implementation including this one which is quite simple but has a big impact and should be easy to review. I plan to make some more pull requests with more performance related changes to speed Graphormer up and possibly also to add the 3D version in the following days/weeks. Feel free to reach out.\r\nBest wishes, Alexander", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "It still needs to be adressed! @clefourrier or whoever is responsible for it, am I doing something wrong with my pull request or what is taking so long for anyone to anwer?! How am I supposed to contribute if I am being ignored. ", "Hi @alexanderkrauck !\r\nI have been very busy taking care of the Open LLM Leaderboard, and I put the graph transformers issues on the backburner for the summer. I was hoping to come back to this quite earlier than now, I'm very sorry about that.\r\nI'll do my best to come back to these before the end of September", "@amyeroberts or @ArthurZucker ?" ]
1,689
1,694
1,694
CONTRIBUTOR
null
# What does this PR do? This commit corrects the dropout implementation in Graphormer, aligning it with the original implementation (https://github.com/microsoft/Graphormer) and improving performance. Specifically: 1. The `attention_dropout` variable, intended for use in GraphormerMultiheadAttention, was defined but not used. This has been corrected to use `attention_dropout` instead of the regular `dropout`. 2. The `activation_dropout` for the activations in the feed-forward layers was missing. Instead, the regular `dropout` was used. This commit adds `activation_dropout` to the feed-forward layers and to the GraphormerConfig including documentation. These changes ensure the dropout implementation matches the original Graphormer and delivers empirically better performance. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @clefourrier
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24817/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24817", "html_url": "https://github.com/huggingface/transformers/pull/24817", "diff_url": "https://github.com/huggingface/transformers/pull/24817.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24817.patch", "merged_at": 1694173780000 }
https://api.github.com/repos/huggingface/transformers/issues/24816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24816/comments
https://api.github.com/repos/huggingface/transformers/issues/24816/events
https://github.com/huggingface/transformers/issues/24816
1,804,098,517
I_kwDOCUB6oc5riFvV
24,816
Proper way to monkey patch a customized model not in transformers?
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lucasjinreal, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "@amyeroberts Please help me find someone answer this issue, I posted serveral on forum none of them got response. than k u", "@lucasjinreal Another place to ask questions like this is in [our discord](https://discord.com/invite/hugging-face-879548962464493619). ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### Feature request Hi, nowadays more and more models are emerging, many of them hard to merge into transformers, but we have more and more customized patches such as condensing rotary, xformers attn, etc. What is the best way to monkey patch a customized model which is not inside the transformers library but is just loaded with AutoModel? ### Motivation Need guidance ### Your contribution Currently none
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24816/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24815/comments
https://api.github.com/repos/huggingface/transformers/issues/24815/events
https://github.com/huggingface/transformers/issues/24815
1,804,060,453
I_kwDOCUB6oc5rh8cl
24,815
transformers-like library for Prompt or Agent library?
{ "login": "ghosthamlet", "id": 758325, "node_id": "MDQ6VXNlcjc1ODMyNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/758325?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghosthamlet", "html_url": "https://github.com/ghosthamlet", "followers_url": "https://api.github.com/users/ghosthamlet/followers", "following_url": "https://api.github.com/users/ghosthamlet/following{/other_user}", "gists_url": "https://api.github.com/users/ghosthamlet/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghosthamlet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghosthamlet/subscriptions", "organizations_url": "https://api.github.com/users/ghosthamlet/orgs", "repos_url": "https://api.github.com/users/ghosthamlet/repos", "events_url": "https://api.github.com/users/ghosthamlet/events{/privacy}", "received_events_url": "https://api.github.com/users/ghosthamlet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ghosthamlet, \r\n\r\nHave you checked out Agents and Tools from Hugging Face? https://huggingface.co/docs/transformers/v4.30.0/en/transformers_agents", "@amyeroberts Thanks, it looks great. \r\nI found the code is all in src/transformers/tools, seems like it is not as flexible as transformers.\r\nif someone wants to integrate a new agent like `Reflexion: Language Agents with Verbal Reinforcement Learning` https://arxiv.org/abs/2303.11366, can it be as easy as integrate a transformers model?", "@ghosthamlet Easiness is subjective, so it would depend on what you do and don't find easy with the transformers library :) \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports.\r\n\r\nHowever, if it's a request for the model to be implemented, could you open a separate issue with the feature request?", "@amyeroberts Thanks, I understand now. \r\nI will create a separate issue." ]
1,689
1,689
1,689
NONE
null
### Feature request Add advanced Prompt or Agent methods as models to the transformers library, or build a new advanced Prompt or Agent library using an architecture like the transformers library. ### Motivation There are many Prompt/Agent libraries like: https://langchain.com/ https://github.com/yoheinakajima/babyagi https://github.com/Significant-Gravitas/Auto-GPT, but they all have too many abstractions in code or are just frameworks for automating tasks. The transformers/diffusers libraries have simple abstractions; if there were an advanced Prompt or Agent library like transformers/diffusers, it would be much better for researchers to do research work, and I think Hugging Face is the best placed to build this library. ### Your contribution Currently no.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24815/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24814/comments
https://api.github.com/repos/huggingface/transformers/issues/24814/events
https://github.com/huggingface/transformers/issues/24814
1,803,909,670
I_kwDOCUB6oc5rhXom
24,814
Is there a way to use Blip2Model for Zero-Shot Classification?
{ "login": "danielamassiceti", "id": 15345596, "node_id": "MDQ6VXNlcjE1MzQ1NTk2", "avatar_url": "https://avatars.githubusercontent.com/u/15345596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielamassiceti", "html_url": "https://github.com/danielamassiceti", "followers_url": "https://api.github.com/users/danielamassiceti/followers", "following_url": "https://api.github.com/users/danielamassiceti/following{/other_user}", "gists_url": "https://api.github.com/users/danielamassiceti/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielamassiceti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielamassiceti/subscriptions", "organizations_url": "https://api.github.com/users/danielamassiceti/orgs", "repos_url": "https://api.github.com/users/danielamassiceti/repos", "events_url": "https://api.github.com/users/danielamassiceti/events{/privacy}", "received_events_url": "https://api.github.com/users/danielamassiceti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @danielamassiceti, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "Thanks, will repost it there!" ]
1,689
1,689
1,689
NONE
null
Hi there I am attempting to adapt the Blip2Model for a zero-shot classification task as follows: - N text sentences/classes --> x = N text embeddings - 1 test image -> y = 1 image embedding - soft-max(dot-product(x, y)) to get the probabilities over classes This is my solution so far: ``` def get_img_embedding(images]): """ Turn a list of image inputs into tensor of embedding vectors images should be of shape (batch_size, channels, height, width) """ image_tensors = blip2model.preproc([ Image.open(i.path) # type: ignore for i in images], return_tensors='pt') # Dict with 'pixel_values' entry of size batch_size, C, H, W image_tensors = image_tensors.to(self.device, torch.float16) # type: ignore # pass images through the vision model and then the qformer to get query-conditional image features query_outputs = blip2model.get_qformer_features(**image_tensors) # tuple (last_hidden_state, pooler_output) query_output = query_outputs['pooler_output'] # (batch_size, hidden_size) # project query-conditional image features into language space image_features = blip2model.language_projection(query_output) # shape (batch_size, hidden_size) image_features /= image_features.norm(dim=-1, keepdim=True) return image_features def get_text_embedding(texts): """ Turn a list of text inputs into tensor of embedding vectors. texts is a list of strings to embed. """ text_tokens = blip2model.text_tokenizer(texts, padding=True, return_tensors='pt') text_tokens = text_tokens.to(self.device) text_outputs = blip2model.get_text_features(**text_tokens, output_hidden_states=True) # type: ignore text_features = text_outputs['hidden_states'][-1][:, 0, :] # extract [CLS] embedding from last hidden state, shape (batch_size, hidden_size) text_features /= text_features.norm(dim=-1, keepdim=True) return text_features ``` Then I would take the dot product between the two. Am I on the right track? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24814/timeline
completed
null
null
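The question above describes the final scoring step, a softmax over dot products of the normalized embeddings, but stops before implementing it. A minimal sketch of that step, assuming `image_features` of shape (1, hidden_size) and `text_features` of shape (N, hidden_size) as returned by the helper functions in the issue:

```python
import torch

def zero_shot_probs(image_features: torch.Tensor, text_features: torch.Tensor) -> torch.Tensor:
    # Inputs are assumed to be L2-normalized already, as in the issue's helpers.
    logits = image_features @ text_features.T   # (1, N) cosine similarities
    return logits.softmax(dim=-1)               # probabilities over the N candidate classes
```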
https://api.github.com/repos/huggingface/transformers/issues/24813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24813/comments
https://api.github.com/repos/huggingface/transformers/issues/24813/events
https://github.com/huggingface/transformers/issues/24813
1,803,853,048
I_kwDOCUB6oc5rhJz4
24,813
Replacing agent image_qa tool with InstructBLIP
{ "login": "austinmw", "id": 12224358, "node_id": "MDQ6VXNlcjEyMjI0MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/12224358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/austinmw", "html_url": "https://github.com/austinmw", "followers_url": "https://api.github.com/users/austinmw/followers", "following_url": "https://api.github.com/users/austinmw/following{/other_user}", "gists_url": "https://api.github.com/users/austinmw/gists{/gist_id}", "starred_url": "https://api.github.com/users/austinmw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/austinmw/subscriptions", "organizations_url": "https://api.github.com/users/austinmw/orgs", "repos_url": "https://api.github.com/users/austinmw/repos", "events_url": "https://api.github.com/users/austinmw/events{/privacy}", "received_events_url": "https://api.github.com/users/austinmw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @austinmw, thanks for raising this issue! \r\n\r\nThis is because the model is being loaded [using the AutoModelForVisualQuestionAnswering](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/tools/image_question_answering.py#L39C2-L39C2) class, [which only has Vilt listed as a compatible model](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/models/auto/modeling_auto.py#L818).\r\n\r\nBLIP can be loaded using `AutoModelForVision2Seq` - listed here with [other models like Pix2Struct](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/models/auto/modeling_auto.py#L543C1-L543C37). \r\n\r\nThe reason for this distinction is that the 'answers' from these models are obtained in two different ways. ViltForQuestionAnswering is really a classifier, and predicts the class most likely to be the answer to the question. You can see an example of the [categories here](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa/blob/d0a1f6ab88522427a7ae76ceb6e1e1e7b68a1d08/config.json#L9). Whereas BLIP generates an answer using causal language modeling.\r\n\r\nIf you wish to use BLIP, you can easily define your own `ImageQuestionAnsweringTool` which you can modify to suit the behaviour (and generation strategy) you desire e.g.:\r\n\r\n```python\r\nimport requests\r\n\r\nfrom PIL import Image\r\n\r\nfrom transformers import AutoModelForVision2Seq, AutoProcessor\r\nfrom transformers.tools import PipelineTool\r\nfrom transformers.utils import requires_backends\r\n\r\nclass ImageQuestionAnsweringTool(PipelineTool):\r\n default_checkpoint = \"Salesforce/blip2-opt-2.7b\"\r\n description = (\r\n \"This is a tool that answers a question about an image. It takes an input named `image` which should be the \"\r\n \"image containing the information, as well as a `question` which should be the question in English. It \"\r\n \"returns a text that is the answer to the question.\"\r\n )\r\n name = \"image_qa\"\r\n pre_processor_class = AutoProcessor\r\n model_class = AutoModelForVision2Seq\r\n\r\n inputs = [\"image\", \"text\"]\r\n outputs = [\"text\"]\r\n\r\n def __init__(self, *args, **kwargs):\r\n requires_backends(self, [\"vision\"])\r\n super().__init__(*args, **kwargs)\r\n\r\n def encode(self, image, question: str):\r\n return self.pre_processor(image, question, return_tensors=\"pt\")\r\n\r\n def forward(self, inputs):\r\n outputs = self.model.generate(**inputs, max_new_tokens=50)\r\n return outputs\r\n\r\n def decode(self, outputs):\r\n return self.pre_processor.batch_decode(outputs, skip_special_tokens=True)[0]\r\n\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nquestion = \"Question: what is the colour of the sofa? Answer: \"\r\n\r\ntool = ImageQuestionAnsweringTool()\r\nresponse = tool(image=image, question=question)\r\nprint(response)\r\n```\r\n\r\n\r\ncc @LysandreJik", "Ah I see, thanks, and I really appreciate the example you provided!! 🙏🏻\r\n\r\nOne follow-up question though, how can I set `load_in_4bit=True, torch_dtype=torch.float16` for the model_class?", "@austinmw You can pass in an [instantiated model](https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/agent#transformers.PipelineTool.model) when instantiating the tool. " ]
1,689
1,689
1,689
NONE
null
### System Info Hi, I'm attempting to use InstructBLIP for image QA: ```python from transformers import HfAgent, load_tool from diffusers.utils import load_image agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png") agent.toolbox["image_qa"] = load_tool(task_or_repo_id="image-question-answering", model_repo_id="Salesforce/instructblip-vicuna-13b") agent.run("what colors are in this image?", image=image) ``` However this gives me the error: > ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForVisualQuestionAnswering. > Model type should be one of ViltConfig. I'm not sure if I'm doing this incorrectly, it's unsupported, or there's a bug. ### Who can help? @amyeroberts @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code above ### Expected behavior Was hoping that InstructBLIP would answer the question instead of VilT
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24813/timeline
completed
null
null
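The last reply above suggests passing an instantiated model when creating the tool, but shows no code. A hedged sketch of what that could look like for the custom `ImageQuestionAnsweringTool` defined in the first comment; the 4-bit and fp16 arguments come from the question and assume `bitsandbytes` is installed:

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

checkpoint = "Salesforce/blip2-opt-2.7b"
model = AutoModelForVision2Seq.from_pretrained(
    checkpoint,
    load_in_4bit=True,          # requires bitsandbytes
    torch_dtype=torch.float16,
)
processor = AutoProcessor.from_pretrained(checkpoint)

# PipelineTool accepts pre-built model / pre_processor objects instead of loading from
# `default_checkpoint`; ImageQuestionAnsweringTool is the class from the comment above.
tool = ImageQuestionAnsweringTool(model=model, pre_processor=processor)
```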
https://api.github.com/repos/huggingface/transformers/issues/24812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24812/comments
https://api.github.com/repos/huggingface/transformers/issues/24812/events
https://github.com/huggingface/transformers/pull/24812
1,803,759,598
PR_kwDOCUB6oc5VddQR
24,812
Fixing double `use_auth_token.pop` (preventing private models from being visible).
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Should fix: https://github.com/huggingface/transformers/issues/14334#issuecomment-1634527833 Repro: Have a private repo, with `vocab.json` (spread out files for the tokenizer) and use `AutoTokenizer.from_pretrained(..., use_auth_token="token")`. Not sure if we already have private visibility tests to maybe add/fix some so we can detect this in our suite. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24812", "html_url": "https://github.com/huggingface/transformers/pull/24812", "diff_url": "https://github.com/huggingface/transformers/pull/24812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24812.patch", "merged_at": 1689340802000 }
https://api.github.com/repos/huggingface/transformers/issues/24811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24811/comments
https://api.github.com/repos/huggingface/transformers/issues/24811/events
https://github.com/huggingface/transformers/issues/24811
1,803,653,245
I_kwDOCUB6oc5rgZB9
24,811
Unable to run compute_transition_scores : facing 'CodeT5pConfig' object has no attribute 'vocab_size' error
{ "login": "MansiShinde", "id": 29672533, "node_id": "MDQ6VXNlcjI5NjcyNTMz", "avatar_url": "https://avatars.githubusercontent.com/u/29672533?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MansiShinde", "html_url": "https://github.com/MansiShinde", "followers_url": "https://api.github.com/users/MansiShinde/followers", "following_url": "https://api.github.com/users/MansiShinde/following{/other_user}", "gists_url": "https://api.github.com/users/MansiShinde/gists{/gist_id}", "starred_url": "https://api.github.com/users/MansiShinde/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MansiShinde/subscriptions", "organizations_url": "https://api.github.com/users/MansiShinde/orgs", "repos_url": "https://api.github.com/users/MansiShinde/repos", "events_url": "https://api.github.com/users/MansiShinde/events{/privacy}", "received_events_url": "https://api.github.com/users/MansiShinde/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "#### Facing the below error:\r\n```\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-22-821b8f7c4aac>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 transition_scores = model.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores,gen_tokens.beam_indices, normalize_logits=True)\r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in __getattribute__(self, key)\r\n 259 if key != \"attribute_map\" and key in super().__getattribute__(\"attribute_map\"):\r\n 260 key = super().__getattribute__(\"attribute_map\")[key]\r\n--> 261 return super().__getattribute__(key)\r\n 262 \r\n 263 def __init__(self, **kwargs):\r\n\r\nAttributeError: 'CodeT5pConfig' object has no attribute 'vocab_size'\r\n```\r\n\r\nI checked the config file of CodeT5p-2B model on https://huggingface.co/Salesforce/codet5p-2b/blob/main/config.json, it has a vocab_size attribute, I believe this is the right config file to check. I am not sure what is causing this error. \r\n\r\nCould you please help?", "Hey @MansiShinde, the issue is that the `CodeT5pConfig` class defined [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/70bb08afa3d6f081b347e67752ca8e031a35ac4a/configuration_codet5p.py#L71-L90) does not have a `vocab_size` attribute but rather `encoder` and `decoder` attributes of type `CodeT5pModuleConfig` which then holds the vocab size attribute. You should be able calculate transition scores by using the model's decoder like this:\r\n```\r\ntransition_scores = model.decoder.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores,gen_tokens.beam_indices, normalize_logits=True)\r\n```", "> Hey @MansiShinde, the issue is that the `CodeT5pConfig` class defined [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/70bb08afa3d6f081b347e67752ca8e031a35ac4a/configuration_codet5p.py#L71-L90) does not have a `vocab_size` attribute but rather `encoder` and `decoder` attributes of type `CodeT5pModuleConfig` which then holds the vocab size attribute. You should be able calculate transition scores by using the model's decoder like this:\r\n> \r\n> ```\r\n> transition_scores = model.decoder.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores,gen_tokens.beam_indices, normalize_logits=True)\r\n> ```\r\n\r\n\r\nOhh okay got it. Thanks for the help @fadynakhla !!\r\n", "Thank you for jumping in, @fadynakhla 🚀 " ]
1,689
1,689
1,689
NONE
null
### System Info Hello Team , your help is appreciated in the below issue. I am running the below code snippet on Google Colab. - `transformers` version: 4.30.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForSeq2SeqLM, AutoTokenizer DEVICE = "cuda" if torch.cuda.is_available() else "cpu" beam_size = 5 max_len = 500 model = "Salesforce/codet5p-2b" tokenizer = AutoTokenizer.from_pretrained(model) model = AutoModelForSeq2SeqLM.from_pretrained(model, trust_remote_code=True, torch_dtype=torch.float16, low_cpu_mem_usage=True) model.eval() model.to(DEVICE) prompt = "\n\n\ndef sum_squares(lst):\n \"\"\"\"\n This function will take a list of integers. For all entries in the list, the function shall square the integer entry if its index is a \n multiple of 3 and will cube the integer entry if its index is a multiple of 4 and not a multiple of 3. The function will not \n change the entries in the list whose indexes are not a multiple of 3 or 4. The function shall then return the sum of all entries. \n \n Examples:\n For lst = [1,2,3] the output should be 6\n For lst = [] the output should be 0\n For lst = [-1,-5,2,-1,-5] the output should be -126\n \"\"\"\n" prompt = prompt.replace(' ', '\t') prompt_batch_decoder = [prompt] encoding_decoder = tokenizer(prompt_batch_decoder, return_tensors="pt", truncation=True, max_length=max_len).to(DEVICE) input_ids=encoding_decoder['input_ids'] with torch.no_grad(): gen_tokens = model.generate(**encoding_decoder, decoder_input_ids=encoding_decoder['input_ids'], max_length=max_len, num_beams= beam_size, do_sample=False, num_return_sequences=1, return_dict_in_generate=True, output_scores=True, early_stopping=True) transition_scores = model.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores,gen_tokens.beam_indices, normalize_logits=True) ``` ### Expected behavior transition_scores are computed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24811/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24810/comments
https://api.github.com/repos/huggingface/transformers/issues/24810/events
https://github.com/huggingface/transformers/pull/24810
1,803,510,485
PR_kwDOCUB6oc5VcmaM
24,810
Use _BaseAutoModelClass's register method
{ "login": "fadynakhla", "id": 67917337, "node_id": "MDQ6VXNlcjY3OTE3MzM3", "avatar_url": "https://avatars.githubusercontent.com/u/67917337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fadynakhla", "html_url": "https://github.com/fadynakhla", "followers_url": "https://api.github.com/users/fadynakhla/followers", "following_url": "https://api.github.com/users/fadynakhla/following{/other_user}", "gists_url": "https://api.github.com/users/fadynakhla/gists{/gist_id}", "starred_url": "https://api.github.com/users/fadynakhla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fadynakhla/subscriptions", "organizations_url": "https://api.github.com/users/fadynakhla/orgs", "repos_url": "https://api.github.com/users/fadynakhla/repos", "events_url": "https://api.github.com/users/fadynakhla/events{/privacy}", "received_events_url": "https://api.github.com/users/fadynakhla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks a lot!\r\n\r\nGlad to help!\r\n\r\nAlso, what are your thoughts on adding some typehinting to the `_model_mapping` variable e.g. \r\n```\r\nclass _BaseAutoModelClass:\r\n # Base class for auto models.\r\n _model_mapping: Optional[\"_LazyAutoMapping\"] = None\r\n```\r\ninstead of\r\n```\r\nclass _BaseAutoModelClass:\r\n # Base class for auto models.\r\n _model_mapping = None\r\n```\r\nit took quite some time during our discussion yesterday for me to track down what type of object `_model_mapping` was", "_The documentation is not available anymore as the PR was closed or merged._", "This is all internal code, so we don't really document types as rigorously as in public facing classes :-)", "Sounds good just thought I'd ask" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Switching `_BaseAutoModelClass`'s `from_pretrained` and `from_config` to use the register classmethod that it defines rather than using the `_LazyAutoMapping` register method directly. This makes use of the additional consistency check within `_BaseAutoModelClass`'s register method. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? Yes - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Discussed briefly in [#24737 ](https://github.com/huggingface/transformers/issues/24737) - [ ] Did you make sure to update the documentation with your changes? No public methods/classes changed - [ ] Did you write any new necessary tests? None necessary ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24810", "html_url": "https://github.com/huggingface/transformers/pull/24810", "diff_url": "https://github.com/huggingface/transformers/pull/24810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24810.patch", "merged_at": 1689276292000 }
https://api.github.com/repos/huggingface/transformers/issues/24809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24809/comments
https://api.github.com/repos/huggingface/transformers/issues/24809/events
https://github.com/huggingface/transformers/pull/24809
1,803,212,583
PR_kwDOCUB6oc5VblEA
24,809
Fix typo 'submosules'
{ "login": "dymil", "id": 30931139, "node_id": "MDQ6VXNlcjMwOTMxMTM5", "avatar_url": "https://avatars.githubusercontent.com/u/30931139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dymil", "html_url": "https://github.com/dymil", "followers_url": "https://api.github.com/users/dymil/followers", "following_url": "https://api.github.com/users/dymil/following{/other_user}", "gists_url": "https://api.github.com/users/dymil/gists{/gist_id}", "starred_url": "https://api.github.com/users/dymil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dymil/subscriptions", "organizations_url": "https://api.github.com/users/dymil/orgs", "repos_url": "https://api.github.com/users/dymil/repos", "events_url": "https://api.github.com/users/dymil/events{/privacy}", "received_events_url": "https://api.github.com/users/dymil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Fixes a one-character typo in the docs for large-model loading. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24809", "html_url": "https://github.com/huggingface/transformers/pull/24809", "diff_url": "https://github.com/huggingface/transformers/pull/24809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24809.patch", "merged_at": 1689263813000 }
https://api.github.com/repos/huggingface/transformers/issues/24808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24808/comments
https://api.github.com/repos/huggingface/transformers/issues/24808/events
https://github.com/huggingface/transformers/pull/24808
1,803,177,272
PR_kwDOCUB6oc5Vbdjz
24,808
Remove Falcon docs for the release until TGI is ready
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed, my bad!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
MEMBER
null
Make sure we're not advertising docs for the model until we're ready to support it! cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24808", "html_url": "https://github.com/huggingface/transformers/pull/24808", "diff_url": "https://github.com/huggingface/transformers/pull/24808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24808.patch", "merged_at": 1689265678000 }
https://api.github.com/repos/huggingface/transformers/issues/24807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24807/comments
https://api.github.com/repos/huggingface/transformers/issues/24807/events
https://github.com/huggingface/transformers/pull/24807
1,803,155,513
PR_kwDOCUB6oc5VbY1i
24,807
Run hub tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Fixed the tests that did not work, so can merge :-)" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? It looks like the hub tests are currently not running :grimacing: This is because the env variable for staging test is not set to `True`, due to me when we moved the circle CI to a dynamic config. Hopefully nothing is broken...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24807", "html_url": "https://github.com/huggingface/transformers/pull/24807", "diff_url": "https://github.com/huggingface/transformers/pull/24807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24807.patch", "merged_at": 1689276346000 }
https://api.github.com/repos/huggingface/transformers/issues/24806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24806/comments
https://api.github.com/repos/huggingface/transformers/issues/24806/events
https://github.com/huggingface/transformers/pull/24806
1,803,112,535
PR_kwDOCUB6oc5VbPcR
24,806
Add accelerate version in transformers-cli env
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I've added the config too, matching the accelerate logic: a specific config file can be passed in, or the default config is read if it exists. LMKWYT :) " ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? What it says on the tin. Now that Trainer uses accelerate, I and others often have to ask for the accelerate version in issues. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
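For illustration, the kind of information the command now reports can be gathered with a few lines of Python. This is only a sketch, not the PR's implementation, and the default config path is an assumption about where accelerate usually stores it:

```python
import os

import accelerate

# Report the installed accelerate version, as transformers-cli env now does.
print("Accelerate version:", accelerate.__version__)

# Assumed default location of the accelerate config; the real command may look elsewhere
# or accept an explicit config file path.
default_config = os.path.expanduser("~/.cache/huggingface/accelerate/default_config.yaml")
if os.path.isfile(default_config):
    with open(default_config) as f:
        print("Accelerate config:\n", f.read())
else:
    print("Accelerate config: not found")
```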
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24806", "html_url": "https://github.com/huggingface/transformers/pull/24806", "diff_url": "https://github.com/huggingface/transformers/pull/24806.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24806.patch", "merged_at": 1689263420000 }
https://api.github.com/repos/huggingface/transformers/issues/24805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24805/comments
https://api.github.com/repos/huggingface/transformers/issues/24805/events
https://github.com/huggingface/transformers/pull/24805
1,802,825,990
PR_kwDOCUB6oc5VaQUG
24,805
Fix MobileVitV2 doctest checkpoint
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> So in MobileVitV2 doc example, the torch has to be imported?\r\n\r\nNo, not to have the test pass. I added it because it wouldn't run if you copy-pasted the snippet from the docs.\r\n\r\nI've added it to MobileVit v1 too now for the same reason :)" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Doctests CI currently fails on MobileVitV2 tests. The doctest was copied from the V1 MobileVit, and the checkpoints didn't match. Remove the copied-from comment, as there's just some very simple and standard model head logic copied. Interestingly, I'd expect the [example for MobileVit V1](https://github.com/huggingface/transformers/blob/21946a8cf4a273f35ac2f3a53edafc398699f527/src/transformers/models/mobilevit/modeling_mobilevit.py#L1027) to fail, as it doesn't import torch. [It is included in the doc tests](https://github.com/huggingface/transformers/blob/21946a8cf4a273f35ac2f3a53edafc398699f527/utils/documentation_tests.txt#L300). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
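For context, a doctest snippet of this kind looks roughly like the following sketch. The checkpoint name is an assumption (the V1 `apple/mobilevit-small` checkpoint, not necessarily the one the PR switches to), and the explicit `import torch` is the line whose absence is discussed above:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileViTForImageClassification

# Standard test image used throughout the transformers docs.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Without the `import torch` above, copy-pasting this snippet would fail at torch.no_grad().
print(model.config.id2label[logits.argmax(-1).item()])
```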
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24805", "html_url": "https://github.com/huggingface/transformers/pull/24805", "diff_url": "https://github.com/huggingface/transformers/pull/24805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24805.patch", "merged_at": 1689256080000 }
https://api.github.com/repos/huggingface/transformers/issues/24804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24804/comments
https://api.github.com/repos/huggingface/transformers/issues/24804/events
https://github.com/huggingface/transformers/pull/24804
1,802,811,269
PR_kwDOCUB6oc5VaNII
24,804
Support RefinedWebModel as a model_type for Falcon
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No this is not enough. If we go down that road we need to erase the `RefinedWeb` model type on loading to replace it with `falcon`, so that a new saved version does not keep it.", "_The documentation is not available anymore as the PR was closed or merged._", "This already happens! The model type that is saved in `config.json` is set in `configuration_falcon.py`, with the line `model_type = \"falcon\"`. If you load a model with model_type `RefinedWebModel` and save it, the output `config.json` has model_type `falcon`.", "Closing because just changing the `model_name` field won't be enough anyway - we also need to revert all the `config.json` parameters." ]
1,689
1,689
1,689
MEMBER
null
This PR allows us to temporarily revert the model_type for Falcon repos to fix some issues. cc @sgugger @LysandreJik @Narsil @OlivierDehaene @slippylolo
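The comment thread above hinges on `model_type` being a class attribute of the config class rather than something read back from `config.json`. A minimal sketch of that mechanism, using `BertConfig` as a stand-in since the Falcon classes were still in flux at the time:

```python
from transformers import BertConfig

# model_type lives on the config *class*; it is not taken from the loaded file.
print(BertConfig.model_type)  # "bert"

cfg = BertConfig()
# The re-saved config.json contains "model_type": "bert", whatever the original file said.
cfg.save_pretrained("./tmp-config")
```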
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24804", "html_url": "https://github.com/huggingface/transformers/pull/24804", "diff_url": "https://github.com/huggingface/transformers/pull/24804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24804.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24803/comments
https://api.github.com/repos/huggingface/transformers/issues/24803/events
https://github.com/huggingface/transformers/pull/24803
1,802,760,427
PR_kwDOCUB6oc5VaB09
24,803
Lag llama
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Implementation of a general time series forecaster and classifier using only the target values.
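The PR description only says the model works from the target values alone; judging by the name, lagged copies of the target presumably serve as input features. A toy sketch of that idea (my illustration, not the PR's code):

```python
import torch

# Build "lags as features" for a univariate series: each example pairs the value
# at time t with a handful of its own past values.
lags = [1, 2, 3, 7]
series = torch.arange(1, 31, dtype=torch.float32)  # dummy target values

max_lag = max(lags)
inputs = torch.stack(
    [series[max_lag - lag : len(series) - lag] for lag in lags], dim=-1
)
targets = series[max_lag:]
print(inputs.shape, targets.shape)  # torch.Size([23, 4]) torch.Size([23])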
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24803/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24803", "html_url": "https://github.com/huggingface/transformers/pull/24803", "diff_url": "https://github.com/huggingface/transformers/pull/24803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24803.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24802/comments
https://api.github.com/repos/huggingface/transformers/issues/24802/events
https://github.com/huggingface/transformers/issues/24802
1,802,193,828
I_kwDOCUB6oc5ra0uk
24,802
bug when Bert train on multi gpus
{ "login": "z379035389", "id": 48674444, "node_id": "MDQ6VXNlcjQ4Njc0NDQ0", "avatar_url": "https://avatars.githubusercontent.com/u/48674444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/z379035389", "html_url": "https://github.com/z379035389", "followers_url": "https://api.github.com/users/z379035389/followers", "following_url": "https://api.github.com/users/z379035389/following{/other_user}", "gists_url": "https://api.github.com/users/z379035389/gists{/gist_id}", "starred_url": "https://api.github.com/users/z379035389/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/z379035389/subscriptions", "organizations_url": "https://api.github.com/users/z379035389/orgs", "repos_url": "https://api.github.com/users/z379035389/repos", "events_url": "https://api.github.com/users/z379035389/events{/privacy}", "received_events_url": "https://api.github.com/users/z379035389/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The normal BERT model can train on multiple GPUs, the bug is thus likely due to the modifications you made. You should ask no the [forums](https://discuss.huggingface.co/) to help debug your code.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I had alter the BertEncoder defined in modeling_bert.py, like below: ``` class BertEncoder(nn.Module): def __init__(self, config, meta_layer_index=None, scale=1): super().__init__() self.config = config self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False # added by me self.meta_layer = BertLayer(config) self.meta_layer_index = meta_layer_index self.scale = scale self.optimizer_for_meta_layer = torch.optim.SGD(self.meta_layer.parameters(), lr=1e-5, weight_decay=0.005) self.inputs_for_metalayer = None self.outputs_for_metalayer = None self.meta_layer_outputs = None self.st_loss = None def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None self.inputs_for_metalayer = (hidden_states.clone().detach(), head_mask[0] if head_mask is not None else None, past_key_values[0] if past_key_values is not None else None) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
) use_cache = False next_decoder_cache = () if use_cache else None # for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None past_key_value = past_key_values[i] if past_key_values is not None else None if self.gradient_checkpointing and self.training: def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, past_key_value, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, ) else: layer_outputs = layer_module( hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, ) # layer_outputs : Tuple[torch.Tensor] hidden_states = layer_outputs[0] #added by me if i == self.meta_layer_index - 1: self.inputs_for_metalayer = (hidden_states.clone().detach(), layer_head_mask, past_key_value) if i == self.meta_layer_index + self.scale - 1: self.outputs_for_metalayer = hidden_states.clone().detach() if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if self.config.add_cross_attention: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) if self.inputs_for_metalayer is not None and self.outputs_for_metalayer is not None: self.meta_layer_outputs = self.meta_layer( self.inputs_for_metalayer[0], attention_mask, self.inputs_for_metalayer[1], encoder_hidden_states, encoder_attention_mask, self.inputs_for_metalayer[2], output_attentions, )[0] self.st_loss = torch.mean((self.meta_layer_outputs - self.outputs_for_metalayer) ** 2) if self.st_loss.requires_grad is True: self.optimizer_for_meta_layer.zero_grad() self.st_loss.backward() self.optimizer_for_meta_layer.step() if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, next_decoder_cache, all_hidden_states, all_self_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, cross_attentions=all_cross_attentions, ) ``` then I just train a Bert Model on multi gpus, and that didn't work because optimizer_for_meta_layer is None. But it works using only one gpu. 
The train code like below: ``` from transformers import BertLayer,BertConfig,BertModel,BertForMaskedLM from transformers import BertForMaskedLM from transformers import BertConfig from transformers import BertTokenizer import datasets import json import sys import copy from datasets import load_dataset BertBaseconfig = BertConfig() BertBase = BertForMaskedLM(BertBaseconfig) layer = BertLayer(BertBaseconfig) MetaModel = BertForMaskedLM.from_pretrained('/home/wanzhipeng/deepincubation/MetaModel_bert_wiki/checkpoint-36500') MetaModelEncoderBertLayer = MetaModel.bert.encoder.layer BaseModelEncoderBertLayer = BertBase.bert.encoder.layer BaseLayerNums = BaseModelEncoderBertLayer.__len__() MetaLayerNums = MetaModelEncoderBertLayer.__len__() Submodules = [] # def initSubmodules(): global Submodules Submodules = [] scale = BaseLayerNums // MetaLayerNums for i in range(MetaLayerNums): layers = [BertLayer(BertBaseconfig) for _ in range(scale)] Submodule = copy.deepcopy(MetaModel) Submodule.bert.encoder.layer = Submodule.bert.encoder.layer[0:i+1] + layers + Submodule.bert.encoder.layer[i+1:] del Submodule.bert.encoder.layer[i] Submodules.append(Submodule) def tokenize_function(examples): return tokenizer(examples["text"]) initSubmodules() model=Submodules[0] model.config.num_hidden_layers = 6 model.bert.encoder.meta_layer_index = 0 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',use_fast=True) datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) block_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. 
result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, ) from transformers import Trainer, TrainingArguments from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) training_args = TrainingArguments( output_dir="sub1", # output directory to where save model checkpoint evaluation_strategy="steps", # evaluate each `logging_steps` steps logging_strategy="steps", overwrite_output_dir=True, num_train_epochs=10, # number of training epochs, feel free to tweak logging_steps=10, # evaluate, log and save model checkpoints every 1000 step save_steps=10, load_best_model_at_end=True, # whether to load the best model (in terms of loss) at the end of training save_total_limit=3, # whether you don't have much space so you let only 3 model weights saved in the disk learning_rate=1e-5, weight_decay=0.01, warmup_steps=10000, per_device_train_batch_size=64, per_gpu_eval_batch_size=64, ) from transformers import Trainer, TrainingArguments,EarlyStoppingCallback,TrainerCallback from transformers import DataCollatorForLanguageModeling import torch ## 不同卡的情况下会出问题 class CallbackForMetaLayer(TrainerCallback): def __init__(self): super().__init__() self.meta_layer_outputs = None def on_step_begin(self, args, state, control, model=None, **kwargs): self.meta_layer_outputs = [model.bert.encoder.meta_layer_outputs] print("step_begin:") print(id(self.meta_layer_outputs[0])) def on_step_end(self, args, state, control, model=None, **kwargs): print("step_end:") print(id(self.meta_layer_outputs[0])) # print("*********************************************************") # print(model.bert.encoder.meta_layer_outputs) # print("*********************************************************") # print(model.bert.encoder.outputs_for_metalayer) # print("*********************************************************") # model.bert.encoder.st_loss = torch.mean((model.bert.encoder.meta_layer_outputs - model.bert.encoder.outputs_for_metalayer) ** 2) # model.bert.encoder.optimizer_for_meta_layer.zero_grad() # model.bert.encoder.st_loss.backward() # model.bert.encoder.optimizer_for_meta_layer.step() trainer = Trainer( model=model.to("cuda"), args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], data_collator=data_collator, callbacks = [EarlyStoppingCallback(early_stopping_patience=5)], ) # trainer.train(resume_from_checkpoint=True) trainer.train() ``` error : Traceback (most recent call last): File "test2/sub1.py", line 130, in <module> trainer.train() File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train return inner_training_loop( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 2699, in training_step loss = self.compute_loss(model, inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 2731, in compute_loss outputs = model(**inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl 
return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/_utils.py", line 461, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1393, in forward outputs = self.bert( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1055, in forward encoder_outputs = self.encoder( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 643, in forward self.meta_layer_outputs = self.meta_layer( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 495, in forward self_attention_outputs = self.attention( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 425, in forward self_outputs = self.self( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 284, in forward mixed_query_layer = self.query(hidden_states) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: Output 101 of BroadcastBackward is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. 
Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one. ### Expected behavior I think it should be possible to train on multiple GPUs.
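The traceback above comes from `nn.DataParallel`. One likely contributing factor (an inference, not a confirmed diagnosis) is that the modified encoder stores tensors on `self` and runs an optimizer step inside `forward`, while DataParallel executes `forward` on throwaway per-device replicas. A minimal sketch of that pitfall with a hypothetical module (the effect only shows with two or more visible GPUs):

```python
import torch
from torch import nn

class StatefulLayer(nn.Module):
    """Hypothetical layer that mutates its own attributes inside forward."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.cached = None

    def forward(self, x):
        self.cached = x.detach()  # this assignment happens on a per-device replica
        return self.linear(x)

# With >= 2 GPUs, DataParallel re-replicates the module on every forward call.
model = nn.DataParallel(StatefulLayer().cuda())
_ = model(torch.randn(8, 4).cuda())
print(model.module.cached)  # still None: replica state is discarded after forward
```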
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24802/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24801/comments
https://api.github.com/repos/huggingface/transformers/issues/24801/events
https://github.com/huggingface/transformers/issues/24801
1,802,087,328
I_kwDOCUB6oc5raaug
24,801
Bug on compute_transition_scores, inconsistency between two ways of evaluating probabilities
{ "login": "hongzhoulin89", "id": 29802555, "node_id": "MDQ6VXNlcjI5ODAyNTU1", "avatar_url": "https://avatars.githubusercontent.com/u/29802555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hongzhoulin89", "html_url": "https://github.com/hongzhoulin89", "followers_url": "https://api.github.com/users/hongzhoulin89/followers", "following_url": "https://api.github.com/users/hongzhoulin89/following{/other_user}", "gists_url": "https://api.github.com/users/hongzhoulin89/gists{/gist_id}", "starred_url": "https://api.github.com/users/hongzhoulin89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hongzhoulin89/subscriptions", "organizations_url": "https://api.github.com/users/hongzhoulin89/orgs", "repos_url": "https://api.github.com/users/hongzhoulin89/repos", "events_url": "https://api.github.com/users/hongzhoulin89/events{/privacy}", "received_events_url": "https://api.github.com/users/hongzhoulin89/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Usually, `model.generate` does some post-process on the logits. As you can see in your code snippet, there are arguments you passed\r\n```python\r\n do_sample=True, \r\n top_p = 0.9, \r\n top_k = 15,\r\n```\r\nIf we just go through the model outputs (your second way) and compute `log_softmax` on the model raw logits, this value doesn't go through the postprocess. So the value won't be the same.\r\n\r\nStill tag @gante to see if he has further comments.", "> Usually, `model.generate` does some post-process on the logits. As you can see in your code snippet, there are arguments you passed\r\n> \r\n> ```python\r\n> do_sample=True, \r\n> top_p = 0.9, \r\n> top_k = 15,\r\n> ```\r\n> \r\n> If we just go through the model outputs (your second way) and compute `log_softmax` on the model raw logits, this value doesn't go through the postprocess. So the value won't be the same.\r\n> \r\n> Still tag @gante to see if he has further comments.\r\n\r\nThanks for the prompt response! \r\n\r\nIt sounds like there are some postprocess steps affecting the logits/probability, I am curious what are those postprocess mechanism. Would you mind provide some more context on what the postprocess is trying to achieve, or maybe point me to the source code. \r\n\r\nFrom my understanding, the part on` do_sample, top_p, top_k ` is only affecting the sampling strategy, not the underlying probability, or maybe it re-normalize the conditional probability? Thanks! ", "You might be right regarding @hongzhoulin89 . It has been sometime I haven't deal with those arguments. Let me take a look, unless our generation super expert @gante faster than me for a comment! ", "@hongzhoulin89 \r\n\r\nOne place is \r\n\r\nhttps://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L2372-L2375\r\n\r\nor \r\n\r\nhttps://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L2652-L2656\r\n\r\nAnd you can check inside `generate` what `logits_warper = self._get_logits_warper(generation_config)` gives in your case.\r\n\r\nhttps://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L1576-L1588", "@hongzhoulin89 👋 \r\n\r\n@ydshieh said it all -- we often (almost ways, actually) manipulate the logits after the forward pass while generating. There are many reasons to do so, and each reason may add an additional post-processing step. Here are a few examples:\r\n- Whisper has special sequences at the beginning of the generation, to select its mode\r\n- We might want to block certain words from being generated\r\n- We might want to adjust the distribution to be more/less biased towards the most likely tokens\r\n\r\nThey are applied in the places @ydshieh pointed out, and you can check these further docs:\r\n- List of possible manipulations, triggered through the config file: https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationConfig\r\n- Implementation of the logit manipulations: https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py", "Thanks a lot, this is really helpful! " ]
1,689
1,690
1,690
NONE
null
### System Info

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `transformers` version: 4.30.2
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: y
- Using distributed or parallel set-up in script?: n

### Who can help?

@gante

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Colab Link: https://colab.research.google.com/drive/1ldLuHr2h4nSlTv5TGO06iORByJJNjCJ6?usp=sharing

```python
import torch
import sentencepiece
import accelerate
import transformers
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer

if torch.cuda.is_available():
    num_gpus = torch.cuda.device_count()
    device = "cuda"
else:
    device = "cpu"
print(device)

model_path = "openlm-research/open_llama_3b"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, device_map='auto')

input_text = 'Hello, I am frustrated'
n_seq = 1
max_new_tokens = 5

model.eval()
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
print(input_ids)

output_sample = model.generate(input_ids=input_ids,
                               max_new_tokens=max_new_tokens,
                               do_sample=True,
                               top_p=0.9,
                               top_k=15,
                               num_return_sequences=n_seq,
                               return_dict_in_generate=True,
                               output_scores=True)

# Evaluate transition scores using official method
transition_scores = model.compute_transition_scores(output_sample.sequences, output_sample.scores, normalize_logits=True)
print(transition_scores)
print('generated sequence: ', tokenizer.batch_decode(output_sample.sequences, skip_special_tokens=True)[0])

# Take the text generated and re-evaluate the probability
text_generated = tokenizer.batch_decode(output_sample.sequences, skip_special_tokens=True)[0]
generated_input_ids = tokenizer(text_generated, return_tensors="pt").input_ids.to(device)
print(generated_input_ids)

with torch.no_grad():
    model_output = model(generated_input_ids)

# collect the probability of the generated token -- probability at index 0 corresponds to the token at index 1
probs = torch.log_softmax(model_output.logits, dim=-1).detach()
probs = probs[:, :-1, :]
generated_input_ids_shifted = generated_input_ids[:, 1:]
gen_probs = torch.gather(probs, 2, generated_input_ids_shifted[:, :, None]).squeeze(-1)
print(gen_probs[:, -max_new_tokens:])
```

### Expected behavior

I am comparing the transition scores obtained in two ways:

1. The official implementation `model.compute_transition_scores(output_sample.sequences, output_sample.scores, normalize_logits=True)`
2. Another official suggestion by @gante explained in the announcement of the probability generation: https://discuss.huggingface.co/t/announcement-generation-get-probabilities-for-generated-output/30075/17?u=redpig-at-imo

As they are both suggested by Joao, I am expecting the two ways to return the exact same probability; however, this is not the case, which seems weird to me. Am I missing anything, or is this expected?

<img width="1106" alt="Screen Shot 2023-07-12 at 8 28 09 PM" src="https://github.com/huggingface/transformers/assets/29802555/9f8d4b6f-d62a-4fb3-abe9-510c9f28b11e">
<img width="1069" alt="Screen Shot 2023-07-12 at 8 28 02 PM" src="https://github.com/huggingface/transformers/assets/29802555/ff089ce5-da5e-452a-b375-a19d3003a3d6">

Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24801/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24800/comments
https://api.github.com/repos/huggingface/transformers/issues/24800/events
https://github.com/huggingface/transformers/pull/24800
1,802,017,495
PR_kwDOCUB6oc5VXe7W
24,800
Revert "Unpin protobuf in docker file (for daily CI)"
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merge directly so we can have a better CI report in the next run.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24800). All of your documentation changes will be reflected on that endpoint.", "@sgugger Just to let you know I revert a merged PR #24761: There is no super easy way to move (all) the ONNX-related tests to another job. We get stuck with protobuf v3 on daily CI.", "Shouldn't all the ONNX tests be done on the optimum side now?", "Yes. I know you once mentioned to me that we will ignore failing ONNX tests (`tests/onnx`) - but I don't know if you are OK for us to completely remove it (as all of them are failing with protobuf v4).\r\n\r\nBut the story is longer: we have tests like `TFGPT2ModelTest::test_onnx_runtime_optimize` that are defined in individual model test file, and not in `tests/onnx`. Of course, I am happy if the Optimum can take care of this on their side.", "cc @michaelbenayoun Can you confirm it's okay for us to remove all ONNX tests? They all test the deprecated way of using ONNX as far as I know.", "The issue here is that these tests rely on a pretty old release (v1.12.0) of `onnx`: https://github.com/huggingface/transformers/actions/runs/5526985620/jobs/10082374086#step:8:159\r\n\r\nThe last release should be compatible with `protobuf` v4.\r\n\r\nOther than this, it sounds good to me to remove all ONNX tests in Transformers :+1: \r\n@michaelbenayoun and @fxmarty will know better though.", "@regisss Thank you for the heads up ❤️ - but yes it would be great if we can delegate the ONNX testing to Optimum CI.", "@ydshieh @sgugger Yes I believe it is fine to remove the ONNX tests from transformers, as the export in Optimum is now mature, extended and well tested!", "Thanks!" ]
1,689
1,689
1,689
COLLABORATOR
null
Reverts huggingface/transformers#24761. ONNX unfortunately doesn't support using protobuf v4, and our daily CI has many ONNX tests broken after #24761. See [failing jobs](https://github.com/huggingface/transformers/actions/runs/5526985620)
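As noted in the comments, the constraint comes from the `onnx` release used on this CI (1.12.0), which still expects protobuf 3.x. A generic version check, not part of this PR, that shows what a given CI job actually picked up:

```python
import google.protobuf
import onnx

print("protobuf:", google.protobuf.__version__)  # needs to report 3.x for onnx 1.12.0
print("onnx:", onnx.__version__)
```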
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24800", "html_url": "https://github.com/huggingface/transformers/pull/24800", "diff_url": "https://github.com/huggingface/transformers/pull/24800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24800.patch", "merged_at": 1689214785000 }
https://api.github.com/repos/huggingface/transformers/issues/24799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24799/comments
https://api.github.com/repos/huggingface/transformers/issues/24799/events
https://github.com/huggingface/transformers/pull/24799
1,801,987,846
PR_kwDOCUB6oc5VXYll
24,799
Add UnivNet Vocoder Model for Tortoise TTS Diffusers Integration
{ "login": "dg845", "id": 58458699, "node_id": "MDQ6VXNlcjU4NDU4Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/58458699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dg845", "html_url": "https://github.com/dg845", "followers_url": "https://api.github.com/users/dg845/followers", "following_url": "https://api.github.com/users/dg845/following{/other_user}", "gists_url": "https://api.github.com/users/dg845/gists{/gist_id}", "starred_url": "https://api.github.com/users/dg845/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dg845/subscriptions", "organizations_url": "https://api.github.com/users/dg845/orgs", "repos_url": "https://api.github.com/users/dg845/repos", "events_url": "https://api.github.com/users/dg845/events{/privacy}", "received_events_url": "https://api.github.com/users/dg845/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @dg845 if you are planning to add it to the `models` folder, then I think it should have a doc file(`univnet.md`) in the docs.", "For now I've added the UnivNet code to the `/src/transformers/models/univnet/` directory. @sanchit-gandhi, since the UnivNet model isn't technically a transformer model (in that it doesn't use any attention mechanisms), is this the best place to put it? For example, the [`SpeechT5HifiGan`](https://huggingface.co/docs/transformers/main/model_doc/speecht5#transformers.SpeechT5HifiGan) vocoder is in `/src/transformers/models/speecht5/` along with the other SpeechT5 models, but I assume most of the other Tortoise TTS code will go into `diffusers` rather than `transformers`.", "Nice start @dg845! Yep fine to have it as a standalone model - we have ResNet in transformers as well which is not strictly attention-based.", "Hi @sanchit-gandhi, I think the PR is ready for review.\r\n\r\nThe following are the differences between the [`SpeechT5HifiGan`](https://huggingface.co/docs/transformers/main/model_doc/speecht5#transformers.SpeechT5HifiGan) and the `UnivNetGan` model:\r\n\r\n- The `SpeechT5HifiGan` outer residual blocks* (that is, [`HifiGanResidualBlock`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/speecht5/modeling_speecht5.py#L3074)) upsamples the number of hidden channels between each outer residual block, but the `UnivNetGan` outer residual blocks* (`UnivNetLVCBlock`) keep the number of hidden channels constant.\r\n- Although the structures of the inner residual blocks (for UnivNet, the `UnivNetLVCResidualBlock` module) are similar: `LReLU` => dilated `Conv1d` => `LReLU` => `Conv1d` => skip connection, the UnivNet model uses a [location variable convolutional layer](https://arxiv.org/pdf/2102.10815.pdf) followed by a [gated activation unit](https://proceedings.neurips.cc/paper_files/paper/2016/file/b1301141feffabac455e1f90a7de2054-Paper.pdf) in place of the second `Conv1d` layer.\r\n- Accordingly, each outer residual block (`UnivNetLVCBlock`) in UnivNet has a kernel predictor residual network (`UnivNetKernelPredictor`) to predict the kernels and biases for the location variable convolutional layer in each inner residual block in the main resnet.\r\n- In addition to a conditioning log-mel `spectrogram`, UnivNet takes in a noise sequence as input. The `noise_waveform` is the input to the \"main\" resnet (e.g. the stack of `UnivNetLVCResidualBlock`s), while the conditioning `spectrogram` is the input to the kernel predictor in each `UnivNetLVCBlock`.\r\n\r\n(*) \"Outer residual block\" is a bit of a misnomer, since for both blocks in question (`HifiGanResidualBlock`, `UnivNetLVCBlock`) there's no skip connection between the input to the block and the main computation in the block.", "Also, I'm not sure why `utils/check_table.py` is failing. I ran `make fix-copies` to create a table entry for UnivNet in `docs/source/en/index.md`, and then added a checkmark for PyTorch support, but for some reason `check_table.py` doesn't seem to like that.", "> Also, I'm not sure why `utils/check_table.py` is failing.\r\n\r\n`utils/check_table.py` is no longer failing after I merged `main` into the PR branch. Running `make fix-copies` adds an entry for UnivNet, but I'm not sure why it doesn't add a checkmark in the \"PyTorch support\" column, perhaps the model information is mis-configured.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24799). 
All of your documentation changes will be reflected on that endpoint.", "All of the current tests (including the integration tests) should be passing now. The code to calculate reference slices from the [reference implementation](https://github.com/maum-ai/univnet) is in [this repository](https://github.com/dg845/univnet/tree/inference-test).", "Hi @ArthurZucker, thanks for reviewing! I think I have addressed all of the review comments. In particular, the model class names should be consistently in the form `UnivNetGan*` and the model implementation should be simplified using the configs (although I am currently keeping the `apply_weight_norm`/`remove_weight_norm` methods, see https://github.com/huggingface/transformers/pull/24799#discussion_r1326830801).\r\n\r\nFrom my side, I have the following questions:\r\n\r\n- https://github.com/huggingface/transformers/pull/24799#discussion_r1327008418 (regarding using `global_rng = random.Random()` with `floats_list`)\r\n- https://github.com/huggingface/transformers/pull/24799#discussion_r1327894425 (regarding isolating copied feature extractor tests)\r\n- https://github.com/huggingface/transformers/pull/24799#discussion_r1327892862 (regarding the proper org to move the checkpoints to)\r\n- https://github.com/huggingface/transformers/pull/24799#discussion_r1326841344 (regarding the usage of `_no_split_modules` for `accelerate` support)", "Gentle ping @ArthurZucker for a follow up review.", "Hey! Sorry I'll get this done by the end of the week 😉 ", "Thanks for iterating @dg845! Requesting a final review from @ArthurZucker when he gets the chance 🤗", "Hi @sanchit-gandhi, I have a question about the design of a potential `batch_decode` method for the feature extractor (as a follow up to https://github.com/huggingface/transformers/pull/24799#discussion_r1344041876). The simplest implementation of a `batch_decode` method would be something like\r\n\r\n```python\r\nclass UnivNetGanFeatureExtractor(SequenceFeatureExtractor):\r\n ...\r\n def batch_decode(self, audio):\r\n return audio[..., :-(self.pad_end_length * self.hop_length)]\r\n ...\r\n```\r\n\r\nwhich directly follows the current usage example. The downside here is that this `batch_decode` method should only be called if we padded the end of the spectrogram when calling the feature extractor, but it doesn't have a way to know whether this is the case. Having something like a `padding_mask` argument would give it the necessary information it needs, but UnivNet doesn't use any attention mechanisms so the feature extractor currently doesn't produce an attention mask and `UnivNetGan.forward` currently doesn't take a mask argument. I'm currently considering several designs and am not sure which one is best:\r\n\r\n1. The feature extractor does not output a `padding_mask` and `UnivNetGan.forward` does not take a `padding_mask` argument. `batch_decode` is implemented as above, where it only strips the end padding and should only be called if `pad_end=True` when the feature extractor was called.\r\n2. The feature extractor outputs a `padding_mask` and `UnivNetGan.forward` takes a `padding_mask` argument, but does nothing with it. The `batch_decode` method takes in both `audio` and `padding_mask` arguments and uses the mask to remove padding from the `audio` output.\r\n3. The feature extractor outputs a `padding_mask` and `UnivNetGan.forward` takes a `padding_mask` argument and uses it to remove padding from the output waveform right before returning it. 
There is no need for a `batch_decode` method in this case.\r\n\r\nI'm leaning toward (3) because it's the most user-friendly, since all padding is handled under the hood and there's no need to additionally call `feature_extractor.batch_decode`.\r\n", "Thanks for the comprehensive summary @dg845! In the proposed implementation for 3, would the model still return a PyTorch tensor of outputs? If we have different padding for each element in the batch, we'll need to strip each audio by different amounts, so it won't be possible to output a PyTorch tensor from the model? The sequence length dimensions will be different for each element in the batch, so we can't output a tensor with the same seq len dim across items in the batch. We try and keep a `tensor in -> tensor out` approach to the modelling code to make it easy to compile and handle putting/removing inputs/outputs from torch device respectively.\r\n\r\nSo I think we'll need to go with 2, and have the feature extractor to strip this padding a return a list of numpy arrays (i.e. a ragged list of audio arrays). WDYT? ", "Hi @dg845 and @sanchit-gandhi, IMO, we should lean towards a solution that looks like 3, but still allows to return Pytorch tensors. It would go like this:\r\n- if `return_dict=False`, it returns a tuple `(waveform, waveform_length)` (or similar naming) with `waveform` of shape `(batch_size, max_waveform_length)` and `waveform_length` of shape `(batch_size,)` which stores the waveform lengths that you would compute from the `padding_mask`\r\n- if `return_dict=True`, returns a `UnivNetOutput` which inherits from `ModelOuput`, with `waveform` and `waveform_length` as keys. It would look a bit like [VitsModelOuput](https://github.com/huggingface/transformers/blob/e8fdd7875def7be59e2c9b823705fbf003163ea0/src/transformers/models/vits/modeling_vits.py#L52) but with less keys!\r\n\r\nWDYT of going for something like that ?\r\n\r\nIn that way, it would look a lot like the regular outputs of NLP models in transformers, while keeping our requirements. ", "@sanchit-gandhi @ylacombe thanks for the input! I have added an implementation based on https://github.com/huggingface/transformers/pull/24799#issuecomment-1755905748 with the following details:\r\n\r\n- `UnivNetFeatureExtractor.__call__` can return a `padding_mask` argument (the `attention_mask` output from `SequenceFeatureExtractor.pad`). When `pad_end = True` we will pad with audio silence instead of spectrogram silence following https://github.com/huggingface/transformers/pull/24799#discussion_r1325158242.\r\n- `UnivNetModel.forward` accepts a `padding_mask` argument and will use it to calculate the length of each original unpadded waveform and return it as `waveform_lengths`. (The audio output `waveforms` will always be a batched, padded tensor of waveforms).\r\n- `UnivNetFeatureExtractor.batch_decode` takes in the outputs of `UnivNetModel.forward` and returns a ragged list of 1D waveform arrays with padding removed if `waveform_lengths` is available.", "Nice - I think this is a good design! Gently pinging @ArthurZucker for a final review here when you get the chance 🙌", "@ArthurZucker @sanchit-gandhi @ylacombe I think one thing left to resolve is where to put the UnivNet model checkpoint (currently at [`dg845/univnet-dev`](https://huggingface.co/dg845/univnet-dev)). 
I'm not sure which org to put the model checkpoint under since the [original paper](https://arxiv.org/pdf/2106.07889.pdf) is from Kakao, but the checkpoint is from an [unofficial implementation](https://github.com/maum-ai/univnet) by maum.ai (see https://github.com/huggingface/transformers/pull/24799#discussion_r1325159440, https://github.com/huggingface/transformers/pull/24799#discussion_r1327892862).", "We could reach out to maum.ai to ask them if they can create and org (if not already) and host the weights", "@ArthurZucker Sounds good! I believe that they have an org at https://huggingface.co/maum-ai.", "Hi @ArthurZucker @sanchit-gandhi @ylacombe, is there anything I can do to help out with transferring the checkpoint weights? (As a note, the checkpoint weights are currently stored at [`dg845/univnet-dev`](https://huggingface.co/dg845/univnet-dev) [with the model card written] and this is the checkpoint identifier used e.g. in the integration tests.)", "Hi @dg845, I've contacted some people from maum-ai to move the weights to their organization (without any response yet)!", "Hi @ArthurZucker @sanchit-gandhi @ylacombe would it be possible to merge this PR? @susnato and I have made a lot of progress on the tortoise-tts PR over at `diffusers`: https://github.com/huggingface/diffusers/pull/4106 and it would be helpful to have this PR merged to test the pipeline with the UnivNet vocoder.", "Hey both! Yeah no problem, let's use the current path for the checkpoints and merge for now as they are slow to respond! ", "Last nit is, would you mind rebasing on main to make sure you have the correct styling? 🙏🏻 ", "Hi @ArthurZucker, I have rebased on `main` and the CI is green :).", "Thanks a lot! " ]
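The decode flow settled on at the end of the thread (padded `waveforms` plus `waveform_lengths`, with the feature extractor returning a ragged list) can be illustrated with a small sketch. The names follow the thread's description, but the code itself is an illustration, not the PR's implementation:

```python
import numpy as np

def batch_decode_sketch(waveforms: np.ndarray, waveform_lengths=None):
    """Strip end padding from a (batch, max_len) array of waveforms and return a
    ragged list of 1D arrays, as described for the feature extractor above."""
    if waveform_lengths is None:
        return [w for w in waveforms]
    return [w[:length] for w, length in zip(waveforms, waveform_lengths)]

# Toy usage: two waveforms padded to the same length.
padded = np.zeros((2, 10), dtype=np.float32)
padded[0, :7] = 1.0
padded[1, :4] = 1.0
for audio in batch_decode_sketch(padded, waveform_lengths=[7, 4]):
    print(audio.shape)  # (7,) then (4,)
```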
1,689
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? This PR adds the UnivNet GAN vocoder model ([paper](https://arxiv.org/pdf/2106.07889.pdf), [code](https://github.com/mindslab-ai/univnet)) to `transformers`, which is the vocoder used in the Tortoise TTS text-to-speech model ([paper](https://arxiv.org/pdf/2305.07243.pdf), [code](https://github.com/neonbjb/tortoise-tts)) which is currently being integrated into `diffusers`. See [this issue](https://github.com/huggingface/diffusers/issues/3891) in `diffusers`. ![univnet_model_architecture](https://github.com/huggingface/transformers/assets/58458699/8a33190a-cd52-4e81-a6ed-2fc921b9f86f) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi @susnato
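The review thread above describes the location-variable-convolution blocks as pairing a convolution with a gated activation unit. A minimal sketch of that gating in its standard WaveNet-style formulation (not code from this PR):

```python
import torch

def gated_activation(hidden: torch.Tensor) -> torch.Tensor:
    """Split channels in half and gate one half with the other: tanh(a) * sigmoid(b)."""
    a, b = hidden.chunk(2, dim=1)
    return torch.tanh(a) * torch.sigmoid(b)

x = torch.randn(1, 64, 100)       # (batch, channels, time)
print(gated_activation(x).shape)  # torch.Size([1, 32, 100])
```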
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24799/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24799", "html_url": "https://github.com/huggingface/transformers/pull/24799", "diff_url": "https://github.com/huggingface/transformers/pull/24799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24799.patch", "merged_at": 1700670097000 }
https://api.github.com/repos/huggingface/transformers/issues/24797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24797/comments
https://api.github.com/repos/huggingface/transformers/issues/24797/events
https://github.com/huggingface/transformers/issues/24797
1,801,901,270
I_kwDOCUB6oc5rZtTW
24,797
Trn1 LoRA finetuning with HF reaches RuntimeError: Invalid device format: cpu
{ "login": "kct22aws", "id": 87498815, "node_id": "MDQ6VXNlcjg3NDk4ODE1", "avatar_url": "https://avatars.githubusercontent.com/u/87498815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kct22aws", "html_url": "https://github.com/kct22aws", "followers_url": "https://api.github.com/users/kct22aws/followers", "following_url": "https://api.github.com/users/kct22aws/following{/other_user}", "gists_url": "https://api.github.com/users/kct22aws/gists{/gist_id}", "starred_url": "https://api.github.com/users/kct22aws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kct22aws/subscriptions", "organizations_url": "https://api.github.com/users/kct22aws/orgs", "repos_url": "https://api.github.com/users/kct22aws/repos", "events_url": "https://api.github.com/users/kct22aws/events{/privacy}", "received_events_url": "https://api.github.com/users/kct22aws/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 ", "Issue reported to https://github.com/huggingface/optimum-neuron/issues/134", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
CONTRIBUTOR
null
### System Info optimum 1.9.0 optimum-neuron 0.0.7 transformers 4.30.2 DLAMI: https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2 AWS instance: trn1.2xl ` File "<string>", line 111, in __init__ File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 1341, in __post_init__ and (get_xla_device_type(self.device) != "GPU") File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 127, in get_xla_device_type return xm.xla_real_devices([device])[0].split(":")[0] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 268, in xla_real_devices return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 268, in <listcomp> return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 263, in _xla_real_device raise RuntimeError('Invalid device format: {}'.format(device_str)) RuntimeError: Invalid device format: cpu ` Python training script attached as txt format [train-vit.txt](https://github.com/huggingface/transformers/files/12034074/train-vit.txt) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Create an EC2 Trn1 instance with [Hugging Face DLAMI ](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) 2. `pip install peft` 3. install optimum-neuron form source: `pip install git+https://github.com/huggingface/optimum-neuron.git` 4. run python3 train-vit.py (script attached) ### Expected behavior To run training script to completion.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24797/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24796/comments
https://api.github.com/repos/huggingface/transformers/issues/24796/events
https://github.com/huggingface/transformers/pull/24796
1,801,878,538
PR_kwDOCUB6oc5VXAub
24,796
new model: IDEFICS via HuggingFaceM4
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Is it possible to be a private repo ? ;-) The m4 repo from huggingface organisation does not exist", "Thank you for your interest, @flozi00 - please give us some time. It says WIP because it's not ready for a public consumption. I edited the OP to clarify that.", "I'm not able to rebase as this recently merged PR https://github.com/huggingface/transformers/pull/25174 breaks `tests/models/idefics/test_image_processing_idefics.py::IdeficsImageProcessingTest::test_torchvision_numpy_transforms_equivalency`\r\n\r\ncc: @amyeroberts, if I need to adapt our image processing code please let me know - the function in question is called here:\r\n\r\nhttps://github.com/huggingface/transformers/pull/24796/files#diff-e1b90eb52340b91c2471bac7c6fd34c67c7cd530050c607852fd426f397b3b3fR162", "@sgugger, I addressed your feedback and this PR is ready for a detailed review. \r\n\r\nThank you!", "Thank you, @sgugger, @HugoLaurencon and @leot13 for your reviews - I have addressed everything you have raised.", "> ```\r\n> prompts = [\r\n> [\r\n> \"User:\",\r\n> \"https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg\",\r\n> \"Describe this image.\"\r\n> \r\n> \"Assistant: An image of two kittens in grass.\",\r\n> \r\n> \"User:\",\r\n> \"https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg\",\r\n> \"Describe this image\".\r\n> \r\n> \"Assistant:\",\r\n> ],\r\n> [\r\n> \"User:\",\r\n> \"https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg\",\r\n> \"Describe this image.\"\r\n> \r\n> \"Assistant: An image of a dog wearing funny glasses.\",\r\n> \r\n> \"User:\",\r\n> \"https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg\",\r\n> \"Describe this image\".\r\n> \r\n> \"Assistant:\",\r\n> ],\r\n> ]\r\n> ```\r\n\r\nFor posterity, that part of the OP (i can't edit unfortunately) is missing some \",\" (commas) at some end of string (for instance `\"Describe this image\".` -> `\"Describe this image\",`). this is important for the tokenization in particular when we call processor with `add_end_of_utterance_token=True`.", "I can edit if need be. You should also be able to push commits to this branch, since it's in the main fork and you have write permissions @VictorSanh :-)", "Thanks a lot, @gante, for the suggestions - merged" ]
1,689
1,692
1,692
CONTRIBUTOR
null
**important: The following notes are for my team mates and they won't work for anybody else as the data isn't ready for the public yet. should be made public next week ** Meanwhile to try it out: ``` $ git clone https://github.com/huggingface/transformers -b add-model-idefics $ cd transformers $ cat generate.py import torch from transformers import IdeficsForVisionText2Text, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b" #checkpoint = "HuggingFaceM4/tiny-random-idefics" model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) processor = AutoProcessor.from_pretrained(checkpoint) prompts = [ [ "User:", "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg", "Describe this image.", "Assistant: An image of two kittens in grass.", "User:", "https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg", "Describe this image.", "Assistant:", ], [ "User:", "https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg", "Describe this image.", "Assistant: An image of a dog wearing funny glasses.", "User:", "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg", "Describe this image.", "Assistant:", ], ] # batched mode inputs = processor(prompts, return_tensors="pt").to(device) # single sample mode #inputs = processor(prompts[0], return_tensors="pt").to(device) generated_ids = model.generate(**inputs, max_length=100) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i,t in enumerate(generated_text): print(f"{i}:\n{t}\n") ``` and then run: ``` CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python generate.py ``` # Demos A PR with examples/demos, including finetuning, is here: https://github.com/huggingface/notebooks/pull/418 # TODOs before merging - [ ] make the models public - which coincides with the announcement/release
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24796/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 1, "hooray": 1, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24796", "html_url": "https://github.com/huggingface/transformers/pull/24796", "diff_url": "https://github.com/huggingface/transformers/pull/24796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24796.patch", "merged_at": 1692393148000 }
https://api.github.com/repos/huggingface/transformers/issues/24795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24795/comments
https://api.github.com/repos/huggingface/transformers/issues/24795/events
https://github.com/huggingface/transformers/pull/24795
1,801,875,822
PR_kwDOCUB6oc5VXAGL
24,795
Pop
{ "login": "jamesthesnake", "id": 8227820, "node_id": "MDQ6VXNlcjgyMjc4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesthesnake", "html_url": "https://github.com/jamesthesnake", "followers_url": "https://api.github.com/users/jamesthesnake/followers", "following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}", "gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions", "organizations_url": "https://api.github.com/users/jamesthesnake/orgs", "repos_url": "https://api.github.com/users/jamesthesnake/repos", "events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesthesnake/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24795", "html_url": "https://github.com/huggingface/transformers/pull/24795", "diff_url": "https://github.com/huggingface/transformers/pull/24795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24795.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24794/comments
https://api.github.com/repos/huggingface/transformers/issues/24794/events
https://github.com/huggingface/transformers/issues/24794
1,801,868,732
I_kwDOCUB6oc5rZlW8
24,794
Bert Generation misses **model_kwargs in prepare_inputs_for_generation()
{ "login": "luyuzhe111", "id": 55512809, "node_id": "MDQ6VXNlcjU1NTEyODA5", "avatar_url": "https://avatars.githubusercontent.com/u/55512809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luyuzhe111", "html_url": "https://github.com/luyuzhe111", "followers_url": "https://api.github.com/users/luyuzhe111/followers", "following_url": "https://api.github.com/users/luyuzhe111/following{/other_user}", "gists_url": "https://api.github.com/users/luyuzhe111/gists{/gist_id}", "starred_url": "https://api.github.com/users/luyuzhe111/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luyuzhe111/subscriptions", "organizations_url": "https://api.github.com/users/luyuzhe111/orgs", "repos_url": "https://api.github.com/users/luyuzhe111/repos", "events_url": "https://api.github.com/users/luyuzhe111/events{/privacy}", "received_events_url": "https://api.github.com/users/luyuzhe111/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @luyuzhe111 \r\n\r\nThank you for rasing the question!\r\n\r\nThe model of type `BertGeneration`, when we want to use `generation` along `encoder`'s output, it would be a (decoder) component in an encoder-decoder (here `class EncoderDecoderModel`).\r\n\r\nThe `generation` takes care to create the `encoder_outputs`\r\nhttps://github.com/huggingface/transformers/blob/906afa1d5c6054a641cb6abb009cdec732a5a094/src/transformers/generation/utils.py#L1342-L1347\r\n\r\nand `EncoderDecoderModel.prepare_inputs_for_generation` pass it to the underlying decoder model.\r\nhttps://github.com/huggingface/transformers/blob/906afa1d5c6054a641cb6abb009cdec732a5a094/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L668-L681\r\n\r\nSo everything work correctly 🤗 .\r\n\r\nHowever, if you want to use that model without our `class EncoderDecoderModel` but you still want to pass `BertGenerationDecoder`, then you will have to modify the code on your own.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert_generation/modeling_bert_generation.py#L989 This function should clearly return ```**model_kwargs``` but it is not. This results in passed args such as ```encoder_hidden_states``` not being used for generation. ### Who can help? @gante @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I used this model for image-captioning task, where the visual features should be used as ```encoder_hidden_states``` for the model.generate() method. The current implementation will simply neglect this input and generates the same texts for every image. Hope this information is sufficient to see why the current implementation is problematic. ### Expected behavior current implementation: def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs): input_shape = input_ids.shape # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # cut decoder_input_ids if past is used if past_key_values is not None: input_ids = input_ids[:, -1:] return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values} Correct implementation (simply change the last line): def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs): input_shape = input_ids.shape # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # cut decoder_input_ids if past is used if past_key_values is not None: input_ids = input_ids[:, -1:] return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values, **model_kwargs}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24794/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24793/comments
https://api.github.com/repos/huggingface/transformers/issues/24793/events
https://github.com/huggingface/transformers/pull/24793
1,801,841,629
PR_kwDOCUB6oc5VW4hb
24,793
[🔗 Docs] Fixed Incorrect Migration Link
{ "login": "kadirnar", "id": 36204372, "node_id": "MDQ6VXNlcjM2MjA0Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kadirnar", "html_url": "https://github.com/kadirnar", "followers_url": "https://api.github.com/users/kadirnar/followers", "following_url": "https://api.github.com/users/kadirnar/following{/other_user}", "gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}", "starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions", "organizations_url": "https://api.github.com/users/kadirnar/orgs", "repos_url": "https://api.github.com/users/kadirnar/repos", "events_url": "https://api.github.com/users/kadirnar/events{/privacy}", "received_events_url": "https://api.github.com/users/kadirnar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @amyeroberts , I am new to transformes library. I wanted to fix this error that I saw in the documentation. Because the link is broken.\r\n\r\n\r\nThe documentation page MIGRATION doesn't exist in v4.30.0, but exists on the main version. Click here to redirect to the main version of the documentation.\r\n\r\nI looked in transformers documentation(https://huggingface.co/docs) and couldn't find it. Can you help?", "Thank you for the help💖" ]
1,689
1,689
1,689
CONTRIBUTOR
null
I couldn't find it in transformers files. Can you check? Is it true?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24793/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24793", "html_url": "https://github.com/huggingface/transformers/pull/24793", "diff_url": "https://github.com/huggingface/transformers/pull/24793.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24793.patch", "merged_at": 1689371270000 }
https://api.github.com/repos/huggingface/transformers/issues/24792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24792/comments
https://api.github.com/repos/huggingface/transformers/issues/24792/events
https://github.com/huggingface/transformers/issues/24792
1,801,834,472
I_kwDOCUB6oc5rZc_o
24,792
AttributeError: 'Parameter' object has no attribute 'ds_numel'
{ "login": "vecorro", "id": 49245780, "node_id": "MDQ6VXNlcjQ5MjQ1Nzgw", "avatar_url": "https://avatars.githubusercontent.com/u/49245780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vecorro", "html_url": "https://github.com/vecorro", "followers_url": "https://api.github.com/users/vecorro/followers", "following_url": "https://api.github.com/users/vecorro/following{/other_user}", "gists_url": "https://api.github.com/users/vecorro/gists{/gist_id}", "starred_url": "https://api.github.com/users/vecorro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vecorro/subscriptions", "organizations_url": "https://api.github.com/users/vecorro/orgs", "repos_url": "https://api.github.com/users/vecorro/repos", "events_url": "https://api.github.com/users/vecorro/events{/privacy}", "received_events_url": "https://api.github.com/users/vecorro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
NONE
null
### System Info Python 3.10 CUDA 11.8 torch 2.0.1 transfromers 4.30.2 bitsandbytes 0.39.1 datasets 2.13.0 einops 0.6.1 trl 0.4.4 accelerate 0.20.3 deepspeed 0.9.5 ### Who can help? @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm trying to reproduce the Falcon LLM fine-tuning by using a modified version of the [HF Collab script](https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing). The Jupyter notebook runs well when DeepSpeed is not in the mix, but when I introduce the DeepSpeed ZeRO-3 in `TrainingArguments` (which gets fed into `SFTTrainer` the `trainer.train()` call fails with error `AttributeError: 'Parameter' object has no attribute 'ds_numel'.` **Here the DeepSpeed config `dict` I'm using:** ``` ds_config = { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": "true" }, "offload_param": { "device": "none", "pin_memory": "true" }, "overlap_comm": "true", "contiguous_gradients": "true", "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "true" }, "gradient_accumulation_steps": GRADIENT_ACCUMULATION_STEPS, "gradient_clipping": "auto", "steps_per_print": 10, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": "false" ``` **Stack trace: ``` File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer.py:1793, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1791 logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") 1792 logger.info(f" Total optimization steps = {max_steps:,}") -> 1793 logger.info(f" Number of trainable parameters = {get_model_param_count(model, trainable_only=True):,}") 1795 self.state.epoch = 0 1796 start_time = time.time() File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1053, in get_model_param_count(model, trainable_only) 1050 def numel(p): 1051 return p.numel() -> 1053 return sum(numel(p) for p in model.parameters() if not trainable_only or p.requires_grad) File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1053, in <genexpr>(.0) 1050 def numel(p): 1051 return p.numel() -> 1053 return sum(numel(p) for p in model.parameters() if not trainable_only or p.requires_grad) File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1046, in get_model_param_count.<locals>.numel(p) 1045 def numel(p): -> 1046 return p.ds_numel AttributeError: 'Parameter' object has no attribute 'ds_numel' ``` **Here the core section of the code** ``` # Dataset loader DATASET_PATH = "timdettmers/openassistant-guanaco" # Params for AutoModelForCausalLM DEVICE_MAP = "auto" # 
Instructs Accelerate to use all GPUs available in the node. LOAD_IN_8BIT = True # 8-bit precision requires ~ 1.2-1.4GB memory per 1B parameters MODEL_NAME = "tiiuae/falcon-7b" # Could use "tiiuae/falcon-40b" or "tiiuae/falcon-7b" TRUST_REMOTE_CODE = True # Required when a model is not yet part of the Transformers library # LoRA configuration (see https://huggingface.co/docs/peft/conceptual_guides/lora) # LoRA allows efficient fine-tuning of LLMs by training low rank (small) matrices LORA_ALPHA = 16 # LoRA scaling factor. LORA_DROPOUT = 0.1 # Probability of a neuron link to get disabled during a step LORA_R = 32 # Rank of update matrices. Lower rank results in smaller update matrices with fewer trainable parameters. #List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. LORA_TARGET_MODULES = ["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"] # Trainer configuration BF16 = True # Whether to use bf16 precision. Requires Ampere or higher NVIDIA architecture. EVAL_STEPS = 8 # Number of update steps between two evaluations if evaluation_strategy="steps" EVAL_STRATEGY = 'steps' # Evaluation is done (and logged) every eval_steps. FP16 = not BF16 # Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training. GRADIENT_ACCUMULATION_STEPS = 4 # Accumulates gradients from 'n' batches before stepping the optimizer GROUP_BY_LENGTH = True # group samples of similar length to minimize padding and be more efficient. LOAD_BEST = True # Load the checkpoint with the lowest loss at the end. LOGGING_STEPS = 4 # Number of update steps between two logs if logging_strategy="steps". LOGGING_STRATEGY = 'steps' # Logging is done every logging_steps LR = 2e-4 # The initial learning rate. LR_SCHEDULER_TYPE = 'constant' # Other options are 'cosine' or 'linear' MAX_GRAD_NORM = 0.3 # Maximum gradient norm (for gradient clipping). MAX_STEPS = 184 # Start with a small test (64) then increase the number to multiple epochs OPTIMIZER = "paged_adamw_32bit" # Optimizer function OUTPUT_DIR = "./results" # Where checkpoints will be saved PER_DEV_TRAIN_BATCH_SIZE = 4 # Use a low number if getting out of memory errors REPORT_ENDPOINT = "wandb" # Comment out if don't want to use wandb. Ensure you had run 'wandb login' previously. SAVE_STEPS = 8 # Number of updates steps before two checkpoint saves if save_strategy="steps" SAVE_STRATEGY = 'steps' # Save is done every save_steps. SAVE_TOTAL_LIMIT = 2 # Only save the last and the best checkpoints USE_CACHE = False # Can't use cache with gradient check pointing WARMUP_RATIO = 0.03 # Ratio of total training steps used for a linear warmup from 0 to learning_rate. 
WEIGHT_DECAY = 0.001 # AdamW regularization parameter # SFTTrainer config (see https://huggingface.co/docs/trl/main/en/sft_trainer) MAX_SEQ_LENGTH = 512 # Max length is token sequence in an example model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, load_in_8bit = LOAD_IN_8BIT, trust_remote_code = TRUST_REMOTE_CODE, device_map = DEVICE_MAP, ) model.config.use_cache = USE_CACHE tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code = TRUST_REMOTE_CODE) tokenizer.pad_token = tokenizer.eos_token # Setup LoRA peft_config = LoraConfig( lora_alpha = LORA_ALPHA, lora_dropout = LORA_DROPOUT, r = LORA_R, bias = "none", task_type = "CAUSAL_LM", target_modules = LORA_TARGET_MODULES ) # Setup training arguments training_arguments = TrainingArguments( output_dir = OUTPUT_DIR, per_device_train_batch_size = PER_DEV_TRAIN_BATCH_SIZE, gradient_accumulation_steps = GRADIENT_ACCUMULATION_STEPS, #optim = OPTIMIZER, save_steps = SAVE_STEPS, save_strategy = SAVE_STRATEGY, logging_steps = LOGGING_STEPS, logging_strategy = LOGGING_STRATEGY, learning_rate = LR, #lr_scheduler_type = LR_SCHEDULER_TYPE, fp16 = FP16, bf16 = BF16, max_grad_norm = MAX_GRAD_NORM, max_steps = MAX_STEPS, warmup_ratio = WARMUP_RATIO, group_by_length = GROUP_BY_LENGTH, report_to = REPORT_ENDPOINT, evaluation_strategy = EVAL_STRATEGY, eval_steps = EVAL_STEPS, load_best_model_at_end = LOAD_BEST, greater_is_better = False, save_total_limit = SAVE_TOTAL_LIMIT, deepspeed=ds_config, disable_tqdm=True, #log_level= "error", ) trainer = SFTTrainer( model = model, train_dataset = train_dataset, eval_dataset = eval_dataset, peft_config = peft_config, dataset_text_field = "text", max_seq_length = MAX_SEQ_LENGTH, tokenizer = tokenizer, args = training_arguments, ) for name, module in trainer.model.named_modules(): if "norm" in name: module = module.to(torch.float32) # Fine-tune the model trainer.train() ``` Thanks! ### Expected behavior I expected the training process to run with DeepSpeed in the mix as it was doing when it DS wasn't called. Thanks in advance for your help!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24792/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24791/comments
https://api.github.com/repos/huggingface/transformers/issues/24791/events
https://github.com/huggingface/transformers/pull/24791
1,801,770,315
PR_kwDOCUB6oc5VWoyL
24,791
Upgrade jax/jaxlib/flax pin versions
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24791). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,693
1,689
COLLABORATOR
null
# What does this PR do? So we can have latest TF versions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24791", "html_url": "https://github.com/huggingface/transformers/pull/24791", "diff_url": "https://github.com/huggingface/transformers/pull/24791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24791.patch", "merged_at": 1689249450000 }
https://api.github.com/repos/huggingface/transformers/issues/24790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24790/comments
https://api.github.com/repos/huggingface/transformers/issues/24790/events
https://github.com/huggingface/transformers/issues/24790
1,801,767,832
I_kwDOCUB6oc5rZMuY
24,790
run_mlm is not working with TPU
{ "login": "Shiro-LK", "id": 26505641, "node_id": "MDQ6VXNlcjI2NTA1NjQx", "avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shiro-LK", "html_url": "https://github.com/Shiro-LK", "followers_url": "https://api.github.com/users/Shiro-LK/followers", "following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}", "gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions", "organizations_url": "https://api.github.com/users/Shiro-LK/orgs", "repos_url": "https://api.github.com/users/Shiro-LK/repos", "events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}", "received_events_url": "https://api.github.com/users/Shiro-LK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like PyTorch XLA cannot see your TPUs, are you sure you properly set up your instance?", "I suppose it is, I am using colab TPU and I have installed thiis package : \r\n`pip install cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info I am using colab : python:3.10 for torch and xla : `pip install cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl` transformers == 4.30.2 after using this command the training get this error : `2023-07-12 21:07:17.770888: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.577189: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.645535: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.848706: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.028873: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.122547: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.322647: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.612495: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.867921: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Exception in device=TPU:0: Invalid device format: cpu Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/content/run_mlm.py", line 654, in _mp_fn main() File "/content/run_mlm.py", line 239, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/usr/local/lib/python3.10/dist-packages/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 111, in __init__ File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 1341, in __post_init__ and (get_xla_device_type(self.device) != "GPU") File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 127, in get_xla_device_type return xm.xla_real_devices([device])[0].split(":")[0] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 271, in xla_real_devices return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 271, in <listcomp> return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 266, in _xla_real_device raise RuntimeError('Invalid device format: {}'.format(device_str)) RuntimeError: Invalid device format: cpu ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /content/xla_spawn.py:83 in <module> │ │ │ │ 80 │ │ 81 │ │ 82 if __name__ == "__main__": │ │ ❱ 83 │ main() │ │ 84 │ │ │ │ /content/xla_spawn.py:79 in main │ │ │ │ 76 │ # Patch sys.argv │ │ 77 │ sys.argv = [args.training_script] + args.training_script_args + ["- │ │ 78 │ │ │ ❱ 79 │ xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) │ │ 80 │ │ 81 │ │ 82 if __name__ == "__main__": │ │ │ │ 
/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiproce │ │ ssing.py:397 in spawn │ │ │ │ 394 if pf_cfg.num_devices == 1: │ │ 395 │ _start_fn(0, pf_cfg, fn, args) │ │ 396 else: │ │ ❱ 397 │ result = torch.multiprocessing.start_processes( │ │ 398 │ │ _mp_start_fn, │ │ 399 │ │ args=(pf_cfg, fn, args), │ │ 400 │ │ nprocs=pf_cfg.num_devices, │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py:197 │ │ in start_processes │ │ │ │ 194 │ │ return context │ │ 195 │ │ │ 196 │ # Loop on join until it returns True or raises an exception. │ │ ❱ 197 │ while not context.join(): │ │ 198 │ │ pass │ │ 199 │ │ 200 │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py:149 │ │ in join │ │ │ │ 146 │ │ │ │ │ signal_name=name │ │ 147 │ │ │ │ ) │ │ 148 │ │ │ else: │ │ ❱ 149 │ │ │ │ raise ProcessExitedException( │ │ 150 │ │ │ │ │ "process %d terminated with exit code %d" % │ │ 151 │ │ │ │ │ (error_index, exitcode), │ │ 152 │ │ │ │ │ error_index=error_index, │ ╰──────────────────────────────────────────────────────────────────────────────╯ ProcessExitedException: process 0 terminated with exit code 17` ### Who can help? @sgugger , @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. choose TPU platform on colab 2. ``` !python xla_spawn.py --num_cores 8 run_mlm.py \ --model_name_or_path roberta-base \ --tpu_num_cores 8 \ --train_file tr.txt \ --validation_file dev.txt \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --max_seq_len 200 \ --line_by_line True \ --pad_to_max_length True \ --output_dir mlm_tpu-v2 ``` ### Expected behavior The training should run.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24789/comments
https://api.github.com/repos/huggingface/transformers/issues/24789/events
https://github.com/huggingface/transformers/pull/24789
1,801,749,260
PR_kwDOCUB6oc5VWkJu
24,789
Update setup.py to be compatible with pipenv
{ "login": "georgiemathews", "id": 13368542, "node_id": "MDQ6VXNlcjEzMzY4NTQy", "avatar_url": "https://avatars.githubusercontent.com/u/13368542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/georgiemathews", "html_url": "https://github.com/georgiemathews", "followers_url": "https://api.github.com/users/georgiemathews/followers", "following_url": "https://api.github.com/users/georgiemathews/following{/other_user}", "gists_url": "https://api.github.com/users/georgiemathews/gists{/gist_id}", "starred_url": "https://api.github.com/users/georgiemathews/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/georgiemathews/subscriptions", "organizations_url": "https://api.github.com/users/georgiemathews/orgs", "repos_url": "https://api.github.com/users/georgiemathews/repos", "events_url": "https://api.github.com/users/georgiemathews/events{/privacy}", "received_events_url": "https://api.github.com/users/georgiemathews/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> I'm confused. `install_requires` is already a list as seen on line 415.\r\n\r\nThanks for taking a look @sgugger!\r\n\r\nIt seems to be a requirementslib bug that occurs when one of the items in the list of dependencies is declared with a string interpolation..\r\n\r\nI'm not sure why using the list constructor fixes the bug. This is the behavior when installing transformers through pipenv without the bugfix:\r\n\r\n```\r\n$ pipenv install git+https://github.com/huggingface/transformers#egg=transformers\r\nInstalling git+https://github.com/huggingface/[email protected]#egg=transformers...\r\nResolving\r\ngit+https://github.com/huggingface/[email protected]#egg=transformers...\r\n✘ Locking Failed!\r\nTraceback (most recent call last):\r\n ...\r\n File \"/home/gmathews/.local/lib/python3.8/site-packages/pipenv/vendor/requirementslib/models/setup_info.py\", line 659, in _find_install_requires\r\n return [el.s for el in variable.elts]\r\n File \"/home/gmathews/.local/lib/python3.8/site-packages/pipenv/vendor/requirementslib/models/setup_info.py\", line 659, in <listcomp>\r\n return [el.s for el in variable.elts]\r\nAttributeError: 'Subscript' object has no attribute 's'\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24789). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This enables installing transformers from source using pipenv. Currently installing transformers through pipenv via a git source is blocked by this issue: https://github.com/pypa/pipenv/issues/5167#issuecomment-1349316531 Installation will fail with: ``` AttributeError: 'Subscript' object has no attribute 's' ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24789", "html_url": "https://github.com/huggingface/transformers/pull/24789", "diff_url": "https://github.com/huggingface/transformers/pull/24789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24789.patch", "merged_at": 1689267403000 }
https://api.github.com/repos/huggingface/transformers/issues/24788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24788/comments
https://api.github.com/repos/huggingface/transformers/issues/24788/events
https://github.com/huggingface/transformers/pull/24788
1,801,675,334
PR_kwDOCUB6oc5VWT0f
24,788
set correct model input names for gptsw3tokenizer
{ "login": "DarioSucic", "id": 7669299, "node_id": "MDQ6VXNlcjc2NjkyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7669299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarioSucic", "html_url": "https://github.com/DarioSucic", "followers_url": "https://api.github.com/users/DarioSucic/followers", "following_url": "https://api.github.com/users/DarioSucic/following{/other_user}", "gists_url": "https://api.github.com/users/DarioSucic/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarioSucic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarioSucic/subscriptions", "organizations_url": "https://api.github.com/users/DarioSucic/orgs", "repos_url": "https://api.github.com/users/DarioSucic/repos", "events_url": "https://api.github.com/users/DarioSucic/events{/privacy}", "received_events_url": "https://api.github.com/users/DarioSucic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "LGTM\r\nWe're not using token type ids during training, so there is no reason for the tokenizer to output them, and doing so just leads to unintended behaviour." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Makes it so the tokenizer doesn't output `token_type_ids`, as these break generation. Seems like a harmless change, but I'm not sure what these are used for so let me know if this is the wrong approach! ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @amyeroberts @ekgren @Apsod
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24788/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24788", "html_url": "https://github.com/huggingface/transformers/pull/24788", "diff_url": "https://github.com/huggingface/transformers/pull/24788.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24788.patch", "merged_at": 1689354825000 }
https://api.github.com/repos/huggingface/transformers/issues/24787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24787/comments
https://api.github.com/repos/huggingface/transformers/issues/24787/events
https://github.com/huggingface/transformers/pull/24787
1,801,639,368
PR_kwDOCUB6oc5VWL83
24,787
Deprecate models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "FMI: (M=my)\r\n\r\nI probably need to do something with daily CI to take this PR's change into account!" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? This PR creates the precedent of deprecating models in the library. By deprecating we indicate that we will stop maintaining such models, but there is no intention of actually removing those models and breaking support for them (they might one day move into a separate repo/on the Hub but we would still add the necessary imports to make sure backward compatibility stays). The main point is that we stop testing those models (to ease a bit the burden on our CI). Deprecated models are moved in models/deprecated so direct import of objects from their modeling files will break (though that's easily fixed by adding the `.deprecated` in the path). They are removed from the `tests` folder and a mention is added in the doc page of the model. The heuristic to pick the deprecated models in this PR is: models older than a year that got less than a cumulated 1,000 downloads (over all checkpoints) in the last 30 days (counting deduplicated downloads).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24787/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24787", "html_url": "https://github.com/huggingface/transformers/pull/24787", "diff_url": "https://github.com/huggingface/transformers/pull/24787.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24787.patch", "merged_at": 1689263214000 }
https://api.github.com/repos/huggingface/transformers/issues/24786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24786/comments
https://api.github.com/repos/huggingface/transformers/issues/24786/events
https://github.com/huggingface/transformers/pull/24786
1,801,629,259
PR_kwDOCUB6oc5VWJu9
24,786
Added support for dtype in .to() method.
{ "login": "amannagarkar", "id": 36772718, "node_id": "MDQ6VXNlcjM2NzcyNzE4", "avatar_url": "https://avatars.githubusercontent.com/u/36772718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amannagarkar", "html_url": "https://github.com/amannagarkar", "followers_url": "https://api.github.com/users/amannagarkar/followers", "following_url": "https://api.github.com/users/amannagarkar/following{/other_user}", "gists_url": "https://api.github.com/users/amannagarkar/gists{/gist_id}", "starred_url": "https://api.github.com/users/amannagarkar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amannagarkar/subscriptions", "organizations_url": "https://api.github.com/users/amannagarkar/orgs", "repos_url": "https://api.github.com/users/amannagarkar/repos", "events_url": "https://api.github.com/users/amannagarkar/events{/privacy}", "received_events_url": "https://api.github.com/users/amannagarkar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts hey, yes will do!", "Thanks for your PR but what is the purpose of this? The tokenization results are all integers and changing their dtype will make then unusable by the model.", "@sgugger I was using `BatchEncoding` in a Processor class (`InstructBlipProcessor`) and noticed that it didn't support `dtype` as `BatchFeature` does. However it's a bit unclear to me whether I need to use `BatchFeature` or `BatchEncoding` for multi-modal processors", "Probably batch feature if you have float values. As the name indicates, `BatchEncoding` is for encoded values (so ints).", "@amyeroberts passed all cases. Kindly check!", "> Thanks for your PR but what is the purpose of this? The tokenization results are all integers and changing their dtype will make then unusable by the model.\r\n\r\nYou did not answer that question though.", "@amannagarkar thanks for your PR, but given the comment by @sgugger it probably makes sense to close this PR, and instead update multimodal processors in the library that return a`BatchEncoding` instead of a `BatchFeature`.\r\n\r\nThis is because `BatchEncoding` is only used by text-only tokenizers, for which the `dtype` isn't relevant, since they always return LongTensors.", "@sgugger sorry for not responding, I thought Niels answered your question. I will be more careful in the future!\r\n@NielsRogge okay, noted. I will take a look into it!" ]
1,689
1,690
1,690
NONE
null
Issue #24068. The updated method now accepts both "device" and "dtype" as keyword arguments. When "dtype" is provided, the tensors within the object will be cast to the specified data type. # What does this PR do? This PR adds support for the `dtype` parameter in the `.to()` method of the `BatchEncoding` class. Previously, only the `device` parameter was supported. With this enhancement, users can now specify the desired data type for tensor casting and allocation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @NielsRogge
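A hedged illustration of the behaviour this PR proposes (the change was ultimately not merged): today `BatchEncoding.to()` only accepts a device, so the `dtype` call below is shown commented out as the hypothetical new usage.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
enc = tokenizer("Hello world!", return_tensors="pt")

# Supported today: move the whole batch to a device.
enc = enc.to("cpu")
print(enc["input_ids"].dtype)  # torch.int64 -- tokenizers return integer tensors

# Proposed (hypothetical) usage with this PR's change applied:
# enc = enc.to("cpu", dtype=torch.int32)
```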
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24786/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24786", "html_url": "https://github.com/huggingface/transformers/pull/24786", "diff_url": "https://github.com/huggingface/transformers/pull/24786.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24786.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24785/comments
https://api.github.com/repos/huggingface/transformers/issues/24785/events
https://github.com/huggingface/transformers/pull/24785
1,801,556,106
PR_kwDOCUB6oc5VV5z_
24,785
Copy code when using local trust remote code
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? When using the `trust_remote_code=True` feature with local models (for instance using a clone of a repo with custom code) the custom code files are not copied over if the user does `save_pretrained` (as a result of #22814) but they should in this specific case. This PR fixes that. Fixes #24737
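A minimal sketch of the scenario the fix targets; the local paths are placeholders and the clone is assumed to contain custom `*.py` modeling code.

```python
from transformers import AutoModelForCausalLM

# "./my-local-clone" stands in for a local clone of a Hub repo that ships custom code.
model = AutoModelForCausalLM.from_pretrained("./my-local-clone", trust_remote_code=True)

# With this fix, the custom code files are copied next to the weights on save,
# so the re-saved folder can be loaded again with trust_remote_code=True.
model.save_pretrained("./my-local-clone-resaved")
```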
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24785/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24785", "html_url": "https://github.com/huggingface/transformers/pull/24785", "diff_url": "https://github.com/huggingface/transformers/pull/24785.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24785.patch", "merged_at": 1689281841000 }
https://api.github.com/repos/huggingface/transformers/issues/24784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24784/comments
https://api.github.com/repos/huggingface/transformers/issues/24784/events
https://github.com/huggingface/transformers/pull/24784
1,801,526,948
PR_kwDOCUB6oc5VVzdu
24,784
Link with accelerate
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Replicates the logic in https://github.com/huggingface/accelerate/pull/1718 here on the trainer, to reduce the sync overhead as `get_scale` is a full-sync operation, meaning both GPU and CPU need to fully stop before continuing. This PR reduces it by half when not using the Accelerator. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24784", "html_url": "https://github.com/huggingface/transformers/pull/24784", "diff_url": "https://github.com/huggingface/transformers/pull/24784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24784.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24783/comments
https://api.github.com/repos/huggingface/transformers/issues/24783/events
https://github.com/huggingface/transformers/issues/24783
1,801,437,588
I_kwDOCUB6oc5rX8GU
24,783
Generate: have an example on each `LogitsProcessor` class docstring
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 3551105283, "node_id": "LA_kwDOCUB6oc7TqZED", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Documentation%20Issue", "name": "Good First Documentation Issue", "color": "AB0BA8", "default": false, "description": "" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "Hi @gante I'm happy to give one of these a go as it seems a nice learning experience... `TemperatureLogitsWarper` feels as good as anyone else, can you assign it to me if it's not taken yet?", "@nablabits thank you for your interest. It's all yours! \r\n\r\n(The assignment is informal -- the first one to mention a certain class gets automatically assigned to it 🤗 )", "Hello @gante 👋\r\nThanks for opening up this contrib!\r\nWould like to give it a try to [NoBadWordsLogitsProcessor ](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L725). Alredy tried `bad_words_ids` argument and looking forward to digging and learning more into the impact of `eos_token_id` for this class. \r\n\r\nLKM if that works for you !", "Hey @gante 👋 \r\n\r\nI would like to try [RepetitionPenaltyLogitsProcessor ](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/src/transformers/generation/logits_process.py#L194)to start with.\r\n\r\nI hope that works!", "Hey Shauray (@shauray8), Nice one! I can see you have opened a PR for the RepetitionPenaltyLogitsProcessor. I was working on it. Just a note for next time, please go through the \"How to Participate\" and confirm no one is working on it.\r\n\r\n@gante I'll look at other Processor Classes soon and take up something else. 👍 ", "Hey @Rishab26, I didn't go through the comments and I appreciate your understanding.\r\n", "Hey @Rishab26 I would like to highlight your level of empathy towards this OSS Governance issue.\r\nIMO, example of level 4 in [trust-level system](https://blog.discourse.org/2018/06/understanding-discourse-trust-levels/) HF relies upon . \r\n\r\n🤗 Thanks for setting a positive example for the Open Source Community. 🤗\r\n\r\n", "Working on `TopKLogitsWarper`", "Hey Folks!\r\nStill working on this, having fun though.\r\nIm opening the WIP in this [repo](https://github.com/SoyGema/contrib_schema/) in case someone wants to have a look before PR.\r\nHave already some things , but giving it a careful thought to the example and digging into some things.\r\nFound this gem #22168 !\r\n", "Hey @gante I would like to work on `SuppressTokensLogitsProcessor` 🙂", "Hey @gante I am working on `NoRepeatNGramLogitsProcessor` 🙂", "hey @gante I am working on **TypicalLogitsWarper**", "hey @gante I would like to work on **EtaLogitsWarper** 😊", "Hi! May I claim `TopPLogitsWarper`?", "Hey @gante, yes, me again :upside_down_face: , are you happy for me to pick `MinNewTokensLengthLogitsProcessor`?", "Hi @gante I want to claim `ForcedEOSTokenLogitsProcessor` 😄 ", "Hi @gante can I try to work on `LogitNormalization`?", "Hi @gante , can I try working on `ForceTokensLogitsProcessor`?", "hi @gante i am working on `EncoderRepetitionPenaltyLogitsProcessor`, is there a way I can create a new issue named `EncoderRepetitionPenaltyLogitsProcessor` l then link that here and it shows active in the list?\r\nlike this, you can put a checklist there\r\n\r\n![image](https://github.com/huggingface/transformers/assets/64583161/6efd3f34-c7ec-4360-95c8-671f808c852f)\r\n", "Hello @gante, I would like to work on `InfNanRemoveLogitsProcessor`. I see no one has referred to it in the comments section. But just wanted to make sure no one is working on it. 
Could you please confirm?", "> hi @gante i am working on EncoderRepetitionPenaltyLogitsProcessor, is there a way I can create a new issue named EncoderRepetitionPenaltyLogitsProcessor l then link that here and it shows active in the list?\r\nlike this, you can put a checklist there\r\n\r\n@rajveer43 We can, but I don't see the benefit of it (each issue would be very small and straightforward), so I'm not convinced about having this extra step :D ", "Hi @gante , I'd love to try writing examples for `HammingDiversityLogitsProcessor` !", "Hi @gante can I work on ExponentialDecayLengthPenalty? Thanks!", "Hi @gante , \r\nWorking on [EpsilonLogitsWarper](https://github.com/huggingface/transformers/blob/a6e6b1c622d8d08e2510a82cb6266d7b654f1cbf/src/transformers/generation/logits_process.py#L442)", "Hi @gante ,\r\nCan I work on SuppressTokensAtBeginLogitsProcessor ~TopPLogitsWarper~ ? Thanks! :) ", "> Hi @gante , Can I work on TopPLogitsWarper? Thanks! :)\r\n\r\nHi! I'm already working on it.", "> > Hi @gante , Can I work on TopPLogitsWarper? Thanks! :)\r\n> \r\n> Hi! I'm already working on it.\r\n\r\nAh I see, It looks like you spelled it as `TopPLogitsWrapper` rather than `TopPLogitsWarper` so I couldn't find it with a CTRL + F my bad! I'll take up SuppressTokensAtBeginLogitsProcessor, don't want to steal that from you haha 👍 ", "> > > Hi @gante , Can I work on TopPLogitsWarper? Thanks! :)\r\n> > \r\n> > \r\n> > Hi! I'm already working on it.\r\n> \r\n> Ah I see, It looks like you spelled it as `TopPLogitsWrapper` rather than `TopPLogitsWarper` so I couldn't find it with a CTRL + F my bad! I'll take up SuppressTokensAtBeginLogitsProcessor, don't want to steal that from you haha 👍\r\n\r\nMy bad, I'll correct it. Thanks!", "Hello, I'll work on the WhisperTimeStampLogitsProcessor. ", "@gante While working on the docs, I noticed there might be an issue with the current implementation of ExponentialDecayLengthPenalty.\r\n\r\nThe processor is intended to exponentially increase the score of the eos_token_id after start_index has been reached, allowing generating shorter sequences without having a hard cutoff.\r\n\r\nWhen working with shorter sequences (Up to ~200) it doesn't necessarily cut the sequence, no matter how large the decay factor is.\r\nIn the following line, the processor attempts to increase the score of EOS. However when EOS score is negative, this actually decreases the score, as the exponent will be positive. As I understand, giving a negative decay factor won't work as well due to the power. Due to this it will only succeed if EOS becomes positive.\r\nhttps://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L982\r\n\r\nTwo questions:\r\n\r\n1. Is this actually an issue?\r\n2. If it is, I believe I can fix it. Should I open an issue, or just fix it with the docs PR?" ]
1,689
1,693
1,692
MEMBER
null
# Context `.generate()` can be extensively manipulated through `LogitsProcessor` (and `LogitsWarper`) classes. Those classes are the code implementation behind flags like `temperature` or `top_k`. Most of our `LogitsProcessor` classes have a docstring that briefly describes their effect. However, unless you are an expert in text generation, it's hard to fully grasp the impact of using each class. In some cases, it is also non-trivial to prepare the arguments to initialize the `LogitsProcessor` class. As such, each class should have a clear usage example with `.generate()` in their docstring! 💪 Here is an example: [SequenceBiasLogitsProcessor docstring](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L559). Contrarily to the other classes (at the time of writing), we can quickly learn how to use it just by reading its docstring. We are also immediately aware of a few caveats 🤓 Bonus points: our docstring examples are part of our CI, so we would be beefing up our tests to ensure we don't add regressions 🤗 This issue is part of the [text generation docs rework](https://github.com/huggingface/transformers/issues/24575). # How to participate? 1. Ensure you've read our contributing [guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) 📜 2. Claim your `LogitProcessor` class in this thread (confirm no one is working on it). You can check the full list of classes below, and you can find their implementation in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py) 🎯 - You may need to do some detective work to fully understand the purpose of the class. For instance, some classes were created as part of a paper to be applied to any model, others are model-specific, and some exist to avoid weird bugs 🕵️ - Looking at the git history is a great way to understand how a `LogitsProcessor` came to be. 3. Implement your changes, taking the [SequenceBiasLogitsProcessor docstring](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L559) as reference 💪 - Add a clear example that calls the processor through `.generate()`. Make sure the example's outputs are correct and that the model used in the test is a small model (anything larger than GPT2 needs explicit approval); - If you feel like the original docstring could be better, feel free to enhance it as well! - Don't forget to run `make fixup` before your final commit. 4. Open the PR and tag me in it 🎊 # Tracker - [x] MinNewTokensLengthLogitsProcessor - [x] TemperatureLogitsWarper - [x] RepetitionPenaltyLogitsProcessor - [ ] EncoderRepetitionPenaltyLogitsProcessor - [x] TopPLogitsWarper - [ ] TopKLogitsWarper - [ ] TypicalLogitsWarper - [x] EpsilonLogitsWarper - [x] EtaLogitsWarper - [x] NoRepeatNGramLogitsProcessor - [ ] EncoderNoRepeatNGramLogitsProcessor - [x] SequenceBiasLogitsProcessor - [x] NoBadWordsLogitsProcessor - [ ] PrefixConstrainedLogitsProcessor - [x] HammingDiversityLogitsProcessor - [ ] ForcedBOSTokenLogitsProcessor - [ ] ForcedEOSTokenLogitsProcessor - [ ] InfNanRemoveLogitsProcessor - [ ] ExponentialDecayLengthPenalty - [ ] LogitNormalization - [ ] SuppressTokensAtBeginLogitsProcessor - [ ] SuppressTokensLogitsProcessor - [ ] ForceTokensLogitsProcessor - [ ] WhisperTimeStampLogitsProcessor - [ ] ClassifierFreeGuidanceLogitsProcessor
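For contributors claiming a class above, a rough sketch of the kind of runnable snippet the issue asks for, here exercising `NoRepeatNGramLogitsProcessor` through the `no_repeat_ngram_size` generation flag; the checkpoint and prompt are arbitrary small choices, not part of the issue.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A sequence: one, two, three", return_tensors="pt")

# Without the processor greedy decoding tends to loop; with it, no 2-gram can repeat.
out = model.generate(**inputs, max_new_tokens=20, do_sample=False, no_repeat_ngram_size=2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```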
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24783/reactions", "total_count": 9, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 9, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24783/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24782/comments
https://api.github.com/repos/huggingface/transformers/issues/24782/events
https://github.com/huggingface/transformers/pull/24782
1,801,352,214
PR_kwDOCUB6oc5VVMwl
24,782
Skip torchscript tests for `MusicgenForConditionalGeneration`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? This model class requires the model tester to prepare `input_values` and `padding_mask` for torchscript tests. So far I think it is fine to skip it until we have high usage.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24782/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24782", "html_url": "https://github.com/huggingface/transformers/pull/24782", "diff_url": "https://github.com/huggingface/transformers/pull/24782.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24782.patch", "merged_at": 1689256458000 }
https://api.github.com/repos/huggingface/transformers/issues/24781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24781/comments
https://api.github.com/repos/huggingface/transformers/issues/24781/events
https://github.com/huggingface/transformers/issues/24781
1,801,282,248
I_kwDOCUB6oc5rXWLI
24,781
Add text-mesh models inside Hugginfaces
{ "login": "math-sasso", "id": 23565626, "node_id": "MDQ6VXNlcjIzNTY1NjI2", "avatar_url": "https://avatars.githubusercontent.com/u/23565626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/math-sasso", "html_url": "https://github.com/math-sasso", "followers_url": "https://api.github.com/users/math-sasso/followers", "following_url": "https://api.github.com/users/math-sasso/following{/other_user}", "gists_url": "https://api.github.com/users/math-sasso/gists{/gist_id}", "starred_url": "https://api.github.com/users/math-sasso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/math-sasso/subscriptions", "organizations_url": "https://api.github.com/users/math-sasso/orgs", "repos_url": "https://api.github.com/users/math-sasso/repos", "events_url": "https://api.github.com/users/math-sasso/events{/privacy}", "received_events_url": "https://api.github.com/users/math-sasso/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @math-sasso, \r\n\r\nThis would be a great addition to the library! \r\n\r\nI don't know the paper's in depth, but I believe that both of these models - CLIPMesh and Shap-e - are diffusion models and so might be better suggestions for the diffusers library: https://github.com/huggingface/diffusers" ]
1,689
1,689
null
NONE
null
### Feature request Text-to-3D models are really gaining traction in some industries, but the state-of-the-art techniques are currently very hard to integrate into production code. Some examples are: https://www.nasir.lol/clipmesh https://github.com/openai/shap-e It would be awesome for the community if HF had these integrated. ### Motivation Text-to-3D models can have a big impact in multiple types of industry. ### Your contribution With some guidance I can help work on this side, but I will need help from HF developers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24781/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24780/comments
https://api.github.com/repos/huggingface/transformers/issues/24780/events
https://github.com/huggingface/transformers/pull/24780
1,801,223,888
PR_kwDOCUB6oc5VUwfX
24,780
Rm duplicate pad_across_processes
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Accelerate now handles `pad_across_processes` directly, so removes code copied from Accelerate. As it's internal, no need for a deprecation cycle Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
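For context, a small sketch of the Accelerate utility the Trainer now calls instead of its own copy; the tensor below is a placeholder and a real run would launch one process per device.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each process may hold predictions with a different sequence length.
logits = torch.randn(8, 100)  # placeholder local tensor

# Pad along dim=1 to the longest length across processes, then gather for metrics.
padded = accelerator.pad_across_processes(logits, dim=1, pad_index=-100)
gathered = accelerator.gather_for_metrics(padded)
```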
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24780/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24780", "html_url": "https://github.com/huggingface/transformers/pull/24780", "diff_url": "https://github.com/huggingface/transformers/pull/24780.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24780.patch", "merged_at": 1689176841000 }
https://api.github.com/repos/huggingface/transformers/issues/24779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24779/comments
https://api.github.com/repos/huggingface/transformers/issues/24779/events
https://github.com/huggingface/transformers/issues/24779
1,801,100,266
I_kwDOCUB6oc5rWpvq
24,779
Best approach to fine-tune a GPT model for feature extraction
{ "login": "Luke-4", "id": 138615931, "node_id": "U_kgDOCEMcew", "avatar_url": "https://avatars.githubusercontent.com/u/138615931?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luke-4", "html_url": "https://github.com/Luke-4", "followers_url": "https://api.github.com/users/Luke-4/followers", "following_url": "https://api.github.com/users/Luke-4/following{/other_user}", "gists_url": "https://api.github.com/users/Luke-4/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luke-4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luke-4/subscriptions", "organizations_url": "https://api.github.com/users/Luke-4/orgs", "repos_url": "https://api.github.com/users/Luke-4/repos", "events_url": "https://api.github.com/users/Luke-4/events{/privacy}", "received_events_url": "https://api.github.com/users/Luke-4/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! This is better on [HF forums](https://huggingface.co/).\r\n\r\nThis github repository is mainly for issues and feature requests 🙏 Thank you for your comprehension.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
Hi All, I am trying to use BioGPT as a feature encoder and I want to check whether fine-tuning improves the quality of the embeddings. I have two options. The first is to fine-tune BioGPT without passing the labels and then use the last token of the last hidden state for classification with a separate machine-learning model. (Is it possible to fine-tune BioGPT as an encoder with the labels? Do the labels make any difference since the model is not attempting to classify?) The second option is to use BioGptForSequenceClassification, which has a sequence classification head on top (a linear layer), and fine-tune it by passing the labels to the model. I can then use this fine-tuned model for classification, or use the last token of the last hidden state with a separate machine-learning classifier.
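A hedged sketch of the second option described above, plus how the last hidden state could feed an external classifier; the checkpoint, label count, and example sentence are assumptions, not something stated in the question.

```python
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=2)

inputs = tokenizer("Aspirin reduces the risk of myocardial infarction.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

logits = outputs.logits                   # scores from the linear classification head
last_hidden = outputs.hidden_states[-1]   # (batch, seq_len, hidden_size)
features = last_hidden[:, -1, :]          # last-token embedding for a separate classifier
```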
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24779/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24778/comments
https://api.github.com/repos/huggingface/transformers/issues/24778/events
https://github.com/huggingface/transformers/issues/24778
1,801,056,132
I_kwDOCUB6oc5rWe-E
24,778
save quantized model throws error.
{ "login": "nemesis00sam", "id": 112406441, "node_id": "U_kgDOBrMvqQ", "avatar_url": "https://avatars.githubusercontent.com/u/112406441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nemesis00sam", "html_url": "https://github.com/nemesis00sam", "followers_url": "https://api.github.com/users/nemesis00sam/followers", "following_url": "https://api.github.com/users/nemesis00sam/following{/other_user}", "gists_url": "https://api.github.com/users/nemesis00sam/gists{/gist_id}", "starred_url": "https://api.github.com/users/nemesis00sam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nemesis00sam/subscriptions", "organizations_url": "https://api.github.com/users/nemesis00sam/orgs", "repos_url": "https://api.github.com/users/nemesis00sam/repos", "events_url": "https://api.github.com/users/nemesis00sam/events{/privacy}", "received_events_url": "https://api.github.com/users/nemesis00sam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`load_in_8bit=True,` --> cc @younesbelkada as he knows much better 🙏 ", "Hi @nemesis00sam \r\nThanks for the issue, https://github.com/huggingface/transformers/pull/24416 fixed the issue you mentioned\r\nplease install transformers from source \r\n```\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nAnd it should be solved right after", "Thanks for prompt answer. @younesbelkada ", "I still see the same issue while saving `meta-llama/Llama-2-13b-chat-hf` as safetensors\r\n\r\nMy setup:\r\n```\r\npip list | grep -E 'trans|accel|bits|safe'\r\naccelerate 0.21.0\r\nbitsandbytes 0.41.0\r\nsafetensors 0.3.1\r\ntransformers 4.32.0.dev0 # uninstalled and installed from git on 7/28\r\n```\r\n\r\nScript:\r\n```\r\nmodel_name = \"meta-llama/Llama-2-13b-chat-hf\"\r\nsave_dir = \"/home/abc/local_models/Llama-2-13b-chat-8bit\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\ntokenizer.save_pretrained(save_dir, save_config=True)\r\nmax_memory = {0: \"22GIB\", 1: \"22GIB\", 2: \"22GIB\", 3: \"22GIB\", 4: \"22GIB\", 5: \"22GIB\", 6: \"22GIB\", 7: \"22GIB\"}\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, max_memory=max_memory)\r\nmodel.save_pretrained(save_dir, save_config=True, safe_serialization=True)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/hmohapa/search/llama/8bit_quantize.py\", line 26, in <module>\r\n model.save_pretrained(save_dir, save_config=True, safe_serialization=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 1803, in save_pretrained\r\n ptrs[id_tensor_storage(tensor)].append(name)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pytorch_utils.py\", line 287, in id_tensor_storage\r\n return tensor.device, storage_ptr(tensor), storage_size(tensor)\r\nAttributeError: 'str' object has no attribute 'device'\r\n```", "i had the same issue with `transformers==4.30.2`. it's gone with `transformers==4.31.0`. ", "@hrushikesh198 for safetensors there is indeed a bug, would be happy opening a new ticket for that?", "I'm still getting the message in transformers 4.33.3 and bitsandbytes==0.41.0\r\nNot using 4.34.0 yet, because it runs into a different bug that causes a crash.\r\n\r\nI'ts not safetransfomers, I'm just using a pytorch bin file\r\n\r\n```\r\nhome/coen/.local/lib/python3.10/site-packages/transformers/modeling_utils.py:1830: UserWarning: You are calling `save_pretrained` to a 8-bit converted model you may likely encounter unexepected behaviors. If you want to save 8-bit models, make sure to have `bitsandbytes>0.37.2` installed.\r\n warnings.warn(\r\n```", "@coen22 can you open a new ticket with a clean reproducer and tag me?", "!pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git\r\n!pip install -q datasets bitsandbytes einops\r\n!pip install -q -U torch==2.0.1 bitsandbytes==0.40.2\r\nworks with this combination of versions\r\n" ]
1,689
1,698
1,689
NONE
null
### System Info ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes ================================================================================ bin /opt/conda/envs/pytorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so CUDA SETUP: CUDA runtime path found: /opt/conda/envs/pytorch/lib/libcudart.so.11.0 CUDA SETUP: Highest compute capability among GPUs detected: 7.5 CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /opt/conda/envs/pytorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so... [2023-07-12 13:52:54,626] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm trying to save quantized model. First attempt didn't work. (I also opened an issue, https://github.com/huggingface/accelerate/issues/1713, to clarify it). I opened this issue because I'm receiving an error message when I run following code. I'm not sure I'm following the right instructions written on https://huggingface.co/docs/transformers/main_classes/quantization. Because model is pushed to hub in documentation. But I expect to save it to local filesystem. Thanks for your help in advance. ``` ### load packages ### import transformers import textwrap from transformers import LlamaTokenizer, LlamaForCausalLM import os import sys from typing import List import accelerate from peft import ( LoraConfig, get_peft_model, get_peft_model_state_dict, prepare_model_for_int8_training, ) #import fire import torch from datasets import load_dataset import pandas as pd import deepspeed DEVICE = "cuda" if torch.cuda.is_available() else "cpu" DEVICE ### load model ### BASE_MODEL = "decapoda-research/llama-7b-hf" model = LlamaForCausalLM.from_pretrained( BASE_MODEL, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto", ) model.save_pretrained(save_directory="quantized_decapoda-research_llama-7b-hf_v2") ``` Error Message: ``` /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:1709: UserWarning: You are calling `save_pretrained` to a 8-bit converted model you may likely encounter unexepected behaviors. If you want to save 8-bit models, make sure to have `bitsandbytes>0.37.2` installed. warnings.warn( --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[3], line 1 ----> 1 model.save_pretrained(save_directory="quantized_decapoda-research_llama-7b-hf_v2") File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:1820, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, **kwargs) 1817 weights_name = SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME 1818 weights_name = _add_variant(weights_name, variant) -> 1820 shards, index = shard_checkpoint(state_dict, max_shard_size=max_shard_size, weights_name=weights_name) 1822 # Clean the folder from a previous save 1823 for filename in os.listdir(save_directory): File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:318, in shard_checkpoint(state_dict, max_shard_size, weights_name) 315 storage_id_to_block = {} 317 for key, weight in state_dict.items(): --> 318 storage_id = id_tensor_storage(weight) 320 # If a `weight` shares the same underlying storage as another tensor, we put `weight` in the same `block` 321 if storage_id in storage_id_to_block: File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/pytorch_utils.py:290, in id_tensor_storage(tensor) 283 def id_tensor_storage(tensor: torch.Tensor) -> Tuple[torch.device, int, int]: 284 """ 285 Unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. For 286 example, "meta" tensors all share the same storage, and thus their identifier will all be equal. This identifier is 287 guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with 288 non-overlapping lifetimes may have the same id. 289 """ --> 290 return tensor.device, storage_ptr(tensor), storage_size(tensor) AttributeError: 'str' object has no attribute 'device' ``` ### Expected behavior Save quantized model to local filesystem.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24778/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24777/comments
https://api.github.com/repos/huggingface/transformers/issues/24777/events
https://github.com/huggingface/transformers/pull/24777
1,800,954,617
PR_kwDOCUB6oc5VT1o2
24,777
Make CLIP model able to use newly added tokens with meaningful pooling
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I will fix the CI by using `fix-copies` later", "_The documentation is not available anymore as the PR was closed or merged._", "I will merge once the branch is cut tonight.", "Sorry for the spam: @sgugger said the branch cut would be on next Monday. I think it's safer to wait until then." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Fix #24650 This is to address the feature request #24650. Although the default values of bos/eos were corrected in #24773, the existing configs on the Hub still have the incorrect values `1` and `2`, which prevents the CLIP model from using newly added tokens when a user adds them. Although we could open mass PRs on the Hub, I want to decouple (slightly) that effort from the ability to support this feature. With this PR, if a user wants to use newly added tokens, they have to specify/update the `eos_token_id`. **We don't need to wait for all Hub repos to be updated before merging this PR.**
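A hedged sketch of what "specify/update the `eos_token_id`" could look like from the user side once this change is in; the checkpoint and the added token are illustrative only.

```python
from transformers import CLIPTextModel, CLIPTokenizer

checkpoint = "openai/clip-vit-base-patch32"
tokenizer = CLIPTokenizer.from_pretrained(checkpoint)
model = CLIPTextModel.from_pretrained(checkpoint)

tokenizer.add_tokens(["<my-style>"])           # new token, illustrative
model.resize_token_embeddings(len(tokenizer))  # grow the text embedding matrix

# Pooling is taken at the EOS position, so the config must carry the real EOS id.
model.config.eos_token_id = tokenizer.eos_token_id

inputs = tokenizer("a photo in <my-style>", return_tensors="pt")
pooled = model(**inputs).pooler_output
```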
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24777", "html_url": "https://github.com/huggingface/transformers/pull/24777", "diff_url": "https://github.com/huggingface/transformers/pull/24777.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24777.patch", "merged_at": 1689618920000 }
https://api.github.com/repos/huggingface/transformers/issues/24776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24776/comments
https://api.github.com/repos/huggingface/transformers/issues/24776/events
https://github.com/huggingface/transformers/pull/24776
1,800,891,310
PR_kwDOCUB6oc5VTnnA
24,776
Fix slow list-to-tensor conversion warned about at tokenization_utils_base.py:731
{ "login": "askxiaozhang", "id": 112556925, "node_id": "U_kgDOBrV7fQ", "avatar_url": "https://avatars.githubusercontent.com/u/112556925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/askxiaozhang", "html_url": "https://github.com/askxiaozhang", "followers_url": "https://api.github.com/users/askxiaozhang/followers", "following_url": "https://api.github.com/users/askxiaozhang/following{/other_user}", "gists_url": "https://api.github.com/users/askxiaozhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/askxiaozhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/askxiaozhang/subscriptions", "organizations_url": "https://api.github.com/users/askxiaozhang/orgs", "repos_url": "https://api.github.com/users/askxiaozhang/repos", "events_url": "https://api.github.com/users/askxiaozhang/events{/privacy}", "received_events_url": "https://api.github.com/users/askxiaozhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@askxiaozhang Thanks for opening this PR and contributing to improving the transformers library! There is another PR opened #24772 which addresses this, and so this PR will not be merged in. " ]
1,689
1,689
1,689
NONE
null
tokenization_utils_base.py:731: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.asarray() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:230.) This PR addresses that warning in order to speed up the list-to-tensor conversion.
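A small, self-contained illustration of the workaround the warning itself recommends, independent of this PR's exact diff; the shapes and values are made up.

```python
import numpy as np
import torch

batch = [np.zeros((4, 8), dtype=np.int64) for _ in range(32)]

# Slow path the warning complains about: a tensor built from a Python list of ndarrays.
slow = torch.tensor(batch)

# Faster: stack into one ndarray first, then convert once.
fast = torch.tensor(np.asarray(batch))

assert torch.equal(slow, fast)
```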
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24776/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24776", "html_url": "https://github.com/huggingface/transformers/pull/24776", "diff_url": "https://github.com/huggingface/transformers/pull/24776.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24776.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24775/comments
https://api.github.com/repos/huggingface/transformers/issues/24775/events
https://github.com/huggingface/transformers/pull/24775
1,800,866,471
PR_kwDOCUB6oc5VTiRf
24,775
Fix pad across processes dim in trainer and not being able to set the timeout
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yep, sorry 😅 Right one is used now", "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24775). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Reverts tiny regression where `dim=1` is needed during `pad_across_processes`, and `ddp_timeout` wasn't trickled down through `PartialState` Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24751 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24775", "html_url": "https://github.com/huggingface/transformers/pull/24775", "diff_url": "https://github.com/huggingface/transformers/pull/24775.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24775.patch", "merged_at": 1689170511000 }
https://api.github.com/repos/huggingface/transformers/issues/24774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24774/comments
https://api.github.com/repos/huggingface/transformers/issues/24774/events
https://github.com/huggingface/transformers/issues/24774
1,800,845,241
I_kwDOCUB6oc5rVre5
24,774
torch_dtype='auto' is not working when using AutoModel.from_pretrained(...)
{ "login": "Cyrilvallez", "id": 71554963, "node_id": "MDQ6VXNlcjcxNTU0OTYz", "avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Cyrilvallez", "html_url": "https://github.com/Cyrilvallez", "followers_url": "https://api.github.com/users/Cyrilvallez/followers", "following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}", "gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}", "starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions", "organizations_url": "https://api.github.com/users/Cyrilvallez/orgs", "repos_url": "https://api.github.com/users/Cyrilvallez/repos", "events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}", "received_events_url": "https://api.github.com/users/Cyrilvallez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @younesbelkada as I think he looked into this previously! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Still needs to be addressed", "Are you sure? Running:\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained('facebook/opt-125m', torch_dtype='auto')\r\nmodel.dtype\r\n```\r\ngives me `float16`.\r\n\r\nIn general, note that we do not recomment using `torch_dtype=\"auto\"` but setting the `torch_dtype` you want yourself, as users of the Hub never set the dtype in the config to a value they thought about (this is just an indication of the dtype they used before pushing the model on the Hub). One dtype might work on their hardware without working on yours.\r\n\r\nAs for your other question of why `AutoModelForCausalLM.from_pretrained(...)` does not automatically respects the dtype of the config, we just follow PyTorch convention here: if you instantiate a model without specifying a dtype, then load a state dict into it, the resulting model will be in float32 (the default floating point format) whatever the type of the state dict. The dtype of the original model is preserved. Wanting to load in a specific dtype should be specifically indicated by the user, as float32 (the default dtype) is the only dtype that works on all kinds of hardware (float16 fails on the CPU, bflaot16 on older GPUs etc.).", "Ha yes, it seems that the latest version (v4.31.0) solved the issue! Sorry about that, and thanks for the additional explanation!" ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: macOS-12.2.1-x86_64-i386-64bit - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @younesbelkada @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The `torch_dtype='auto'` argument is not forwarded correctly when using `AutoModelForCausalLM(model_name, torch_dtype='auto')`. For example, the opt model 'facebook/opt-125m' has `torch_dtype: float16` in its config file but the following happens: ```python from transformers import AutoModelForCausalLM, OPTForCausalLM model = OPTForCausalLM.from_pretrained('facebook/opt-125m', torch_dtype='auto') model.dtype # CORRECT dtype >>> torch.float16 model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m', torch_dtype='auto') model.dtype # INCORRECT dtype >>> torch.float32 ``` ### Expected behavior Both outputs should be `torch.float16` as in the config file specification. From what I've looked into, this comes from ```python if kwargs_copy.get("torch_dtype", None) == "auto": _ = kwargs_copy.pop("torch_dtype") ``` in `transformers.models.auto.auto_factory.py`, line 441. The additional kwarg specifying dtype is poped and the dtype is only inferred by the dtype argument of the config file, which is then not given explicitly (only implicitly in the config) to `PretrainedModel.from_pretrained(model_name, config=config,...)`, which does not use it if the explicit `torch_dtype` argument is not provided. I would be happy to help solve the issue if needed. Also, I find it strange that ```python from transformers import AutoConfig, AutoModelForCausalLM config = AutoConfig.from_pretrained('facebook/opt-125m') config.torch_dtype >>> torch.float16 model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m', config=config) model.dtype >>> torch.float32 ``` i.e. `AutoModelForCausalLM.from_pretrained(...)` does not respect the dtype of the config as I was saying before (this is the reason of the previous bug). But maybe this is to avoid errors when model configs specify `torch_dtype: bfloat16` and users try to instantiate on the cpu? Anyway, when specifying 'auto', i.e. `AutoModelForCausalLM.from_pretrained(...torch_dtype='auto')`, I think it is absolutely necessary for the model to be instantiated with the dtype specified on the config file if any, even if it may break code for `torch.bfloat16` models, because users using this feature are aware of what they are doing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24774/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24773/comments
https://api.github.com/repos/huggingface/transformers/issues/24773/events
https://github.com/huggingface/transformers/pull/24773
1,800,620,811
PR_kwDOCUB6oc5VSrok
24,773
Update default values of bos/eos token ids in `CLIPTextConfig`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Regarding the padding token:\r\n\r\n(copy past from (partial) internal discussion given @patil-suraj)\r\n\r\n> When we added CLIP I tested for the text_projection , logits_per_image and logits_per_text. For the text_projection the model pulls the embeddings of the last token i.e the eos token. The rest of the tokens i.e the padding tokens are ignored. We can see in this [colab](https://colab.research.google.com/drive/1kgGMnFpkc4TP7otlhAOngp9Wlke4tKJw?usp=sharing) that text_projection , logits_per_image and logits_per_text match with the OAI model because we only take the pooled embeddings. And when CLIP was released it was intended for these features which are needed for contrastive tasks. Hence I didn't test against all token embeddings.\r\n\r\n> IMO the wrong padding token will only affect inference when using all token emebeddings i.e Stable Diffusion. For training even if the padding token is wrong it shouldn't affect because\r\n\r\n > - Because CLIP did not use attention_mask during training.\r\n\r\n\r\n > - CLIPTextEncoder uses casual mask, so the tokens to the right don't influence the hidden states of tokens to the left.\r\n > - CLIP is trained with contrastive loss which is computed using the projections, and as I said above the text_projection is computed by pooling the eos token embeddings, which will be always similar no matter what the padding token is, because CLIPTextEncoder is causal, so the eos embeddings won't be affected by tokens on the right.\r\n > - Hence, for downstream training (like SD) as long as a consistent token is used for padding it shouldn't severely affect the training. But for inference we will need to use the same token as Patrick explained. \r\nThis could also be the reason that we didn't have any issue related to this.\r\n\r\n> As far as I can understand, it'll only affect the inference if a different token (compared to the padding token used for training) is used for padding. (edited)" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Currently the default values are not the ones from the corresponding tokenizers. See discussion in #24650 However, we can't use the `config.eos_token_id` in the modeling file (which is the ultimate goal in #24650) with only the change in this PR. We will have to update all the Hub repo. config files first 😢 . (Probably there is something easier to do)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24773/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24773", "html_url": "https://github.com/huggingface/transformers/pull/24773", "diff_url": "https://github.com/huggingface/transformers/pull/24773.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24773.patch", "merged_at": 1689162626000 }
https://api.github.com/repos/huggingface/transformers/issues/24772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24772/comments
https://api.github.com/repos/huggingface/transformers/issues/24772/events
https://github.com/huggingface/transformers/pull/24772
1,800,589,080
PR_kwDOCUB6oc5VSknO
24,772
fix "UserWarning: Creating a tensor from a list of numpy.ndarrays is …
{ "login": "liucw2012", "id": 743552, "node_id": "MDQ6VXNlcjc0MzU1Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/743552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liucw2012", "html_url": "https://github.com/liucw2012", "followers_url": "https://api.github.com/users/liucw2012/followers", "following_url": "https://api.github.com/users/liucw2012/following{/other_user}", "gists_url": "https://api.github.com/users/liucw2012/gists{/gist_id}", "starred_url": "https://api.github.com/users/liucw2012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liucw2012/subscriptions", "organizations_url": "https://api.github.com/users/liucw2012/orgs", "repos_url": "https://api.github.com/users/liucw2012/repos", "events_url": "https://api.github.com/users/liucw2012/events{/privacy}", "received_events_url": "https://api.github.com/users/liucw2012/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ArthurZucker Do I need to make any changes to this PR? What's the next step?", "@ydshieh could review this pr?", "Seems to have ~30x speed up.\r\n\r\n```python\r\nimport numpy as np\r\nimport torch\r\n\r\nimport time\r\n\r\ndef measure(batch_size, seq_len):\r\n\r\n a = np.ones(shape=(batch_size, seq_len, 16))\r\n # A list of numpy arrary\r\n b = [x for x in a]\r\n\r\n # directly to torch.tensor\r\n st = time.time()\r\n c = torch.tensor(b)\r\n t1 = time.time() - st\r\n\r\n # np -> tensor\r\n st = time.time()\r\n d = np.array(b)\r\n e = torch.tensor(d)\r\n t2 = time.time() - st\r\n\r\n print(f\"batch_size: {batch_size} | seq_len: {seq_len} | main: {t1} sec. | PR: {t2} sec.\")\r\n\r\n\r\nbatch_size = 128\r\nseq_len = 32\r\nfor idx in range(10):\r\n batch_size = batch_size * 2\r\n measure(batch_size, seq_len)\r\n\r\n\r\nbatch_size = 128\r\nseq_len = 256\r\nfor idx in range(8):\r\n batch_size = batch_size * 2\r\n measure(batch_size, seq_len)\r\n```\r\n\r\n\r\nresults:\r\n\r\n```bash\r\nbatch_size: 256 | seq_len: 32 | main: 0.010269403457641602 sec. | PR: 0.002008676528930664 sec.\r\nbatch_size: 512 | seq_len: 32 | main: 0.015998125076293945 sec. | PR: 0.0010027885437011719 sec.\r\nbatch_size: 1024 | seq_len: 32 | main: 0.03223681449890137 sec. | PR: 0.0019538402557373047 sec.\r\nbatch_size: 2048 | seq_len: 32 | main: 0.0663607120513916 sec. | PR: 0.004067182540893555 sec.\r\nbatch_size: 4096 | seq_len: 32 | main: 0.13183259963989258 sec. | PR: 0.0060040950775146484 sec.\r\nbatch_size: 8192 | seq_len: 32 | main: 0.26061558723449707 sec. | PR: 0.011055707931518555 sec.\r\nbatch_size: 16384 | seq_len: 32 | main: 0.5237565040588379 sec. | PR: 0.02300405502319336 sec.\r\nbatch_size: 32768 | seq_len: 32 | main: 1.0568530559539795 sec. | PR: 0.041966915130615234 sec.\r\nbatch_size: 65536 | seq_len: 32 | main: 2.0813064575195312 sec. | PR: 0.0868995189666748 sec.\r\nbatch_size: 131072 | seq_len: 32 | main: 4.243735074996948 sec. | PR: 0.17353129386901855 sec.\r\n```\r\n\r\n```bash\r\n\r\nbatch_size: 256 | seq_len: 256 | main: 0.06456398963928223 sec. | PR: 0.0034742355346679688 sec.\r\nbatch_size: 512 | seq_len: 256 | main: 0.12811279296875 sec. | PR: 0.005001068115234375 sec.\r\nbatch_size: 1024 | seq_len: 256 | main: 0.26175403594970703 sec. | PR: 0.010001659393310547 sec.\r\nbatch_size: 2048 | seq_len: 256 | main: 0.5197086334228516 sec. | PR: 0.019011259078979492 sec.\r\nbatch_size: 4096 | seq_len: 256 | main: 1.040560245513916 sec. | PR: 0.03655409812927246 sec.\r\nbatch_size: 8192 | seq_len: 256 | main: 2.089771032333374 sec. | PR: 0.07351517677307129 sec.\r\nbatch_size: 16384 | seq_len: 256 | main: 4.197775602340698 sec. | PR: 0.1453232765197754 sec.\r\nbatch_size: 32768 | seq_len: 256 | main: 8.368194103240967 sec. | PR: 0.36582493782043457 sec.\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24772). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,690
1,690
CONTRIBUTOR
null
fix "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor." # What does this PR do? reduce latency of codes below from 0.744675874710083s to 0.013312816619873047s. Fixes #24764 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24772/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24772", "html_url": "https://github.com/huggingface/transformers/pull/24772", "diff_url": "https://github.com/huggingface/transformers/pull/24772.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24772.patch", "merged_at": 1690376842000 }
https://api.github.com/repos/huggingface/transformers/issues/24771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24771/comments
https://api.github.com/repos/huggingface/transformers/issues/24771/events
https://github.com/huggingface/transformers/pull/24771
1,800,568,735
PR_kwDOCUB6oc5VSgOe
24,771
Add MobileVitV2 to doctests
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> ValueError: Files in `utils/documentation_tests.txt` are not in alphabetical order.\r\n\r\n@amyeroberts \r\n\r\nYou are the one creating this check 😆 ", "![image](https://github.com/huggingface/transformers/assets/22614925/6636d6fa-8da3-4305-b69f-a7925ecde3a8)\r\n", "Oh, I am wrong! The PR doctest is not triggered as this PR doesn't change modeling file. Great!" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Adds MobileVitV2 to the doctests. The example snippet wasn't working because the model's config files pointed to an image processor that doesn't exist. This adds the models to the doctests so that this is caught. Also removes a duplicate line in image_processing_auto.py Fixes #24763 (partially) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24771/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24771", "html_url": "https://github.com/huggingface/transformers/pull/24771", "diff_url": "https://github.com/huggingface/transformers/pull/24771.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24771.patch", "merged_at": 1689159978000 }
https://api.github.com/repos/huggingface/transformers/issues/24770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24770/comments
https://api.github.com/repos/huggingface/transformers/issues/24770/events
https://github.com/huggingface/transformers/pull/24770
1,800,544,672
PR_kwDOCUB6oc5VSa8x
24,770
Add multi-label text classification support to pytorch example
{ "login": "ranchlai", "id": 5043767, "node_id": "MDQ6VXNlcjUwNDM3Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/5043767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ranchlai", "html_url": "https://github.com/ranchlai", "followers_url": "https://api.github.com/users/ranchlai/followers", "following_url": "https://api.github.com/users/ranchlai/following{/other_user}", "gists_url": "https://api.github.com/users/ranchlai/gists{/gist_id}", "starred_url": "https://api.github.com/users/ranchlai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ranchlai/subscriptions", "organizations_url": "https://api.github.com/users/ranchlai/orgs", "repos_url": "https://api.github.com/users/ranchlai/repos", "events_url": "https://api.github.com/users/ranchlai/events{/privacy}", "received_events_url": "https://api.github.com/users/ranchlai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ranchlai Thanks for opening this PR and contributing to the examples! \r\n\r\nCould you add to the README for this example a snippet for running on a multi-label classification task?", "> @ranchlai Thanks for opening this PR and contributing to the examples!\r\n> \r\n> Could you add to the README for this example a snippet for running on a multi-label classification task?\r\n\r\nSure^_^. Working on the [reuters21578](https://huggingface.co/datasets/reuters21578) dataset as a minimum example. Will update README accordingly. ", "@ranchlai Please note that the examples are kept simple to be more readable. This adds a lot of complexity to the original example for something that is not covered by the primary goal of that example (run GLUE benchmark) so I would keep it separate.", "Thanks for commenting @sgugger. I understand and that's why I am trying to make the change minimum. Multiple label classification is indeed more complicated . Hence, adding a demo in the \"text-classification\" example could be helpful. Thank you~", "Yes, but maybe it could go in a new file focused on text classification only (and not GLUE)?", "That's a good idea. How about run_classification.py in parallel to run_glue.py? I can try to work it out. ", "Perfect!", "@sgugger please would you leave more comments, although I am still running more tests", "_The documentation is not available anymore as the PR was closed or merged._", "> \r\n\r\n@sgugger I think I have finished my tests. Scripts [here](https://github.com/ranchlai/transformers/tree/add_test_scripts/examples/pytorch/text-classification/test) at another branch. Please merge if looks good. ", "Thanks again for your contribution!", "I think the added content in README should be placed in the bottom." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? The transoformer config supports multi-label classification by setting config.problem_type = "multi_label_classification", but the run_glue.py does not support it. This PR add `run_classification.py` to support multi-label classification task. Main changes compared to `run_glue.py`: - [x] Add support for multi-label classification task and datasets, e.g., [Reuters-21578](https://huggingface.co/datasets/reuters21578). - [x] Remove code related to glue tasks - [x] Update README.md for multi-label classification task. - Add parameraters and code to support single/multi-label classification and regression task - Add `shuffle_train_dataset` option to shuffle train dataset. This is useful to avoid problems caused by ordered labels. - Add `metric_name` to specify the metric used to evaluate the model. - Add `remove_splits` to remove some unnsed splits from the dataset, e.g., Reuter dataset has "unused" split, IMDB dataset has "unsupervised" split. - Add `remove_columns` to remove some unnsed columns from the dataset - Add `text_column_names` to specify the (possibly multiple) columns containing the text. - Add `label_column_name` to specify the column containing the labels, e.g., `stars" for amazon review dataset - Add train/validation/test_split_name to specify the split name for train/validation/test dataset - Add do_regression to force treating text-classification task as regression task. This remove the need to change the label dtype of the dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24770/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24770", "html_url": "https://github.com/huggingface/transformers/pull/24770", "diff_url": "https://github.com/huggingface/transformers/pull/24770.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24770.patch", "merged_at": 1689850965000 }
https://api.github.com/repos/huggingface/transformers/issues/24769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24769/comments
https://api.github.com/repos/huggingface/transformers/issues/24769/events
https://github.com/huggingface/transformers/pull/24769
1,800,417,004
PR_kwDOCUB6oc5VR_L5
24,769
[fix] Change the condition of ValueError in "convert_checkpoint_from_transformers_to_megatron"
{ "login": "SeongBeomLEE", "id": 65529313, "node_id": "MDQ6VXNlcjY1NTI5MzEz", "avatar_url": "https://avatars.githubusercontent.com/u/65529313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeongBeomLEE", "html_url": "https://github.com/SeongBeomLEE", "followers_url": "https://api.github.com/users/SeongBeomLEE/followers", "following_url": "https://api.github.com/users/SeongBeomLEE/following{/other_user}", "gists_url": "https://api.github.com/users/SeongBeomLEE/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeongBeomLEE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeongBeomLEE/subscriptions", "organizations_url": "https://api.github.com/users/SeongBeomLEE/orgs", "repos_url": "https://api.github.com/users/SeongBeomLEE/repos", "events_url": "https://api.github.com/users/SeongBeomLEE/events{/privacy}", "received_events_url": "https://api.github.com/users/SeongBeomLEE/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
The "target_tensor_model_parallel_size" is related to "num_attention_heads", and the "target_pipeline_model_parallel_size" is related to "num_hidden_layers". However, the old code had "target_tensor_model_parallel_size" related to "num_hidden_layers". So we modified the code and added the part about "target_tensor_model_parallel_size". Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24769/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24769", "html_url": "https://github.com/huggingface/transformers/pull/24769", "diff_url": "https://github.com/huggingface/transformers/pull/24769.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24769.patch", "merged_at": 1689245876000 }
https://api.github.com/repos/huggingface/transformers/issues/24768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24768/comments
https://api.github.com/repos/huggingface/transformers/issues/24768/events
https://github.com/huggingface/transformers/pull/24768
1,800,406,931
PR_kwDOCUB6oc5VR8-1
24,768
🐛 torch baddbmm error fixed for BigCode models
{ "login": "mayank31398", "id": 32954280, "node_id": "MDQ6VXNlcjMyOTU0Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/32954280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayank31398", "html_url": "https://github.com/mayank31398", "followers_url": "https://api.github.com/users/mayank31398/followers", "following_url": "https://api.github.com/users/mayank31398/following{/other_user}", "gists_url": "https://api.github.com/users/mayank31398/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayank31398/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank31398/subscriptions", "organizations_url": "https://api.github.com/users/mayank31398/orgs", "repos_url": "https://api.github.com/users/mayank31398/repos", "events_url": "https://api.github.com/users/mayank31398/events{/privacy}", "received_events_url": "https://api.github.com/users/mayank31398/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24768). All of your documentation changes will be reflected on that endpoint.", "> I think we don't use PT>2.0.0 that includes the fix you mentioned above, there should be a reason for that. cc @ydshieh\r\n\r\nYou prediction is 200% correct: we have torch `2.0.1`. The mentioned torch fix is not included in that minor bug release.", "Ah I see thanks for double checking @ydshieh !", "closing this since its not relevant at this point in time." ]
1,689
1,690
1,690
CONTRIBUTOR
null
Fixes # (issue) This was needed because of a bug in pytorch https://github.com/pytorch/pytorch/issues/80588. The bug was fixed in https://github.com/pytorch/pytorch/pull/96086 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @ArthurZucker @younesbelkada @jlamypoirier
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24768", "html_url": "https://github.com/huggingface/transformers/pull/24768", "diff_url": "https://github.com/huggingface/transformers/pull/24768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24768.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24767/comments
https://api.github.com/repos/huggingface/transformers/issues/24767/events
https://github.com/huggingface/transformers/pull/24767
1,800,342,203
PR_kwDOCUB6oc5VRu-w
24,767
add aquila
{ "login": "shunxing1234", "id": 33774367, "node_id": "MDQ6VXNlcjMzNzc0MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/33774367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shunxing1234", "html_url": "https://github.com/shunxing1234", "followers_url": "https://api.github.com/users/shunxing1234/followers", "following_url": "https://api.github.com/users/shunxing1234/following{/other_user}", "gists_url": "https://api.github.com/users/shunxing1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/shunxing1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shunxing1234/subscriptions", "organizations_url": "https://api.github.com/users/shunxing1234/orgs", "repos_url": "https://api.github.com/users/shunxing1234/repos", "events_url": "https://api.github.com/users/shunxing1234/events{/privacy}", "received_events_url": "https://api.github.com/users/shunxing1234/received_events", "type": "User", "site_admin": false }
[ { "id": 5724035499, "node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub", "name": "Model on the Hub", "color": "9CA0E9", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @shunxing1234, \r\n\r\nThanks a lot for opening a PR and contributing to the HF ecosystem! 🤗\r\nWe have recently been trying to push for `model on the hub` and have as much support as we can there. It will also be easier to integrate it! Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models) if that sound good to you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24767/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24767", "html_url": "https://github.com/huggingface/transformers/pull/24767", "diff_url": "https://github.com/huggingface/transformers/pull/24767.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24767.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24766/comments
https://api.github.com/repos/huggingface/transformers/issues/24766/events
https://github.com/huggingface/transformers/issues/24766
1,800,320,785
I_kwDOCUB6oc5rTrcR
24,766
Saving LLAMA 13B checkpoint with FSDP finetuning results in disk full error
{ "login": "ari9dam", "id": 14134882, "node_id": "MDQ6VXNlcjE0MTM0ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ari9dam", "html_url": "https://github.com/ari9dam", "followers_url": "https://api.github.com/users/ari9dam/followers", "following_url": "https://api.github.com/users/ari9dam/following{/other_user}", "gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions", "organizations_url": "https://api.github.com/users/ari9dam/orgs", "repos_url": "https://api.github.com/users/ari9dam/repos", "events_url": "https://api.github.com/users/ari9dam/events{/privacy}", "received_events_url": "https://api.github.com/users/ari9dam/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "It seems saving a checkpoint requires more than 213GB (Free memory on my hard disk). Not sure if it is intended.\r\n\r\n```\r\nFilesystem Size Used Avail Use% Mounted on\r\noverlay 251G 27G 213G 11% /\r\ntmpfs 64M 0 64M 0% /dev\r\ntmpfs 434G 0 434G 0% /sys/fs/cgroup\r\nshm 2.0G 0 2.0G 0% /dev/shm\r\n/dev/sdb1 251G 27G 213G 11% /tmp\r\ntmpfs 434G 12K 434G 1% /proc/driver/nvidia\r\n/dev/root 124G 23G 102G 18% /usr/bin/nvidia-smi\r\ntmpfs 87G 2.4M 87G 1% /run/nvidia-persistenced/socket\r\ndevtmpfs 434G 0 434G 0% /dev/nvidia0\r\n```", "Hi @ari9dam, thanks for raising this issue. \r\n\r\nCould you a minimal code snippet we can use to reproduce the error? Specifically how accelerate launcher is being used, training arguments, and FDSP config. \r\n\r\nFor the transformers and accelerate source installs, which commit are you running from? \r\n\r\nWhen you say saving a checkpoint - am I right in saying this is the memory requirement for saving a single checkpoint after 1 epoch of training is > 213 GB? ", "Yes, \" the memory requirement for saving a single checkpoint after 1 epoch of training is > 213 GB?\". \r\n\r\n`accelerate launch --config_file accelerate_config.yaml --num_machines 4 --num_processes 16 --machine_rank $NODE_RANK --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT ./trainer.py --model_name_or_path \"..\" --data_path \"...\" --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --do_train --evaluation_strategy no --output_dir outputs --learning_rate 2e-5 --num_train_epochs 4 --lr_scheduler_type cosine --warmup_ratio 0.03 --weight_decay 0.0 --logging_steps 1 --save_strategy epoch --bf16 true --tf32 true --load_best_model_at_end false --model_max_length 1024 --gradient_checkpointing true --save_total_limit 1 --model_resume_from_checkpoint false --torch_compile false`\r\n\r\n### accelerate_config.yaml\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: FSDP\r\ndowncast_bf16: 'no'\r\nfsdp_config:\r\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\r\n fsdp_backward_prefetch_policy: BACKWARD_PRE\r\n fsdp_forward_prefetch: false\r\n fsdp_offload_params: false\r\n fsdp_sharding_strategy: 1\r\n fsdp_state_dict_type: FULL_STATE_DICT\r\n fsdp_sync_module_states: true\r\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\r\n fsdp_use_orig_params: true\r\nmain_training_function: main\r\nnum_machines: 1\r\nnum_processes: 2\r\nmixed_precision: bf16\r\nrdzv_backend: static\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```\r\ntransformers==4.31.0.dev0 (https://github.com/huggingface/transformers/commit/45025d92f815675e483f32812caa28cce3a960e7)\r\n accelerate==0.21.0.dev0 (https://github.com/huggingface/accelerate/commit/7954a28a71d484c4182a6b1074c1b8cc51642fc9)\r\n \r\n ", "@ari9dam Thanks for the additional info\r\n\r\ncc @pacman100 @muellerzr ", "Hello @ari9dam, please show the contents of the checkpoint along with their sizes", "total 201G\r\n 69 Jul 14 10:40 added_tokens.json\r\n 656 Jul 14 10:40 config.json\r\n 137 Jul 14 10:40 generation_config.json\r\n 97G Jul 14 10:46 optimizer.bin\r\n6.1G Jul 14 10:41 optimizer.pt\r\n9.3G Jul 14 10:42 pytorch_model-00001-of-00006.bin\r\n9.3G Jul 14 10:42 pytorch_model-00002-of-00006.bin\r\n9.3G Jul 14 10:42 pytorch_model-00003-of-00006.bin\r\n9.2G Jul 14 10:42 pytorch_model-00004-of-00006.bin\r\n9.2G Jul 14 10:42 pytorch_model-00005-of-00006.bin\r\n2.4G Jul 14 10:41 pytorch_model-00006-of-00006.bin\r\n 49G Jul 14 10:44 pytorch_model.bin\r\n 33K Jul 14 10:40 
pytorch_model.bin.index.json\r\n 18K Jul 14 10:40 rng_state_0.pth\r\n 18K Jul 14 10:40 rng_state_10.pth\r\n 18K Jul 14 10:40 rng_state_11.pth\r\n 18K Jul 14 10:40 rng_state_12.pth\r\n 18K Jul 14 10:40 rng_state_13.pth\r\n 18K Jul 14 10:40 rng_state_14.pth\r\n 18K Jul 14 10:40 rng_state_15.pth\r\n 18K Jul 14 10:40 rng_state_1.pth\r\n 18K Jul 14 10:40 rng_state_2.pth\r\n 18K Jul 14 10:40 rng_state_3.pth\r\n 18K Jul 14 10:40 rng_state_4.pth\r\n 18K Jul 14 10:40 rng_state_5.pth\r\n 18K Jul 14 10:40 rng_state_6.pth\r\n 18K Jul 14 10:40 rng_state_7.pth\r\n 18K Jul 14 10:40 rng_state_8.pth\r\n 18K Jul 14 10:40 rng_state_9.pth\r\n 627 Jul 14 10:40 scheduler.pt\r\n 435 Jul 14 10:40 special_tokens_map.json\r\n 745 Jul 14 10:40 tokenizer_config.json\r\n1.8M Jul 14 10:40 tokenizer.json\r\n489K Jul 14 10:40 tokenizer.model\r\n 11K Jul 14 10:40 trainer_state.json\r\n4.1K Jul 14 10:40 training_args.bin", "I'm not sure about `49G pytorch_model.bin`. It looks to be a duplicate.", "Hello, PR https://github.com/huggingface/transformers/pull/24926 should resolve the duplicate saving issue. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info transformers - installed from source accelerate - installed from source torch 2.0.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Finetune LLAMA 13B with accelerate launcher. Saving strategy "epoch". FSDP based training. ### Expected behavior I would be able to save the checkpoint. Now getting disk full error. Note that the disk initially had space. My code was working with transformers-4.28, accelerate 0.18 and torch 1.13. This error started after I moved to accelerate based launcher and upgraded the packages.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24766/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24765/comments
https://api.github.com/repos/huggingface/transformers/issues/24765/events
https://github.com/huggingface/transformers/pull/24765
1,800,271,239
PR_kwDOCUB6oc5VRfht
24,765
fix: "UserWarning: Creating a tensor from a list of numpy.ndarrays is…
{ "login": "liucw2012", "id": 743552, "node_id": "MDQ6VXNlcjc0MzU1Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/743552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liucw2012", "html_url": "https://github.com/liucw2012", "followers_url": "https://api.github.com/users/liucw2012/followers", "following_url": "https://api.github.com/users/liucw2012/following{/other_user}", "gists_url": "https://api.github.com/users/liucw2012/gists{/gist_id}", "starred_url": "https://api.github.com/users/liucw2012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liucw2012/subscriptions", "organizations_url": "https://api.github.com/users/liucw2012/orgs", "repos_url": "https://api.github.com/users/liucw2012/repos", "events_url": "https://api.github.com/users/liucw2012/events{/privacy}", "received_events_url": "https://api.github.com/users/liucw2012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
CONTRIBUTOR
null
fix "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor." # What does this PR do? reduce latency of codes below from 0.744675874710083s to 0.013312816619873047s. ``` st = time.time() inputs = tokenizer(query_list, return_tensors="pt" ,padding=True) print(time.time() - st) ``` Fixes #24764 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24765/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24765", "html_url": "https://github.com/huggingface/transformers/pull/24765", "diff_url": "https://github.com/huggingface/transformers/pull/24765.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24765.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24764/comments
https://api.github.com/repos/huggingface/transformers/issues/24764/events
https://github.com/huggingface/transformers/issues/24764
1,800,262,980
I_kwDOCUB6oc5rTdVE
24,764
UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow
{ "login": "liucw2012", "id": 743552, "node_id": "MDQ6VXNlcjc0MzU1Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/743552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liucw2012", "html_url": "https://github.com/liucw2012", "followers_url": "https://api.github.com/users/liucw2012/followers", "following_url": "https://api.github.com/users/liucw2012/following{/other_user}", "gists_url": "https://api.github.com/users/liucw2012/gists{/gist_id}", "starred_url": "https://api.github.com/users/liucw2012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liucw2012/subscriptions", "organizations_url": "https://api.github.com/users/liucw2012/orgs", "repos_url": "https://api.github.com/users/liucw2012/repos", "events_url": "https://api.github.com/users/liucw2012/events{/privacy}", "received_events_url": "https://api.github.com/users/liucw2012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@liucw2012 Thank you for the opening the issue and PR!\r\n\r\nI haven't check the PR in detail, but I am wondering what's the **total time** if you convert it to numpy first + numpy to torch tensor.\r\n\r\nAlso a remark: when giving a code snippet, please provide all the necessary variable value so it could run directly.\r\n(I know, the above one is simple enough, but it's a good thing to provide, thank you!)", "@ydshieh \r\nthe latency is 0.0147s if convert it to numpy.", "The speed looks very good. But see my comment in the PR :-)", "@ydshieh \r\ni have another pr . it's already went through a lot checks. but i don't know is it accepted or what should i do next. \r\ncould u give a review pls ?\r\nhttps://github.com/huggingface/transformers/pull/24772#issuecomment-1635168756", "Thank you @liucw2012 for the PR ❤️ \r\n\r\nThe CI in that PR is green 🚀 . However I would like to check a bit deeper what the (nested) inputs would be possible for that method, and if every case works and if none of the case will slow down.", "BTW, could you maybe provide what `query_list` you used.\r\n\r\n(it's always a nice thing to provide the actual definition for variables in a code snippet 🙏 )", "sorry, i was a lit bit busy recently. the query_list is just two chats, each one is almost 800 Chinese characters。all others examples is ok if u used tokenier with padding=true;", "It's fine. But see\r\n\r\nhttps://github.com/huggingface/transformers/pull/24772#discussion_r1265559341" ]
1,689
1,690
1,690
CONTRIBUTOR
null
### System Info System Info nvidia CUDA Version: 12.1, Driver Version: 525.105.17 transformers-cli env is - `transformers` version: 4.29.2 - Platform: Linux-4.19.87-netease6-1-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.1.0a0+fe05266 (True) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction when i use tokenizer(...,padding=True), i will get a warnning : "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor." and this function latency is 0.744675874710083 if query_list over 800 words . ``` st = time.time() inputs = tokenizer(query_list, return_tensors="pt" ,padding=True) print(time.time() - st) ``` ### Expected behavior Reduce latency and fix the UserWarning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24764/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24763/comments
https://api.github.com/repos/huggingface/transformers/issues/24763/events
https://github.com/huggingface/transformers/issues/24763
1,800,136,586
I_kwDOCUB6oc5rS-eK
24,763
sample code in for mobilevitv2 is not working
{ "login": "darwinharianto", "id": 44696192, "node_id": "MDQ6VXNlcjQ0Njk2MTky", "avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darwinharianto", "html_url": "https://github.com/darwinharianto", "followers_url": "https://api.github.com/users/darwinharianto/followers", "following_url": "https://api.github.com/users/darwinharianto/following{/other_user}", "gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}", "starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions", "organizations_url": "https://api.github.com/users/darwinharianto/orgs", "repos_url": "https://api.github.com/users/darwinharianto/repos", "events_url": "https://api.github.com/users/darwinharianto/events{/privacy}", "received_events_url": "https://api.github.com/users/darwinharianto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @darwinharianto \r\n\r\nThank you for opening the issue.\r\n\r\nCould you specify which code sample in the link is the exact one that has problem to run? Thank a lot!", "@darwinharianto Thanks for reporting this! It seems the issue is coming from the preprocessor config files on the hub for this checkpoint: [it points to a class which doesn't exist](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256/blob/6229cf24f57fe7210db6c6f1ad872a616b802679/preprocessor_config.json#L10). If I clone and modify the config file, the image processor will load correctly.\r\n\r\nI'll also add this modeling file to our doctests. " ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?:no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run sample code from [MobileViTV2ForImageClassification](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/mobilevitv2#transformers.MobileViTV2ForImageClassification) throws an error ValueError: Unrecognized image processor in apple/mobilevitv2-1.0-imagenet1k-256. Should have a `image_processor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: align, beit, bit, blip, blip-2, bridgetower, chinese_clip, clip, clipseg, conditional_detr, convnext, convnextv2, cvt, data2vec-vision, deformable_detr, deit, deta, detr, dinat, donut-swin, dpt, efficientformer, efficientnet, flava, focalnet, git, glpn, groupvit, imagegpt, instructblip, layoutlmv2, layoutlmv3, levit, mask2former, maskformer, mgp-str, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, nat, oneformer, owlvit, perceiver, pix2struct, poolformer, regnet, resnet, sam, segformer, swiftformer, swin, swin2sr, swinv2, table-transformer, timesformer, tvlt, upernet, van, videomae, vilt, vit, vit_hybrid, vit_mae, vit_msn, xclip, yolos ### Expected behavior sample code should not throw an error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24763/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24762/comments
https://api.github.com/repos/huggingface/transformers/issues/24762/events
https://github.com/huggingface/transformers/issues/24762
1,799,786,957
I_kwDOCUB6oc5rRpHN
24,762
Abnormally slow inference speed of quantized model?
{ "login": "JiancongWang", "id": 23178680, "node_id": "MDQ6VXNlcjIzMTc4Njgw", "avatar_url": "https://avatars.githubusercontent.com/u/23178680?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiancongWang", "html_url": "https://github.com/JiancongWang", "followers_url": "https://api.github.com/users/JiancongWang/followers", "following_url": "https://api.github.com/users/JiancongWang/following{/other_user}", "gists_url": "https://api.github.com/users/JiancongWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/JiancongWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JiancongWang/subscriptions", "organizations_url": "https://api.github.com/users/JiancongWang/orgs", "repos_url": "https://api.github.com/users/JiancongWang/repos", "events_url": "https://api.github.com/users/JiancongWang/events{/privacy}", "received_events_url": "https://api.github.com/users/JiancongWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Having the same issue here. Also want to ask if it is true that 8bit model is slower than fp16 during inference", "cc @younesbelkada for `4-bit` 🙏 ", "Hi everyone,\r\nSadly 8bit models are expected to be slower than fp16 models, see this: https://huggingface.co/blog/hf-bitsandbytes-integration#faster-inference-speed-for-smaller-models for reference\r\nbitsandbytes juste released a new version for faster inference (batch_size=1) https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0\r\nCan you try to upgrade bitsandbytes and run the benchmark again?", "> Hi everyone, Sadly 8bit models are expected to be slower than fp16 models, see this: https://huggingface.co/blog/hf-bitsandbytes-integration#faster-inference-speed-for-smaller-models for reference bitsandbytes juste released a new version for faster inference (batch_size=1) https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0 Can you try to upgrade bitsandbytes and run the benchmark again?\r\n\r\nHi younesbelkada, I get my bitsandbytes from directly pulling the github repository and compile from source yesterday. So it is already the latest 0.40 version that includes the 4 bits bs1 inference. ", "This is strange .. Can you report that to bitsandbytes library? 🙏 ", "> This is strange .. Can you report that to bitsandbytes library? 🙏\r\n\r\nSure. I will post on the issues of bitsandbytes library and link this issue. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I don't think 4/8 bits models should be faster than fp16 models.\r\nThey are designed to reduce the memory requirements, not the computation times." ]
1,689
1,702
1,692
NONE
null
### System Info To reproduce, I am running on CUDA 12.1/Driver 530 on an A100 with Ubuntu 20.04. I am running with the following packages accelerate 0.21.0.dev0 triton 2.0.0 transformers 4.31.0.dev0 torch 2.0.1 bitsandbytes 0.40.0.post3 Output from the transformers-cli env is - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-1015-aws-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi guys, I am trying out the load_in_4bit/load_in_8bit options to see if it speed up model inference. My understanding is that by quantizing the model, the inference speed will improve. However that's not the case. I use the following simple script to test out speed on a T5 XXL model for 4bits/8bits/fp32, and actually fp32 model runs the fastest (0.07sec), and the 4bits/8bits run almost in the same speed (0.1sec). So I want to double check. The script I am using to test the speed is here ``` from transformers import T5Tokenizer, T5ForConditionalGeneration import torch import pdb import gc import time import os device_id = 3 tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl") input_text = "translate English to German: How old are you?" * 20 input_ids = tokenizer(input_text, return_tensors="pt").input_ids[:, :128].to(f"cuda:{device_id}") # Actual model loading max_memory = f'{int(torch.cuda.mem_get_info()[0]/1024**3)-2}GB' model_4bits = T5ForConditionalGeneration.from_pretrained( "google/flan-t5-xxl", load_in_4bit=True, max_memory=max_memory) # model_8bits = T5ForConditionalGeneration.from_pretrained( # "google/flan-t5-xxl", # load_in_8bit=True, # max_memory=max_memory) # model_fp32 = T5ForConditionalGeneration.from_pretrained( # "google/flan-t5-xxl").to(f"cuda:{device_id}") def benchmark(model_name, model, input_ids): # warmup for _ in range(200): model.encoder(input_ids) torch.cuda.synchronize() with torch.no_grad(): start = time.time() for i in range(200): model.encoder(input_ids) torch.cuda.synchronize() end = time.time() print(f"{model_name} inference time is {(end-start)/200} sec") benchmark("model_4bits", model_4bits, input_ids) # model_4bits inference time is 0.1036052393913269 sec # benchmark("model_8bits", model_8bits, input_ids) # model_8bits inference time is 0.1006016504764557 sec # benchmark("model_fp32", model_fp32, input_ids) # model_fp32 inference time is 0.0731453263759613 sec ``` ### Expected behavior I expect the 4 bits model runs faster than the 8bits model, which in turn runs faster than the fp32 model. That's not the case I observe.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24761/comments
https://api.github.com/repos/huggingface/transformers/issues/24761/events
https://github.com/huggingface/transformers/pull/24761
1,799,776,758
PR_kwDOCUB6oc5VPz25
24,761
Unpin protobuf in docker file (for daily CI)
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? I forgot to unpin protobuf (in the docker file) in my previous PR #24599. Currently, CircleCI is testing against with protobuf 4, but daily CI is still v3. Let's move on on daily CI too.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24761/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24761", "html_url": "https://github.com/huggingface/transformers/pull/24761", "diff_url": "https://github.com/huggingface/transformers/pull/24761.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24761.patch", "merged_at": 1689112555000 }
https://api.github.com/repos/huggingface/transformers/issues/24760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24760/comments
https://api.github.com/repos/huggingface/transformers/issues/24760/events
https://github.com/huggingface/transformers/pull/24760
1,799,719,155
PR_kwDOCUB6oc5VPnTP
24,760
Allow existing configs to be registered
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24760). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? If a model has a class defined both on the Hub and locally, there is a clash appearing when loading it in the auto API and `trust_remote_code=True` coming from [this line](https://github.com/huggingface/transformers/blob/253d43d46d1291633fb21116b737f2bd8799d3da/src/transformers/models/auto/auto_factory.py#L421). This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24760/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24760", "html_url": "https://github.com/huggingface/transformers/pull/24760", "diff_url": "https://github.com/huggingface/transformers/pull/24760.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24760.patch", "merged_at": 1689108755000 }
https://api.github.com/repos/huggingface/transformers/issues/24759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24759/comments
https://api.github.com/repos/huggingface/transformers/issues/24759/events
https://github.com/huggingface/transformers/pull/24759
1,799,649,264
PR_kwDOCUB6oc5VPYDY
24,759
:bug: Handle empty gen_kwargs for seq2seq trainer prediction_step function
{ "login": "gkumbhat", "id": 10690477, "node_id": "MDQ6VXNlcjEwNjkwNDc3", "avatar_url": "https://avatars.githubusercontent.com/u/10690477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkumbhat", "html_url": "https://github.com/gkumbhat", "followers_url": "https://api.github.com/users/gkumbhat/followers", "following_url": "https://api.github.com/users/gkumbhat/following{/other_user}", "gists_url": "https://api.github.com/users/gkumbhat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkumbhat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkumbhat/subscriptions", "organizations_url": "https://api.github.com/users/gkumbhat/orgs", "repos_url": "https://api.github.com/users/gkumbhat/repos", "events_url": "https://api.github.com/users/gkumbhat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkumbhat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> All Trainers expose function `prediction_step`. For `Seq2SeqTrainer`, it seems like the `prediction_step` function is relying on availability of `_gen_kwargs` attribute, which will get set automatically, if `prediction_step` will get called from other functions like `predict` or `evaluate`. However, if someone calls `prediction_step` directly, then this field will not get set and currently will throw `AttributeError: 'Seq2SeqTrainer' object has no attribute '_gen_kwargs'`. In this PR, I am trying to resolve above issue by accepting `gen_kwargs` as an argument to `prediction_step` function, in addition to automatically get it from `self` if it has been set previously while falling back to empty `{}` in case its not set in either of those methods. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24759", "html_url": "https://github.com/huggingface/transformers/pull/24759", "diff_url": "https://github.com/huggingface/transformers/pull/24759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24759.patch", "merged_at": 1689108486000 }
https://api.github.com/repos/huggingface/transformers/issues/24758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24758/comments
https://api.github.com/repos/huggingface/transformers/issues/24758/events
https://github.com/huggingface/transformers/pull/24758
1,799,648,287
PR_kwDOCUB6oc5VPX1x
24,758
Fix lr scheduler not being reset on reruns
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24758). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This PR ensures that a new learning rate scheduler is created each time we rerun the `inner_training_loop`, so that if we have an lr such as `linear`, a new LR is generated based on the new batch size and step count. I don't believe a new optimizer is needed here to be recreated, just the scheduler as adjusting the bs and lr *shouldn't* matter? But if we think it is we can go ahead and add a reset to the optimizer as well. Fixes # (issue) The true solution to https://github.com/huggingface/transformers/pull/24521 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24758/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24758", "html_url": "https://github.com/huggingface/transformers/pull/24758", "diff_url": "https://github.com/huggingface/transformers/pull/24758.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24758.patch", "merged_at": 1689107825000 }
https://api.github.com/repos/huggingface/transformers/issues/24757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24757/comments
https://api.github.com/repos/huggingface/transformers/issues/24757/events
https://github.com/huggingface/transformers/pull/24757
1,799,624,563
PR_kwDOCUB6oc5VPTWK
24,757
Replacement of 20 asserts with exceptions
{ "login": "Baukebrenninkmeijer", "id": 9077462, "node_id": "MDQ6VXNlcjkwNzc0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/9077462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Baukebrenninkmeijer", "html_url": "https://github.com/Baukebrenninkmeijer", "followers_url": "https://api.github.com/users/Baukebrenninkmeijer/followers", "following_url": "https://api.github.com/users/Baukebrenninkmeijer/following{/other_user}", "gists_url": "https://api.github.com/users/Baukebrenninkmeijer/gists{/gist_id}", "starred_url": "https://api.github.com/users/Baukebrenninkmeijer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Baukebrenninkmeijer/subscriptions", "organizations_url": "https://api.github.com/users/Baukebrenninkmeijer/orgs", "repos_url": "https://api.github.com/users/Baukebrenninkmeijer/repos", "events_url": "https://api.github.com/users/Baukebrenninkmeijer/events{/privacy}", "received_events_url": "https://api.github.com/users/Baukebrenninkmeijer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You will need to put your PR out of draft mode for us to be able to merge it :-)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Replaces 20 assertions with relevant errors, mostly ValueError. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of #12789 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @JuheonChu @sgugger I saw both of you tagged in above issue. Please have a look when you have time! :) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24757/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24757", "html_url": "https://github.com/huggingface/transformers/pull/24757", "diff_url": "https://github.com/huggingface/transformers/pull/24757.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24757.patch", "merged_at": 1689162309000 }
https://api.github.com/repos/huggingface/transformers/issues/24756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24756/comments
https://api.github.com/repos/huggingface/transformers/issues/24756/events
https://github.com/huggingface/transformers/pull/24756
1,799,568,712
PR_kwDOCUB6oc5VPIeV
24,756
Fix eval_accumulation_steps leading to incorrect metrics
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Uses the logic in the `GradientState` to know when we've reached the end of training and should sync the gradients. Doing so relies on [this](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L862-L869) code, which already checks for the case of if a dataloader has no length and works properly Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24734 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24756", "html_url": "https://github.com/huggingface/transformers/pull/24756", "diff_url": "https://github.com/huggingface/transformers/pull/24756.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24756.patch", "merged_at": 1689155352000 }
https://api.github.com/repos/huggingface/transformers/issues/24755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24755/comments
https://api.github.com/repos/huggingface/transformers/issues/24755/events
https://github.com/huggingface/transformers/pull/24755
1,799,538,082
PR_kwDOCUB6oc5VPBv5
24,755
gpt-bigcode: avoid `zero_` to support Core ML
{ "login": "pcuenca", "id": 1177582, "node_id": "MDQ6VXNlcjExNzc1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcuenca", "html_url": "https://github.com/pcuenca", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "repos_url": "https://api.github.com/users/pcuenca/repos", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note: to fully test conversion of `gpt-bigcode` models, the following `coremltools` PRs (or equivalent workarounds) need to be applied as well: https://github.com/apple/coremltools/pull/1910, https://github.com/apple/coremltools/pull/1911.", "_The documentation is not available anymore as the PR was closed or merged._", "@younesbelkada I think this is already fixed in PT.\r\nShould we just drop this logic?\r\nOpened a PR: https://github.com/huggingface/transformers/pull/24768 which supercedes this one", "We support versions of PyTorch from 1.10 and onward, so we need to keep the workaround for the bug.", "Merging to unblock @pcuenca , let's maybe address @jlamypoirier 's comments in a follow up PR ! " ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do? In-place `zero_` is not supported by the Core ML conversion process. This PR replaces it with `zeros_like` so conversion can proceed. The change only affects a workaround for a PyTorch bug on the `cpu` device. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada, @loubnabnl, @jlamypoirier
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24755/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24755/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24755", "html_url": "https://github.com/huggingface/transformers/pull/24755", "diff_url": "https://github.com/huggingface/transformers/pull/24755.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24755.patch", "merged_at": 1689172706000 }
https://api.github.com/repos/huggingface/transformers/issues/24754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24754/comments
https://api.github.com/repos/huggingface/transformers/issues/24754/events
https://github.com/huggingface/transformers/pull/24754
1,799,479,167
PR_kwDOCUB6oc5VO1DS
24,754
📝 Add parameter names to code examples in README
{ "login": "kadirnar", "id": 36204372, "node_id": "MDQ6VXNlcjM2MjA0Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kadirnar", "html_url": "https://github.com/kadirnar", "followers_url": "https://api.github.com/users/kadirnar/followers", "following_url": "https://api.github.com/users/kadirnar/following{/other_user}", "gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}", "starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions", "organizations_url": "https://api.github.com/users/kadirnar/orgs", "repos_url": "https://api.github.com/users/kadirnar/repos", "events_url": "https://api.github.com/users/kadirnar/events{/privacy}", "received_events_url": "https://api.github.com/users/kadirnar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks for your PR. I don't believe this makes those basic examples easier to understand, however, so I would leave things as is.\r\n\r\nHi @sgugger, thank you for your feedback. It might be nice to specify the 'model' parameter of the pipeline function. This is how I will update all the tasks in the https://huggingface.co/tasks section.\r\n\r\nExample(task=depth-estimation):\r\nhuggingface/hub-docs#890\r\n\r\nIf you want, I can close this pull request.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
CONTRIBUTOR
null
@sgugger, @stevhliu and @MKhalusova Updated the code examples in the README file to include parameter names for better clarity and readability. Previously, the examples were missing parameter names, which could lead to confusion. By adding the parameter names, it becomes easier for users to understand and utilize the code correctly. Thanks! 🙌
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24754/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24754", "html_url": "https://github.com/huggingface/transformers/pull/24754", "diff_url": "https://github.com/huggingface/transformers/pull/24754.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24754.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24753/comments
https://api.github.com/repos/huggingface/transformers/issues/24753/events
https://github.com/huggingface/transformers/pull/24753
1,799,471,323
PR_kwDOCUB6oc5VOzSh
24,753
Skip some slow tests for doctesting in PRs (Circle)CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi!\r\n\r\n- This file `.circleci/create_circleci_config.py` is running on CircleCI. See #23245\r\n- If we don't accept the change is this PR: we can simply skip some (doc) test file that takes longer time to run)\r\n\r\n- This PR addresses the per test timeout issue while there is a global timeout: motivated by #23318.\r\n - Usually I will try to respect the 120s timeout (per test). But **since in this doctests on CircleCI (where we have 1200s global timeout), I think overall it's fine (?)**\r\n\r\n", "Can we skip the longest tests? We are trying to rationalize the costs of circleCI so want to make sure we don't run something too beefy on it, especially since all those tests are run on GPU nightly.", "> Can we skip the longest tests? \r\n\r\nSure! We will need to have two lists: the exiting `utils/documentation_tests` and a new `slow_doctest_to_ignore`.\r\n\r\n(I am not sure how to mark some doctest file as slow doctest as we have done for usual tests)", "That works for me!", "Me too! ", "The latest version should skip the slow doctests.\r\n\r\n(I don't really run the full doctest on CircleCI to get all of them: we just update the list if we see some files being slow by doctest)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24753). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? For doctest: each`.md` file is seen as a single test by pytest, and some takes more time (say `task_summary.md`) than others. Let's allow a 5 minute timeout per test. For the job step, it still has the total 1200s timeout.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24753/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24753", "html_url": "https://github.com/huggingface/transformers/pull/24753", "diff_url": "https://github.com/huggingface/transformers/pull/24753.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24753.patch", "merged_at": 1689106095000 }
https://api.github.com/repos/huggingface/transformers/issues/24752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24752/comments
https://api.github.com/repos/huggingface/transformers/issues/24752/events
https://github.com/huggingface/transformers/issues/24752
1,799,394,908
I_kwDOCUB6oc5rQJZc
24,752
Training stage error with batch mode on conditional generation for multimodal models
{ "login": "cramraj8", "id": 8756708, "node_id": "MDQ6VXNlcjg3NTY3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8756708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cramraj8", "html_url": "https://github.com/cramraj8", "followers_url": "https://api.github.com/users/cramraj8/followers", "following_url": "https://api.github.com/users/cramraj8/following{/other_user}", "gists_url": "https://api.github.com/users/cramraj8/gists{/gist_id}", "starred_url": "https://api.github.com/users/cramraj8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cramraj8/subscriptions", "organizations_url": "https://api.github.com/users/cramraj8/orgs", "repos_url": "https://api.github.com/users/cramraj8/repos", "events_url": "https://api.github.com/users/cramraj8/events{/privacy}", "received_events_url": "https://api.github.com/users/cramraj8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is an issue for[ this merged PR](https://github.com/huggingface/transformers/pull/22424)", "Hey @cramraj8 👋 \r\n\r\nThe output shape is correct -- the sequence length of the logits is as long as the input sequence length. \r\n\r\nPerhaps you're interested in using the model to complete the prefix, in which case you'll need to use generative tools like `.generate()` or the `Seq2SeqTrainer`. Have a look at the documentation for these terms :)", "Hi @gante , Thanks for the reply. Yes, I tried using `.generate() `and `Seq2SeqTrainer` to do the training. But if I provide prefix `input_ids` and parse them as `decoder_input_ids ` to the `.generate()` function, the Trainer throws error during the training stage claiming the loss calculation end up getting different shape for prediction and label. That is why I looked up `model.decode() `function. It is true that output sequence length must be same as input sequence length for the `decode() `function. But for the prefix completion task, how do we adapt it for `generate`() and `decode`() function.\r\n\r\nI do have this question - I am interested in conditional decoding where the a multimodal completes a prefix or a prompt. Do we train the model with complete text (prefix and completion text) during the training stage, and only provide prefix as an additional input during inference stage ?\r\n\r\nOr we can still provide prefix as additional input to both training and inference stages ?", "@cramraj8 I see, now I understand what your goal is :) \r\n\r\nAFAIK We do not support passing a prefix at train time, I'm afraid you'll have to build a custom solution. In any case, it must be based on `.generate()` and `Seq2SeqTrainer`, as your task relies on auto-regressive text generation!\r\n\r\nYou can also train without a prefix at all, even if you expect a certain prefix (or set of prefixes) at inference time. For instance, Whisper does this (see [section 2.3 of its paper](https://arxiv.org/pdf/2212.04356.pdf)). At train time, treat the prefixes as variables. At inference time, starts generating from the prefix.", "Thank you! This is helpful. Looks like not doing anything during train time, and applying prefix during test time works better. ", "Hi @gante, during my implementation I found that I am getting an error of device mismatch at the following location.\r\n```\r\nFile \"/mnt/azureml/cr/.../exe/wd/trainer_seq2seq.py\", line 296, in prediction_step\r\n generated_tokens = self.model.generate(**inputs, **gen_kwargs)\r\n File \"/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1328, in generate\r\n input_ids, model_kwargs = self._prepare_decoder_input_ids_for_generation(\r\n File \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/generation/utils.py\", line 676, in _prepare_decoder_input_ids_for_generation\r\n decoder_input_ids = torch.cat([decoder_input_ids_start, decoder_input_ids], dim=-1)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:4 and cpu! (when checking argument for argument tensors in method wrapper_cat)\r\n```\r\n\r\nThis happens when I do `Seq2SeqTrainer `evaluation with prefix (`decoder_input_ids`) provided. In CPU machine, the code works perfectly fine. But at the GPU presence, it was throwing this device error. 
After debugging I found that the `decoder_input_ids` were never placed in the corresponding cuda device IDs even though other input values were placed in GPU. I did the following change, and it worked fine now. I am not sure if it's a bug or not, but I am bringing this up to your attention if anyone face similar issues in future. \r\n\r\nIn addition, I had to overwrite Seq2SeqTrainer to separate `decoder_input_ids` from `inputs `and assign it with `self._gen_kwargs` so that the code works. Otherwise, I was getting complex errors.\r\n\r\n```\r\ndecoder_input_ids = inputs.pop(\"decoder_input_ids\")\r\nself._gen_kwargs[\"decoder_input_ids\"] = decoder_input_ids\r\n```\r\n\r\n(adding @wgx998877 for reference)\r\n", "@cramraj8 would you be able to share a short reproducer? (like the one you shared at the top)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @gante @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import VisionEncoderDecoderModel, TrOCRProcessor loc = "microsoft/trocr-small-handwritten" processor = TrOCRProcessor.from_pretrained(loc) model = VisionEncoderDecoderModel.from_pretrained(loc) decoder_input_ids = torch.tensor([[0, 7344, 2159, 12, 345], [0, 7344, 2159, 12, 346]]) # a batch_size of 2 Prefixes for each examples decoder_attention_mask = None encoder_hidden_states = torch.randn(2, 578, 384) # a random encoder input to the decoder encoder_attention_mask = None decoder_inputs_embeds = None output_attentions = None output_hidden_states = None use_cache = None past_key_values = None return_dict = True kwargs_decoder = {} decoder_outputs = model.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, inputs_embeds=decoder_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache, past_key_values=past_key_values, return_dict=return_dict, **kwargs_decoder, ) decoder_outputs['logits'].shape, decoder_input_ids.shape ``` ### Expected behavior The decoder default max_target_length is set to 128. So I expect the logits with shape [2, 128, 64044], where batch_size is 2. But I only get shape of [2, 5, 64044], where prefix length is 5.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24752/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24751/comments
https://api.github.com/repos/huggingface/transformers/issues/24751/events
https://github.com/huggingface/transformers/issues/24751
1,799,279,931
I_kwDOCUB6oc5rPtU7
24,751
Stalled loop during prediction with deepspeed
{ "login": "avivbrokman", "id": 35349273, "node_id": "MDQ6VXNlcjM1MzQ5Mjcz", "avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avivbrokman", "html_url": "https://github.com/avivbrokman", "followers_url": "https://api.github.com/users/avivbrokman/followers", "following_url": "https://api.github.com/users/avivbrokman/following{/other_user}", "gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}", "starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions", "organizations_url": "https://api.github.com/users/avivbrokman/orgs", "repos_url": "https://api.github.com/users/avivbrokman/repos", "events_url": "https://api.github.com/users/avivbrokman/events{/privacy}", "received_events_url": "https://api.github.com/users/avivbrokman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not really sure, but let me tag @pacman100 (?) and see if he knows this is more an issue in `transformers/accelerate` or should go to DeepSpeed repo issue page.", "I wasn't sure which repo it belonged in either. I couldn't seem to locate the source of the bug. Based on all the print statements I added, it looked like it had to be in the `dataloader`, but then I added some code just to iterate through the `dataloader` without doing anything, and that worked without issue.", "Hello @avivbrokman, so you mean the issue is not with dataloaders?\r\n", "@pacman100 I couldn't figure it out—I think this is beyond my coding skill level. I spent a few days trying to locate the source of the bug, but failed. Normally, when I get an error, the traceback helps me find the problem. Here, there's no error message, so my (probably highly inadvisable) solution was to add print statements in between every single line in the source code so I can see the last line that was executed. \r\n\r\nWhen I add `print(f'finished step {step}')` at line 3179 of `trainer.py` with one less level of indentation than line 3178, it prints, but then the `print(f'beginning step {step}')` at line 3114 doesn't execute for a second batch. This led me to believe that the issue was with the `dataloader`. So I inserted the following code at line 3112:\r\n\r\n```\r\nfor step, inputs in enumerate(dataloader):\r\n print(step)\r\n```\r\n\r\nThis fully executed, which led me to to believe the problem is not the `dataloader`. At this point, I reached the limits of my understanding, and submitted my bug report.", "Thank you. This isn't a deepspeed issue as this also happens on just using DDP", "> for step, inputs in enumerate(dataloader):\r\n> print(step)\r\n> This fully executed, which led me to to believe the problem is not the dataloader. At this point, I reached the limits of my understanding, and submitted my bug report.\r\n\r\nThis doesn't print for 2nd step for me", "Seems to be related to dataloader, cc @muellerzr, post completion of 1 step, it hangs when using DDP:\r\n\r\n<img width=\"1502\" alt=\"Screenshot 2023-07-12 at 5 02 43 PM\" src=\"https://github.com/huggingface/transformers/assets/13534540/dd55966a-0bd0-4004-b72e-be9069a9535b\">\r\n\r\n\r\nCommand:\r\n```\r\naccelerate launch --num_processes 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 64 --num_train_epochs 1 --dataset_name wmt16 --dataset_config \"ro-en\" --source_lang en --target_lang ro --do_predict --max_predict_samples 64 --predict_with_generate\r\n```\r\n\r\nMain branch of transformers and accelerate. @muellerzr, any ideas about what might be going wrong?\r\n", "Hmmm @pacman100 can you grab the absolutely latest version of main on Accelerate and try again? (Like within the last 5 minutes)", "Hello, just updated the accelerate to main and still the issue persists", "@muellerzr Is the #24775 PR just for the Trainer? I am hitting this issue in my eval loop, but with a custom loop, not the trainer. I am using zero-3.", "@init-random yes, we'd need a reproducer to know what's going on with your custom loop, but in general that's the correct solution to do if you're mimicking what the trainer should be doing. (And it's not directly deepspeed related)", "@muellerzr OK, thank you! I'll look into it and open a new issue, if need be." ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @pacman100 (b/c deepspeed-only problem) @sgugger (b/c this is a documentation example script) ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am running `examples/pytorch/translation/run_translation.py` on a machine with 4 V100's. To replicate my issue, run deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json / --model_name_or_path t5-small / --per_device_train_batch_size 1 / --output_dir output_dir / --overwrite_output_dir / --fp16 / --do_train / --max_train_samples 64 / --num_train_epochs 1 / --dataset_name wmt16 / --dataset_config "ro-en" / --source_lang en / --target_lang ro / --do_predict / --max_predict_samples 64 / --predict_with_generate ### Expected behavior I would expect the script to fully run using `deepspeed`, not just without it. Right now, it outputs an warning message ``` Invalidate trace cache @ step 0: expected module 2, but got module 0 Invalidate trace cache @ step 1: expected module 116, but got module 2 ``` and gets stuck during the `.evaluation_loop()` method. I added some printing steps to the code, and it appeared that the code was stalling after the first `.prediction_step()`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24751/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24750/comments
https://api.github.com/repos/huggingface/transformers/issues/24750/events
https://github.com/huggingface/transformers/pull/24750
1,799,242,863
PR_kwDOCUB6oc5VOBpN
24,750
Add PEFT support directly in transformers pipeline
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24750). All of your documentation changes will be reflected on that endpoint.", "Thanks ! If the PoC gets validated by @Narsil I can extend it to more tasks (seq2seq generation, seq-cls) and add nice tests in the current testing suite", "So far `self.check_model_type` is removed, and I know @Narsil suggests it's ok.\r\n\r\nI think it could be simply skipped by checking if the model is an instance of PEFT model, and we don't really need to remove it. Leave @sgugger to make the final call though.", "Sounds good to me!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing as https://github.com/huggingface/transformers/pull/25077 should allow peft models to work out of the box for transformers pipeline" ]
1,689
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Replaces https://github.com/huggingface/peft/pull/585 After discussing with @LysandreJik I made this PoC PR to see whether it is simpler to add PEFT support directly in transformers and centralize all sort of pipelines in transformers pipeline. In the future, we can concentrate the efforts on diffusers side to add PEFT support there as well. Do not merge before the next PEFT release Currently the API looks as follows: ```python from transformers import pipeline peft_model_id = "ybelkada/opt-350m-lora" pipe = pipeline("text-generation", peft_model_id) print(pipe("hello")) pipe = pipeline("text-generation", peft_model_id, peft_model_kwargs={"adapter_name": "default"}) print(pipe("hello")) local_peft_pipeline_path = "./test_lora_pipeline" pipe.model.save_pretrained(local_peft_pipeline_path) pipe = pipeline("text-generation", local_peft_pipeline_path) print(pipe("hello")) ``` ## TODOS: - [ ] seq2seq generation - [ ] seq classification - [ ] Add check with task_type - [ ] add clean tests - [ ] update docs
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24750/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24750/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24750", "html_url": "https://github.com/huggingface/transformers/pull/24750", "diff_url": "https://github.com/huggingface/transformers/pull/24750.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24750.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24749/comments
https://api.github.com/repos/huggingface/transformers/issues/24749/events
https://github.com/huggingface/transformers/pull/24749
1,799,201,320
PR_kwDOCUB6oc5VN4Zw
24,749
Skip keys not in the state dict when finding mismatched weights
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? When looping through the keys in `find_mismatched_weights`, we loop through all the `loaded_keys` which are all the keys in the checkpoint. If the checkpoint is sharded, the `state_dict` passed won't contain all those keys, only a subset of them, so we need to skip the keys not present in the `state_dict`. Fixes #24704
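A minimal sketch of the logic described above, with a simplified signature (this is not the exact `transformers` implementation): keys belonging to other shards are simply skipped before comparing shapes.

```python
def find_mismatched_keys(state_dict, model_state_dict, loaded_keys):
    """Return (key, checkpoint_shape, model_shape) for weights whose shapes differ."""
    mismatched = []
    for key in loaded_keys:
        # With a sharded checkpoint, `state_dict` only contains the keys of the
        # current shard, so keys from other shards must be skipped here.
        if key not in state_dict:
            continue
        if key in model_state_dict and state_dict[key].shape != model_state_dict[key].shape:
            mismatched.append((key, state_dict[key].shape, model_state_dict[key].shape))
    return mismatched
```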
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24749/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24749", "html_url": "https://github.com/huggingface/transformers/pull/24749", "diff_url": "https://github.com/huggingface/transformers/pull/24749.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24749.patch", "merged_at": 1689093622000 }
https://api.github.com/repos/huggingface/transformers/issues/24748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24748/comments
https://api.github.com/repos/huggingface/transformers/issues/24748/events
https://github.com/huggingface/transformers/pull/24748
1,799,163,198
PR_kwDOCUB6oc5VNv-i
24,748
Docs: Added benchmarks for `torch.compile()` for vision models
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sayakpaul can you check this out when you get time? after that, we can merge. ", "@sayakpaul I added more models.", "I thought I committed more models but apparently haven't committed the work, just did now", "@sayakpaul @amyeroberts @stevhliu I added more models and visualizations, can you give another round of review?", "I made a mistake when doing visualizations for T4 batch=4 ViT, I did replace the image in HF documentation-images but since it's once uploaded for md preview on GitHub, GitHub doesn't update that (so everything's actually fine, it's just GitHub)", "_Note_: this PR will be stale until the benchmarks are improved. ", "Hey @amyeroberts I added `nightly` + `nightly`/`reduce-overhead` comparisons." ]
1,689
1,691
1,691
CONTRIBUTOR
null
As discussed with @amyeroberts & @sayakpaul, this PR adds `torch.compile()` benchmarks to our documentation. I mainly benchmarked for latency; I can add throughput as well. I built the docs with doc-builder locally, and they look like the screenshots below. <img width="928" alt="Screenshot 2023-07-11 at 18 05 42" src="https://github.com/huggingface/transformers/assets/53175384/fecf12e0-750b-4085-8224-2fe91705bbfd"> <img width="529" alt="Screenshot 2023-07-11 at 18 04 39" src="https://github.com/huggingface/transformers/assets/53175384/abe6a3f5-0e20-4d76-9e89-8724c7b1cb45">
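For context, a rough sketch of how such a latency comparison can be measured (assumed model, GPU setup, and settings; not the exact script behind the reported numbers):

```python
import time

import torch
from transformers import AutoModelForImageClassification

device = "cuda"
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to(device).eval()
compiled_model = torch.compile(model)
pixel_values = torch.randn(1, 3, 224, 224, device=device)


def mean_latency_ms(m, steps=100, warmup=10):
    with torch.no_grad():
        for _ in range(warmup):  # warmup also triggers torch.compile's compilation
            m(pixel_values)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(steps):
            m(pixel_values)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / steps * 1000


print(f"eager:    {mean_latency_ms(model):.2f} ms")
print(f"compiled: {mean_latency_ms(compiled_model):.2f} ms")
```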
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24748/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24748/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24748", "html_url": "https://github.com/huggingface/transformers/pull/24748", "diff_url": "https://github.com/huggingface/transformers/pull/24748.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24748.patch", "merged_at": 1691425124000 }
https://api.github.com/repos/huggingface/transformers/issues/24747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24747/comments
https://api.github.com/repos/huggingface/transformers/issues/24747/events
https://github.com/huggingface/transformers/issues/24747
1,798,922,179
I_kwDOCUB6oc5rOV_D
24,747
`device_map="auto"` support multi-node
{ "login": "guozhiyao", "id": 21999339, "node_id": "MDQ6VXNlcjIxOTk5MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/21999339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guozhiyao", "html_url": "https://github.com/guozhiyao", "followers_url": "https://api.github.com/users/guozhiyao/followers", "following_url": "https://api.github.com/users/guozhiyao/following{/other_user}", "gists_url": "https://api.github.com/users/guozhiyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/guozhiyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guozhiyao/subscriptions", "organizations_url": "https://api.github.com/users/guozhiyao/orgs", "repos_url": "https://api.github.com/users/guozhiyao/repos", "events_url": "https://api.github.com/users/guozhiyao/events{/privacy}", "received_events_url": "https://api.github.com/users/guozhiyao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada \r\n\r\nHi @guozhiyao, \r\n\r\nThanks for raising this issue. At the moment, there isn't enough information for us to be able to help you. Could you specify what you mean by \"doesn't work\"? \r\n\r\nAs a side note, I don't think you can do `device_map=auto\"` and then `.half()`, instead you can pass in a flag to `from_pretrained` to specify the precision e.g. `torch_dtype=torch.float16` or `load_in_8bit=True`. @younesbelkada can confirm :) \r\n\r\n\r\n\r\n", "hi @guozhiyao \r\nthanks for raising this up ! \r\nfirstly as @amyeroberts suggested, the canonical way of loading a model with a specific dtype (in your case `half`=`torch.float16`) is by passing `torch_dtype=torch.float16` thus you avoid any unexpected issue you may encounter\r\n\r\nRegarding your second question, I don't think this is supported by `device_map=\"auto\"` for inference. Usually the multi-node paradigm is useful for training, where you have an entire training process running independently on each node. I think accelerate supports multi-node training (you can select mutli node training when running `accelerate config` and we have made some training process work under multi-node regime using accelerate internally). \r\n\r\nHowever I doubt that you can run multi-node inference out of the box with `device_map='auto'` as this is intended only for single node (single / multi GPU or CPU only). In multi-node setting each process will run independently `AutoModel.from_pretrained(model_dir, device_map=\"auto\", trust_remote_code=True).half()` thus the model will not be shared across both processes.\r\nI am also unsure about the benefits of such protocol - the only case it might be interesting to see this would be if someone wants to fit a model that can't fit in more than a node (more than 8xA100 80GB at most)\r\n\r\nWould like also to hear from @sgugger or @muellerzr , in case I missed something I am not aware of.", "I can confirm that multi-node is not supported by `device_map=\"auto\"` :-) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Not everyone has access to 8xA100, so I think there is benefit to support multi-node inference (e.g. if someone has access to multi-node V100s setup but not A100, then this feature would allow them to try a big model like Llama-2-80B on their V100s. Correct me if I'm wrong, but at present, there doesn't appear to be a straightforward method for running inference with models of the scale of Llama-2-80B on multi-node V100 setups)", "@runopti just to be clear, our language here is multinode == multiple *computers*, not multinode == multiple GPUs on the *same computer*, correct?", "@muellerzr Yes, what I mean is multinode == multiple computers. ", "I'd recommend opening a feature request on the Accelerate github: https://github.com/huggingface/accelerate" ]
1,689
1,692
1,692
NONE
null
### Feature request `AutoModel.from_pretrained(model_dir, device_map="auto", trust_remote_code=True).half()` I want to load a huge model across multiple nodes for inference, for example 4 nodes with 1 GPU per node, but I do not know how to do it. `device_map="auto"` seems to only work within a single node. ### Motivation I want to test long-context perplexity. When I increase the context length, the GPU memory usage increases too, so I need more nodes to run the inference. ### Your contribution Not yet.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24747/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24746/comments
https://api.github.com/repos/huggingface/transformers/issues/24746/events
https://github.com/huggingface/transformers/issues/24746
1,798,736,892
I_kwDOCUB6oc5rNov8
24,746
CPM_BEE model should support inference from a local model_path without trust_remote_code=True
{ "login": "fxrhhx", "id": 42543089, "node_id": "MDQ6VXNlcjQyNTQzMDg5", "avatar_url": "https://avatars.githubusercontent.com/u/42543089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxrhhx", "html_url": "https://github.com/fxrhhx", "followers_url": "https://api.github.com/users/fxrhhx/followers", "following_url": "https://api.github.com/users/fxrhhx/following{/other_user}", "gists_url": "https://api.github.com/users/fxrhhx/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxrhhx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxrhhx/subscriptions", "organizations_url": "https://api.github.com/users/fxrhhx/orgs", "repos_url": "https://api.github.com/users/fxrhhx/repos", "events_url": "https://api.github.com/users/fxrhhx/events{/privacy}", "received_events_url": "https://api.github.com/users/fxrhhx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @fxrhhx, \r\n\r\n`cache_dir` is the [directory where the checkpoint is located](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModel.from_pretrained.cache_dir).\r\n\r\n`pretrained_model_name_or_path` is either [the checkpoint name, or the full path to the directory containing weights & configs](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModel.from_pretrained.pretrained_model_name_or_path).\r\n\r\nIn this case, following your example, this should work: \r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\ncache_dir=\"/home/users/fanxingran/workspace/workspace/cpm_bee_cpu\"\r\ncheckpoint = \"cpm-bee-10b\"\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, cache_dir=cache_dir)\r\n\r\n# Pass in the full model path\r\nmodel_path = \"/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_path)\r\n```\r\n", "> Thank you for your answer!! I found use the code like this, is also dont't work, will still download something is .cache/huggface and use it, the model_path which i give containing the weights and configs, i couldn't understant why\r\n> \r\n`from transformers import AutoModelForCausalLM, AutoTokenizer`\r\n`model_path=\"/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b\"`\r\n`tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)`\r\n`model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).cpu() `\r\n`result = model.generate({\"input\": \"今天天气不错,\", \"<ans>\": \"\"}, tokenizer)`\r\n`print(result)`\r\n\r\n", "\r\n![9f353864f24ba9a6688e9ba99280747e](https://github.com/huggingface/transformers/assets/42543089/97396528-6f7b-4739-99ec-82f45a6176fc)\r\n\r\n", "![01b809b730e290bb1b536122829191c5](https://github.com/huggingface/transformers/assets/42543089/646f00c9-1a69-401c-92f4-e85799ff3fb8)\r\n", "@fxrhhx Without knowing what's in the model path, it won't be possible to debug this. Could you run:\r\n\r\n```\r\nls /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b\r\n```\r\n\r\nAnd\r\n```\r\nless /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b/config.json\r\n```\r\n?\r\n\r\nJust from this, it looks like the modeling config is pointing to the model on the hub: https://huggingface.co/openbmb/cpm-bee-10b/blob/4b1905b3195203330c462ed367d97c3361288937/config.json#L3\r\n", "> @fxrhhx Without knowing what's in the model path, it won't be possible to debug this. Could you run:\r\n> \r\n> ```\r\n> ls /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b\r\n> ```\r\n> \r\n> And\r\n> \r\n> ```\r\n> less /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b/config.json\r\n> ```\r\n> \r\n> ?\r\n> \r\n> Just from this, it looks like the modeling config is pointing to the model on the hub: https://huggingface.co/openbmb/cpm-bee-10b/blob/4b1905b3195203330c462ed367d97c3361288937/config.json#L3\r\n\r\n![63e85e8609d966a4ac2704b71c79f710](https://github.com/huggingface/transformers/assets/42543089/eeef3212-6b4c-4c5d-854c-cc82c374c4f2)\r\n![3c53e82442b8b7aa8b2139c5b9ba9156](https://github.com/huggingface/transformers/assets/42543089/ffecd4aa-f344-4f44-a71a-cd9a471a4bb3)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,694
1,694
NONE
null
### System Info I want to use my local model path to run inference with openbmb/cpm-bee-10b, like model_path="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b" tokenizer = AutoTokenizer.from_pretrained(model_path, cache_dir=model_path, subfolder="scheduler", trust_remote_code=False) model = AutoModelForCausalLM.from_pretrained(model_path, cache_dir=model_path, trust_remote_code=False).cpu() # but it couldn't work; if I use # tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-10b", trust_remote_code=True) # model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-10b", trust_remote_code=True).cpu() it downloads something into .cache/huggingface every time ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForCausalLM, AutoTokenizer model_path="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b" cache_dir=model_path tokenizer = AutoTokenizer.from_pretrained(model_path, cache_dir=model_path, subfolder="scheduler", trust_remote_code=False) model = AutoModelForCausalLM.from_pretrained(model_path, cache_dir=model_path, trust_remote_code=False).cpu() # result = model.generate({"input": "今天天气不错,", "<ans>": ""}, tokenizer) print(result) ### Expected behavior I want to run inference with cpm-bee-10b from a local model_path so that I can modify the model code (being able to change the code is very important); when the code is loaded from .cache, an update will overwrite it and my changes will be deleted.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24746/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24745/comments
https://api.github.com/repos/huggingface/transformers/issues/24745/events
https://github.com/huggingface/transformers/pull/24745
1,798,336,665
PR_kwDOCUB6oc5VK5yb
24,745
Add CLVP
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc: @sanchit-gandhi, @dg845", "Very cool @susnato! Let me know if you have any questions / queries - more than happy to lend a hand here!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24745). All of your documentation changes will be reflected on that endpoint.", "Hi @sanchit-gandhi, this PR is ready for review!\r\n\r\nSome notes related to this design I wanted to mention - \r\n1. Although CLIP has both tokenizer and Image Processor, in tortoise the text is encoded and pushed into both the autoregressive model and the CLVP model so I think its better to have only one tokenizer(for tortoise) and encoding the text one time and pushing it to both of the models rather than defining a seperate tokenization_clvp.py.\r\n2. Instead of processing Images and checking which image fits the description(from text) best, CLVP compares Speech Token Candidates and text. The speech tokens come from the output of the auto-regressive model itself, so we don't need the Image Processor too!\r\n3. CLVP uses Rotary Position Embeddings.", "Hello @sanchit-gandhi I have - \r\n- pushed the changed that you asked.\r\n- implemented the tokenizer and respective tests.\r\n\r\nFor the `Feature Extractor` I wanted to ask some things before going forward - \r\n\r\n`CLVP` compares the text embeddings generated by the tokenizer and the latent speech embeddings generated by an autoregressive model(here `gpt2`), So IMO we need a Feature Extractor which generates latent embeddings, in other words the feature extractor must take the tokenizer's outputs and then apply a gpt2 model on it and then give us the output. But I don't think that it complies with the design of the library. (I don't know any Feature Extractor which uses another transformer model for processing. :( )\r\n\r\nIn CLIP it was reasonable to use a Feature Extractor to just read images but I don't think it will be straight forward in this case.\r\n\r\nSo how should we design the Feature Extractor and what are your thoughts on this? ", "Hi @ylacombe thanks a lot for the guidance!\r\n\r\nI have a small doubt/question about the `Prenet` part - \r\nI believe that we must add an autoregressive model in the prenet after the MEL encoder. The reasons are that \r\n\r\n- The speech inputs(mel spectrograms) will be passed to the prenet and then to the speech transformer(`feature extractor->prenet->speech transformer`) and since the speech transformer has an embedding layer at the front, it will give us error if we directly pass the outputs of the [mel encoder](https://github.com/neonbjb/tortoise-tts/blob/3c4d9c51316cd2421cc2dea11ac3a7a2d3394acd/tortoise/models/autoregressive.py#L198) to the transformer. So we must use the autoregressive model to convert the outputs of the `mel encoder` into `codes` and then pass it to the `speech transformer`. \r\n- Also the CLVP speech transformer was trained to differentiate between `text tokens `and the `codes` generated by an `gpt2` so it only makes sense to have the `gpt2` model in the pipeline. 
\r\n\r\n\r\nSo to wrap it up I would suggest something like - \r\n\r\n```python\r\nfrom transformers import GPT2LMHeadModel # the autoregressive model\r\n\r\nclass CLVPPrenet(nn.Module):\r\n def __init__(cfg):\r\n super().__init__()\r\n self.mel_encoder = CLVPMelEncoder(cfg)\r\n self.autoreg = GPT2LMHeadModel(cfg)\r\n def forward(self, voice_mel_spectrograms, text_tokens):\r\n x = self.mel_encoder(voice_mel_spectrograms)\r\n # similar to this line - https://github.com/neonbjb/tortoise-tts/blob/3c4d9c51316cd2421cc2dea11ac3a7a2d3394acd/tortoise/api.py#L416\r\n codes = self.autoreg.generate(x, text_tokens)\r\n \r\n return codes\r\n```\r\n\r\nAnd the `CLVPFeatureExtractor` will only care about outputting `mel spectrograms` as you said.\r\n\r\nPlease let me know what do you think of this design.", "Hi @susnato,\r\nIndeed, it makes sense to add all the submodels that are used by the model, so I agree to add them to your modeling code.\r\n\r\nOne note though: you shouldn't mix forward and generate in your forward pass, as this mixes two different concepts and as users might want to pass some specific generate kwargs when generating.\r\n\r\nA possible solution for that is to bypass the use of the `CLVPPrenet` and to directly use `CLVPMelEncoder` and `GPT2LMHeadModel` in your final model. (and at the end of the day, using them during the final model.generate)\r\n\r\nTwo last things to consider is that:\r\n1. you should check that the gpt2 model of the original code indeed corresponds to our `GPT2LMHeadModel` before using it\r\n2. If it corresponds, you might want to load it with the `AutoModel` class (which will load `GPT2LMHeadModel` if the config is correctly set!)\r\n\r\n\r\nLet me know if you have more questions!\r\n", "Hi @ylacombe first of all apologies for the huge delay. I have pushed some major changes - \r\n\r\n- Added Feature Extractor and the tests\r\n- Added Processor and the tests\r\n- Added `CLVPAutoRegressiveLMHeadModel` which is just a `GPT2LMHeadModel` but with changes to make sure that the outputs are same\r\n- Added a `generate()` method to the `CLVPModel` which calls the `CLVPAutoRegressiveLMHeadModel`'s generate method to get `speech_ids` and then processes it using the speech model.\r\n- reworked the tests.", "Hi @ylacombe , I have addressed all of your comments except these 3 - \r\n- For [this](https://github.com/huggingface/transformers/pull/24745#discussion_r1313185072) and [this](https://github.com/huggingface/transformers/pull/24745#discussion_r1314639079) I am still waiting for the views/thoughts of @ArthurZucker, @sanchit-gandhi and @amyeroberts before proceeding.\r\n- For the [comment](https://github.com/huggingface/transformers/pull/24745#discussion_r1312992773) related to adding `CLVPTokenizerFast`, I have asked some additional questions in that thread.\r\n\r\nPlease review it and let me know if more changes are needed or not other than those 3.", "Hi @ylacombe, I have pushed the changes you asked in your last review and answered the questions in their respective thread.\r\n> Anyways, I left some small comments, mostly nits. My last request would be to verify that your model also work when passing multiple audio inputs, since I only code snippets with mostly one sample. 
Does batching correctly work with CLVP ?\r\n\r\nAs I see this, there are 4 possible batching scenarios :\r\n- [x] 1 Text vs N Audios \r\nHere we just repeat the text tokens N times, such that we are generating different responses of the same text with different voices.\r\n- [x] N Texts vs 1 Audio\r\nHere we just repeat the audio N times, such that we are generating responses for multiple texts with the same voice.\r\n- [x] N Texts vs N Audios\r\nHere we generate the response in a way that the 2nd response will be generated using the 2nd text and the 2nd audio.\r\n- [ ] N Texts vs M Audio\r\nThis is where it gets messy. We throw a [Value error here](https://github.com/susnato/transformers/blob/fbbff32b5d611a08804c1be7a63498901040753a/src/transformers/models/clvp/modeling_clvp.py#L520) because we don't know which text correspond to which audio. We can improve this by considering that each text corresponds to every audio and generating N*M speech candidates and comparing them with N Texts.\r\n\r\nSo , yes the batching works for CLVP except the last scenario(as of now).", "Hi @ArthurZucker, just to be clear on the naming part - \r\nWe will have 3 configs - `ClvpTextAndSpeechConfig`, `ClvpDecoderConfig` and `ClvpConfig` to bind them. \r\n\r\nFor the models we will have `ClvpTextAndSpeechModel` (which is the `CLVPTransformerWithProjection`), `ClvpDecoder`, `ClvpDecoderLMHead` and will keep the `ClvpModel` as it is.\r\n\r\nDo you approve these naming? ", "Let's forget about the `TextAndSpeech` it's just a general transformers. We could just have a `ClvpEncoderConfig` (encoder that can be used for both the speech and the text encoder) `ClvpDecoderConfig` and the `ClvpConfig`. \r\nRegarding the modeling, `ClvpEncoder`, `ClvpDecoder` , `ClvpPretrainedModel`, ,`ClvpModel` and `ClvpModelForConditionalGeneration` . You should really have a look at `MusicGen`'s modelling code 😉 ", "Hi @ArthurZucker, I have refactored both the `modeling` code and `checkpoint conversion` code and worked on your comments. \r\nFor the modeling code I have taken inspiration from the `MusicGen` and for the checkpoint script I have followed the `Whisper` (as you suggested). ", "Hi @ylacombe, I have pushed the changes and answered the remaining question [here](https://github.com/huggingface/transformers/pull/24745#discussion_r1340236251). ", "Hey @susnato, let's discuss the remaining question [here](https://github.com/huggingface/transformers/pull/24745#discussion_r1340236251) before asking @ArthurZucker to review!", "Hey @susnato - would you mind marking all completed conversations as 'resolved'? This greatly helps the next reviewer know which parts of the PR are still pending, and which are complete. I'll endeavour to provide you with a review as soon as possible after this! Ping me as soon as you've done so and I'll get you a review 🤗", "Hi @sanchit-gandhi , I have marked all completed conversations as 'resolved'. Here is a brief explanation about the changes from your last review - \r\n- Feature Extractor and Tokenizer of `tortoise-tts` have been implemented for this model.\r\n- we have incorporated the `autoregressive model` and `conditional_encoder` from `tortoise-tts` here. They are called `ClvpDecoder` and `ClvpConditioningEncoder` respectively. 
Also we have introduced `ClvpDecoderConfig` to store the configs for decoder.\r\n- The Clvp text and speech model's have been squeezed into a single `ClvpEncoder` to reduce reduncancy also their configs(`clvptextconfig` and `clvpspeechconfig`) have been reduced to `ClvpEncoderConfig`.\r\n- We have implemented a single model `ClvpForConditionalGeneration` which takes care for the decoder and encoder models. `ClvpForConditionalGeneration.generate()` first calls the generate of the `ClvpDecoder`(`autoregressive_model`) and generates the `speech_candidates` and right after that it applies the both text and speech encoders (`ClvpEncoder`) to filter out the best candidates (just as clvp does.)", "Hi @sanchit-gandhi , I have pushed the changes except [this](https://github.com/huggingface/transformers/pull/24745#discussion_r1344603740), [this](https://github.com/huggingface/transformers/pull/24745#discussion_r1344246928) and [this](https://github.com/huggingface/transformers/pull/24745#discussion_r1343917195).\r\nI have mentioned the reasons in their respective threads.\r\n\r\nAlso I have marked all implemented comments as resolved.", "Hi @sanchit-gandhi , I have modified the tokenizer to output two sets of `input_ids`. Let me know if you are ok with the change.", "Hi @ArthurZucker, I have pushed the changes that you requested in the last review.", "Okay I'll review again then ", "Hello @ArthurZucker, I have simplified the logic of the tokenizer.(I have reverted back to the previous version where we only pass `input_ids` and add bos and eos for `ClvpConditioningEncoder`)\r\nAlso I have worked on your comments \r\n\r\nPlease review it and let me know if this works with you. ", "Gentle ping @ArthurZucker for a follow up review. :)\r\n", "Yes!", "Hi @ArthurZucker, I have pushed the changes that you requested in the last review. ", "Let's just fix the CIs and we can merge", "Hello @ArthurZucker, the CI is green now! :hugs: ", "Hi @ArthurZucker, I have pushed some changes related to using `_prepare_4d_causal_attention_mask` and slightly reworked the `.md` file.\r\n\r\nPlease let me know if we need any more changes before we can merge. ", "Awesome, I'll review these and wait for the CI to go green! 🚀 ", " The CI is red but the error seems to be unrelated to this PR.\r\nRebasing again." ]
1,689
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds CLVP, which is an integral part of `tortoise-tts`. Required for `tortoise-tts` integration in HF diffusers(Please see [this issue](https://github.com/huggingface/diffusers/issues/3891)). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. ([link](https://github.com/huggingface/diffusers/issues/3891)) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24745", "html_url": "https://github.com/huggingface/transformers/pull/24745", "diff_url": "https://github.com/huggingface/transformers/pull/24745.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24745.patch", "merged_at": 1699624150000 }
https://api.github.com/repos/huggingface/transformers/issues/24744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24744/comments
https://api.github.com/repos/huggingface/transformers/issues/24744/events
https://github.com/huggingface/transformers/issues/24744
1,798,284,989
I_kwDOCUB6oc5rL6a9
24,744
Import error for relative import of module_name = 'testing_utils'
{ "login": "teddius", "id": 890232, "node_id": "MDQ6VXNlcjg5MDIzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/890232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/teddius", "html_url": "https://github.com/teddius", "followers_url": "https://api.github.com/users/teddius/followers", "following_url": "https://api.github.com/users/teddius/following{/other_user}", "gists_url": "https://api.github.com/users/teddius/gists{/gist_id}", "starred_url": "https://api.github.com/users/teddius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/teddius/subscriptions", "organizations_url": "https://api.github.com/users/teddius/orgs", "repos_url": "https://api.github.com/users/teddius/repos", "events_url": "https://api.github.com/users/teddius/events{/privacy}", "received_events_url": "https://api.github.com/users/teddius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @teddius \r\n\r\nCould you show us the command or the python script you run that gives this error.\r\n\r\nIt's not super clear what\r\n```\r\nimport transformers in a pytest\r\n```\r\nthis means. \r\n\r\nThank you in advance!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info Error message: ` self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> module_name = 'testing_utils' def _get_module(self, module_name: str): try: > return importlib.import_module("." + module_name, self.__name__) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1086: name = '.testing_utils', package = 'transformers' def import_module(name, package=None): """Import a module. The 'package' argument is required when performing a relative import. It specifies the package to use as the anchor point from which to resolve the relative import to an absolute import. """ level = 0 if name.startswith('.'): if not package: msg = ("the 'package' argument is required to perform a relative " "import for {!r}") raise TypeError(msg.format(name)) for character in name: if character != '.': break level += 1 > return _bootstrap._gcd_import(name[level:], package, level) ../../miniconda3/envs/my_project/lib/python3.8/importlib/__init__.py:127: name = 'transformers.testing_utils', package = 'transformers', level = 1 > ??? <frozen importlib._bootstrap>:1014: name = 'transformers.testing_utils' import_ = <function _gcd_import at 0x7fc0abbb0430> > ??? <frozen importlib._bootstrap>:991: name = 'transformers.testing_utils' import_ = <function _gcd_import at 0x7fc0abbb0430> > ??? <frozen importlib._bootstrap>:975: spec = ModuleSpec(name='transformers.testing_utils', loader=<_pytest.assertion.rewrite.AssertionRewritingHook object at 0x7fc0975af460>, origin='/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py') > ??? <frozen importlib._bootstrap>:671: self = <_pytest.assertion.rewrite.AssertionRewritingHook object at 0x7fc0975af460> module = <module 'transformers.testing_utils' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py'> def exec_module(self, module: types.ModuleType) -> None: assert module.__spec__ is not None assert module.__spec__.origin is not None fn = Path(module.__spec__.origin) state = self.config.stash[assertstate_key] self._rewritten_names[module.__name__] = fn # The requested module looks like a test file, so rewrite it. This is # the most magical part of the process: load the source, rewrite the # asserts, and load the rewritten source. We also cache the rewritten # module code in a special pyc. We must be aware of the possibility of # concurrent pytest processes rewriting and loading pycs. To avoid # tricky race conditions, we maintain the following invariant: The # cached pyc is always a complete, valid pyc. Operations on it must be # atomic. POSIX's atomic rename comes in handy. write = not sys.dont_write_bytecode cache_dir = get_cache_dir(fn) if write: ok = try_makedirs(cache_dir) if not ok: write = False state.trace(f"read only directory: {cache_dir}") cache_name = fn.name[:-3] + PYC_TAIL pyc = cache_dir / cache_name # Notice that even if we're in a read-only directory, I'm going # to check for a cached pyc. This may not be optimal... 
co = _read_pyc(fn, pyc, state.trace) if co is None: state.trace(f"rewriting {fn!r}") source_stat, co = _rewrite_test(fn, self.config) if write: self._writing_pyc = True try: _write_pyc(state, co, source_stat, pyc) finally: self._writing_pyc = False else: state.trace(f"found cached rewritten pyc for {fn}") > exec(co, module.__dict__) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:168: import collections import contextlib import doctest import functools import inspect import logging import multiprocessing import os import re import shlex import shutil import subprocess import sys import tempfile import time import unittest from collections.abc import Mapping from io import StringIO from pathlib import Path from typing import Iterable, Iterator, List, Optional, Union from unittest import mock import huggingface_hub import requests > from _pytest.doctest import ( Module, _get_checker, _get_continue_on_failure, _get_runner, _is_mocked, _patch_unwrap_mock_aware, get_optionflags, import_path, ) E ImportError: cannot import name 'Module' from '_pytest.doctest' (/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/doctest.py) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py:39: ImportError The above exception was the direct cause of the following exception: args = () kwargs = {'end_date_str': '1990-02-13', 'is_valid': False, 'start_date_str': '1980-02-12'} def wrapper(*args, **kwargs): > with self as time_factory: ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:800: self = <freezegun.api._freeze_time object at 0x7fbf5285f8e0> def __enter__(self): > return self.start() ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:633: self = <freezegun.api._freeze_time object at 0x7fbf5285f8e0> def start(self): if self.auto_tick_seconds: freeze_factory = StepTickTimeFactory(self.time_to_freeze, self.auto_tick_seconds) elif self.tick: freeze_factory = TickingDateTimeFactory(self.time_to_freeze, real_datetime.now()) else: freeze_factory = FrozenDateTimeFactory(self.time_to_freeze) is_already_started = len(freeze_factories) > 0 freeze_factories.append(freeze_factory) tz_offsets.append(self.tz_offset) ignore_lists.append(self.ignore) tick_flags.append(self.tick) if is_already_started: return freeze_factory # Change the modules datetime.datetime = FakeDatetime datetime.date = FakeDate time.time = fake_time time.monotonic = fake_monotonic time.perf_counter = fake_perf_counter time.localtime = fake_localtime time.gmtime = fake_gmtime time.strftime = fake_strftime if uuid_generate_time_attr: setattr(uuid, uuid_generate_time_attr, None) uuid._UuidCreate = None uuid._last_timestamp = None copyreg.dispatch_table[real_datetime] = pickle_fake_datetime copyreg.dispatch_table[real_date] = pickle_fake_date # Change any place where the module had already been imported to_patch = [ ('real_date', real_date, FakeDate), ('real_datetime', real_datetime, FakeDatetime), ('real_gmtime', real_gmtime, fake_gmtime), ('real_localtime', real_localtime, fake_localtime), ('real_monotonic', real_monotonic, fake_monotonic), ('real_perf_counter', real_perf_counter, fake_perf_counter), ('real_strftime', real_strftime, fake_strftime), ('real_time', real_time, fake_time), ] if _TIME_NS_PRESENT: time.time_ns = fake_time_ns to_patch.append(('real_time_ns', real_time_ns, fake_time_ns)) if _MONOTONIC_NS_PRESENT: time.monotonic_ns = fake_monotonic_ns to_patch.append(('real_monotonic_ns', 
real_monotonic_ns, fake_monotonic_ns)) if _PERF_COUNTER_NS_PRESENT: time.perf_counter_ns = fake_perf_counter_ns to_patch.append(('real_perf_counter_ns', real_perf_counter_ns, fake_perf_counter_ns)) if real_clock is not None: # time.clock is deprecated and was removed in Python 3.8 time.clock = fake_clock to_patch.append(('real_clock', real_clock, fake_clock)) self.fake_names = tuple(fake.__name__ for real_name, real, fake in to_patch) self.reals = {id(fake): real for real_name, real, fake in to_patch} fakes = {id(real): fake for real_name, real, fake in to_patch} add_change = self.undo_changes.append # Save the current loaded modules self.modules_at_start = set(sys.modules.keys()) with warnings.catch_warnings(): warnings.filterwarnings('ignore') for mod_name, module in list(sys.modules.items()): if mod_name is None or module is None or mod_name == __name__: continue elif mod_name.startswith(self.ignore) or mod_name.endswith('.six.moves'): continue elif (not hasattr(module, "__name__") or module.__name__ in ('datetime', 'time')): continue > module_attrs = _get_cached_module_attributes(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:722: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _get_cached_module_attributes(module): module_hash, cached_attrs = _GLOBAL_MODULES_CACHE.get(module.__name__, ('0', [])) if _get_module_attributes_hash(module) == module_hash: return cached_attrs # cache miss: update the cache and return the refreshed value > _setup_module_cache(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:129: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _setup_module_cache(module): date_attrs = [] > all_module_attributes = _get_module_attributes(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:108: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _get_module_attributes(module): result = [] try: module_attributes = dir(module) except (ImportError, TypeError): return result for attribute_name in module_attributes: try: > attribute_value = getattr(module, attribute_name) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:97: self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> name = 'testing_utils' def __getattr__(self, name: str) -> Any: if name in self._objects: return self._objects[name] if name in self._modules: > value = self._get_module(name) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1074: self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> module_name = 'testing_utils' def _get_module(self, module_name: str): try: return importlib.import_module("." 
+ module_name, self.__name__) except Exception as e: > raise RuntimeError( f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" f" traceback):\n{e}" ) from e E RuntimeError: Failed to import transformers.testing_utils because of the following error (look up to see its traceback): E cannot import name 'Module' from '_pytest.doctest' (/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/doctest.py) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1088: RuntimeError ` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import transformers in a pytest ### Expected behavior No import error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24744/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24743/comments
https://api.github.com/repos/huggingface/transformers/issues/24743/events
https://github.com/huggingface/transformers/issues/24743
1,798,240,479
I_kwDOCUB6oc5rLvjf
24,743
T5 Tokenizer Adds Space after Each Added (Extra) Token
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think a fix is in\r\n\r\nhttps://github.com/huggingface/transformers/pull/24622\r\n\r\n", "FYI: that PR is not merged yet into `main` branch", "Let's wait until we merge to close! ", "@ArthurZucker Hi, this issue still exists after updating transformers to the latest 4.31.0 with #24622", "Hey! \r\nIt is adressed for slow tokenizer, which are part of transformers! Fast tokenizers will need to wait a bit. It is also linked to the conversion script and meta space that need to be used similar to Llama\r\n```python \r\nIn [6]: tokenizer = AutoTokenizer.from_pretrained(\"t5-base\", legacy = False, use_fast = False)\r\n/fsx/arthur/miniconda3/envs/py10/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:199: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\nFor now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.\r\n- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n warnings.warn(\r\n\r\nIn [7]: tokenizer.add_tokens([\"asdfg\"], special_tokens=False)\r\nOut[7]: 1\r\n\r\nIn [8]: tokenizer.tokenize(\"asdfgwordtimeasdfgtime\")\r\nOut[8]: ['asdfg', 'word', 'time', 'asdfg', 'time']\r\n```\r\nthe key is that you need to set `legacy=False` and `use_fast = False` because fast tokenizer is not fixed yet 😉 " ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: (NA) - Using distributed or parallel set-up in script?: (NA) ### Who can help? @Arthu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```ipython In [1]: from transformers import AutoTokenizer In [2]: tokenizer = AutoTokenizer.from_pretrained("./models/t5-base/") In [3]: tokenizer.add_tokens(["asdfg"], special_tokens=False) Out[3]: 1 In [4]: tokenizer.tokenize("asdfgwordtimeasdfgtime") Out[4]: ['asdfg', '▁word', 'time', 'asdfg', '▁time'] ``` ### Expected behavior tokenizer return `['asdfg', 'word', 'time', 'asdfg', 'time']`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24743/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24742/comments
https://api.github.com/repos/huggingface/transformers/issues/24742/events
https://github.com/huggingface/transformers/issues/24742
1,798,204,938
I_kwDOCUB6oc5rLm4K
24,742
Problems when using PyTorch Class _Dataset_ in model fine-tuning
{ "login": "Cassius31", "id": 112740294, "node_id": "U_kgDOBrhHxg", "avatar_url": "https://avatars.githubusercontent.com/u/112740294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Cassius31", "html_url": "https://github.com/Cassius31", "followers_url": "https://api.github.com/users/Cassius31/followers", "following_url": "https://api.github.com/users/Cassius31/following{/other_user}", "gists_url": "https://api.github.com/users/Cassius31/gists{/gist_id}", "starred_url": "https://api.github.com/users/Cassius31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Cassius31/subscriptions", "organizations_url": "https://api.github.com/users/Cassius31/orgs", "repos_url": "https://api.github.com/users/Cassius31/repos", "events_url": "https://api.github.com/users/Cassius31/events{/privacy}", "received_events_url": "https://api.github.com/users/Cassius31/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Cassius31 \r\n\r\nThis kind of question is better on [Hugging Face Forums](https://discuss.huggingface.co/).\r\n\r\nWe reserve the `transformers` GitHub repository for issues and feature requests.", "> Hi @Cassius31\r\n> \r\n> This kind of question is better on [Hugging Face Forums](https://discuss.huggingface.co/).\r\n> \r\n> We reserve the `transformers` GitHub repository for issues and feature requests.\r\n\r\nThanks for reminding! I am new to github and sorry for making trouble. I will delete this 3 hours later." ]
1,689
1,689
1,689
NONE
null
### System Info ```shell transformer 0.15.1 ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I try to use PyTorch Class _Dataset_ to create my own training task, but it seems make the model worse. After 4 epochs training the model outputs null string. Only a little part of the input can get right answer. I apprecitate it if someone could save me!!! ``` from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer import evaluate import numpy as np import torch from torch.utils.data import Dataset def compute_metrics(eval_preds): metric = evaluate.load("sacrebleu") preds, labels = eval_preds # In case the model returns more than the prediction logits if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100s in the labels as we can't decode them labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing to remove the "\n", "\t" and so on decoded_preds = [pred.strip() for pred in decoded_preds] decoded_labels = [[label.strip()] for label in decoded_labels] for i in range(10): print(decoded_preds[i]) print(decoded_labels[i]) result = metric.compute(predictions=decoded_preds, references=decoded_labels) return {"bleu": result["score"]} class MyDataset(Dataset): def __init__(self, file_name, tokenizer): self.text1 = [] self.text2 = [] self.read(file_name) self.read(file_name) self.encoding = tokenizer(self.text1, text_target=self.text2, truncation=True, max_length=128, padding=True, return_tensors="pt") def read(self, file_name): # Train data is like: "Go.\tVa !" with open(file_name, "r", encoding="utf-8") as file: while True: line = file.readline() if line == "": break self.text1.append(line.split("\t")[0]) self.text2.append(line.split("\t")[1]) def __getitem__(self, index): item = {k: v[index].clone().detach() for k, v in self.encoding.items()} return item def __len__(self): return len(self.text1) def train(): train_dataset = MyDataset(train_file, tokenizer) eval_dataset = MyDataset(eval_file, tokenizer) training_args = Seq2SeqTrainingArguments( output_dir="save_model", learning_rate=2e-5, per_device_train_batch_size=8, per_device_eval_batch_size=16, num_train_epochs=4, evaluation_strategy="no", save_strategy="epoch", save_total_limit=1, predict_with_generate=True, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=tokenizer, compute_metrics=compute_metrics ) print(trainer.evaluate()) trainer.train() print(trainer.evaluate()) if __name__ == "__main__": model_name = "t5-small" train_file = "fra-eng.txt" eval_file = "fra-eng.txt" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to("cuda:0") tokenizer = AutoTokenizer.from_pretrained(model_name) train() ``` ### Expected behavior ```shell I hope anyone could tell me if I use _Dataset_ Class in a wong way. ``` ### Checklist - [X] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24742/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24741/comments
https://api.github.com/repos/huggingface/transformers/issues/24741/events
https://github.com/huggingface/transformers/issues/24741
1,798,100,464
I_kwDOCUB6oc5rLNXw
24,741
past_key_values supporting more-than-one-token inputs
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening an issue! 🤗 \r\nThe reason why this is not working currently is that GPT2 is a pretty old model, thus it requires the attention mask to be passed when you want to generate. \r\nNow this is not an issue most of the time people use `gpt2_model.generate(input_ids, attention_mask)` and thus don't need to handle the past key values on their own! This is why, no it's not a very urgent problem and it's pretty much expected. Someone had a similar question see [here.](https://github.com/huggingface/transformers/issues/16811)\r\n\r\nThe issue rather lies in the creation of the positional ids, see in #18104 ", "Hey! Thanks for replying so soon.\r\n\r\nI tried `.generate` method with `input_ids` and `past_key_values` but it does not work as expected when I have more than one token in the `input_ids`. \r\n\r\nTo be specific, assume I'm building a QA system with GPT2. My input would be like \r\n```\r\nQ: a question\\nA: an answer\\nQ: a new question\\nA:\r\n```\r\nAfter generating the first answer, I have `past_key_values` until the token `answer`. However, when I want to generate the second answer, due to the insertion of the new question and prompts, I have to input `\\nQ: a new question\\nA: ` together with the `past_key_values` to the model. \r\n\r\nI expect the model to compute the hidden states of the given input, then generate the next token. However, according to the [code](https://github.com/huggingface/transformers/blob/35eac0df75c692c5b93c12f7eaf3279cab8bd7ce/src/transformers/models/gpt2/modeling_gpt2.py#L1012), the model will automatically truncate the `input_ids` as long as there is `past_key_values` passed alongside. This leads to false generation results.\r\n\r\nHere is my snippet:\r\n```\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nmodel = AutoModel.from_pretrained(\"gpt2\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\n\r\ninputs = \"I love you. Me too\"\r\ninputs = tokenizer(inputs, return_tensors=\"pt\")\r\ninputs1 = {}\r\ninputs2 = {}\r\ninputs3 = {}\r\ninputs4 = {}\r\nfor k, v in inputs.items():\r\n inputs1[k] = v[:, :-3]\r\n inputs2[k] = v[:, -3:]\r\n inputs3[k] = v[:, :-1]\r\n inputs4[k] = v[:, -1:]\r\n\r\nprint(f\"All inputs: {tokenizer.batch_decode(inputs['input_ids'])}\")\r\nprint(f\"Inputs1: {tokenizer.batch_decode(inputs1['input_ids'])}\")\r\nprint(f\"Inputs2: {tokenizer.batch_decode(inputs2['input_ids'])}\")\r\nprint(f\"Inputs3: {tokenizer.batch_decode(inputs3['input_ids'])}\")\r\nprint(f\"Inputs4: {tokenizer.batch_decode(inputs4['input_ids'])}\")\r\n\r\n# 1. Generate without cache. This is the standard output.\r\noutputs = model.generate(**inputs, max_new_tokens=5)\r\nprint(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:]))\r\n\r\n# 2. WRONG!!! Generate with partial past_key_values. Extend attention_mask by past_length because the model expect the input_ids of shape [B, 1] when past_key_values is not None\r\noutputs1 = model(**inputs1)\r\npast_length = outputs1.past_key_values[0][0].size(-2)\r\ninputs2[\"attention_mask\"] = torch.cat([torch.ones(1, past_length), inputs2['attention_mask'][:, :1]], dim=-1)\r\nprint(inputs2, past_length)\r\noutputs2 = model.generate(**inputs2, past_key_values=outputs1.past_key_values, max_new_tokens=5)\r\nprint(tokenizer.batch_decode(outputs2[:, inputs2['input_ids'].shape[1]:]))\r\n\r\n# 3. CORRECT!!! Generate with past_key_values of all previous tokens except the most recent one. 
Extend attention_mask by past_length because the model expect the input_ids of shape [B, 1] when past_key_values is not None\r\noutputs3 = model(**inputs3)\r\npast_length = outputs3.past_key_values[0][0].size(-2)\r\ninputs4[\"attention_mask\"] = torch.cat([torch.ones(1, past_length), inputs4['attention_mask'][:, :1]], dim=-1)\r\nprint(inputs4, past_length)\r\noutputs4 = model.generate(**inputs4, past_key_values=outputs3.past_key_values, max_new_tokens=5)\r\nprint(tokenizer.batch_decode(outputs4[:, inputs4['input_ids'].shape[1]:]))\r\n```", "What you are trying to do is very akin to the [`QuestionAnsweringPipeline](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.QuestionAnsweringPipeline)`, which implements all the pre-processing and post processing.\r\n\r\nThe generate function is made for general generation, and uses indeed the `prepare_inputs_for_generation` function. What you are expecting is not a supported behaviour, but rather a specific usage. In the `generate` function, we expect the `new_tokens` to be a single token per batch:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/generation/utils.py#L2474\r\n\r\n It should be possible to hack your way trough the code, probably by writing a logits processor that only returns the last prediction, so that when you compute the next token: \r\n\r\nhttps://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/generation/utils.py#L2465\r\n\r\nthe shape is correct. You also need to modify the `prepare_inputs_for_generation`.\r\n\r\ncc @gante for visibility\r\n", "Got it. Thank you Arthur." ]
1,689
1,690
1,690
CONTRIBUTOR
null
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `past_key_values` does not work when my `input_ids` are more than 1 token. For example: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = "I love hugging face. Me too" inputs = tokenizer(inputs, return_tensors="pt") inputs1 = {} inputs2 = {} for k, v in inputs.items(): inputs1[k] = v[:, :-3] inputs2[k] = v[:, -3:] outputs = model(**inputs) input1_outputs = model(**inputs1) # Error!! input2_outputs = model(**inputs2, past_key_values=input1_outputs.past_key_values) ``` ### Expected behavior I find the error is because you only extend the keys and values [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt2/modeling_gpt2.py#L319), while you forget to extend the `attention_mask` to the same size as the extended keys and values and hence the error. I also find this is common across different models, e.g. gpt-neo, gpt2. I think this is an urgent problem because many downstream applications like chatbots require this feature. I think you can extend the attention mask simply by concatenating `torch.ones((batch_size, past_length))` in front of the input attention mask to solve the problem. Here is my work around: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = "I love hugging face. Me too" inputs = tokenizer(inputs, return_tensors="pt") inputs1 = {} inputs2 = {} for k, v in inputs.items(): inputs1[k] = v[:, :-3] inputs2[k] = v[:, -3:] outputs = model(**inputs) input1_outputs = model(**inputs1) # without the following line, will raise errors inputs2["attention_mask"] = inputs.attention_mask input2_outputs = model(**inputs2, past_key_values=input1_outputs.past_key_values) # check print(((input2_outputs.logits - outputs.logits[:, -3:]) < 1e-4).all()) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24741/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24740/comments
https://api.github.com/repos/huggingface/transformers/issues/24740/events
https://github.com/huggingface/transformers/issues/24740
1,798,054,398
I_kwDOCUB6oc5rLCH-
24,740
docker/transformers-all-latest-gpu/Dockerfile Does Not Work
{ "login": "zyh3826", "id": 31238754, "node_id": "MDQ6VXNlcjMxMjM4NzU0", "avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zyh3826", "html_url": "https://github.com/zyh3826", "followers_url": "https://api.github.com/users/zyh3826/followers", "following_url": "https://api.github.com/users/zyh3826/following{/other_user}", "gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}", "starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions", "organizations_url": "https://api.github.com/users/zyh3826/orgs", "repos_url": "https://api.github.com/users/zyh3826/repos", "events_url": "https://api.github.com/users/zyh3826/events{/privacy}", "received_events_url": "https://api.github.com/users/zyh3826/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem in the used (pip) `index-url` being `https://mirrors.aliyun.com/pypi/simple`, where it can't find higher version of `huggingface_hub`. From the provided log, the highest version it has is `0.4.0`.\r\n\r\nIt should work fine if you are using `https://pypi.org/simple` as the pip index.", "> The problem in the used (pip) `index-url` being `https://mirrors.aliyun.com/pypi/simple`, where it can't find higher version of `huggingface_hub`. From the provided log, the highest version it has is `0.4.0`.\r\n> \r\n> It should work fine if you are using `https://pypi.org/simple` as the pip index.\r\n\r\nThanks for your reply. I find the reason is `Python version`, this command `apt install python3` only installs the python3.6, but huggingface-hub seems like needs python >= 3.7, so I change it to `apt install python3.8` at \r\n[line19](https://github.com/huggingface/transformers/blob/main/docker/transformers-all-latest-gpu/Dockerfile#L19), and it worked, but when installing `kenml`, a compile fail error happened, I think `kenml` should be installed via build not pip.\r\nFinally, I pull this image from your docker hub since build it myself really slow", "Yeah nice! If the pull from our docker hub works, it's definitely easier :-) \r\n\r\nBTW, may I wonder why you need to use our docker image. We only use it for our CI testing.", "> Yeah nice! If the pull from our docker hub works, it's definitely easier :-)\r\n> \r\n> BTW, may I wonder why you need to use our docker image. We only use it for our CI testing.\r\n\r\nbecause I need to install the newest pytorch and tensorflow, the easiest way is docker, since tensorflow2.xx having packages compatibility problems, so I find your Dockerfile", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info Docker: 20.10.12 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. change pip mirror to aliyun 2. `docker build -t huggingface .` ### Expected behavior an error happened: ``` Step 16/24 : RUN python3 -m pip install --no-cache-dir -e ./transformers[dev,onnxruntime] ---> Running in 1dc79ba974ba Looking in indexes: https://mirrors.aliyun.com/pypi/simple Obtaining file:///root/transformers Installing build dependencies: started Installing build dependencies: finished with status 'done' Checking if build backend supports build_editable: started Checking if build backend supports build_editable: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting packaging>=20.0 Downloading https://mirrors.aliyun.com/pypi/packages/05/8e/8de486cbd03baba4deef4142bd643a3e7bbe954a784dc1bb17142572d127/packaging-21.3-py3-none-any.whl (40 kB) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 Downloading https://mirrors.aliyun.com/pypi/packages/29/9c/936ebad6dd963616189d6362f4c2c03a0314cf2a221ba15e48dd714d29cf/tokenizers-0.13.3.tar.gz (314 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting tqdm>=4.27 Downloading https://mirrors.aliyun.com/pypi/packages/47/bb/849011636c4da2e44f1253cd927cfb20ada4374d8b3a4e425416e84900cc/tqdm-4.64.1-py2.py3-none-any.whl (78 kB) Collecting requests Downloading https://mirrors.aliyun.com/pypi/packages/2d/61/08076519c80041bc0ffa1a8af0cbd3bf3e2b62af10435d269a9d0f40564d/requests-2.27.1-py2.py3-none-any.whl (63 kB) ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.14.1 (from transformers[dev,onnxruntime]) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0) ERROR: No matching distribution found for huggingface-hub<1.0,>=0.14.1 The command 'sh -lc python3 -m pip install --no-cache-dir -e ./transformers[dev,onnxruntime]' returned a non-zero code: 1 ``` Could you tell me the right ` huggingface-hub` version? thanks a lot
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24740/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24739/comments
https://api.github.com/repos/huggingface/transformers/issues/24739/events
https://github.com/huggingface/transformers/issues/24739
1,797,740,474
I_kwDOCUB6oc5rJ1e6
24,739
Sorting FAISS scores for similarity search
{ "login": "NamburiSrinath", "id": 40389487, "node_id": "MDQ6VXNlcjQwMzg5NDg3", "avatar_url": "https://avatars.githubusercontent.com/u/40389487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NamburiSrinath", "html_url": "https://github.com/NamburiSrinath", "followers_url": "https://api.github.com/users/NamburiSrinath/followers", "following_url": "https://api.github.com/users/NamburiSrinath/following{/other_user}", "gists_url": "https://api.github.com/users/NamburiSrinath/gists{/gist_id}", "starred_url": "https://api.github.com/users/NamburiSrinath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NamburiSrinath/subscriptions", "organizations_url": "https://api.github.com/users/NamburiSrinath/orgs", "repos_url": "https://api.github.com/users/NamburiSrinath/repos", "events_url": "https://api.github.com/users/NamburiSrinath/events{/privacy}", "received_events_url": "https://api.github.com/users/NamburiSrinath/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Thank you for opening the issue.\r\n\r\nI tag someone in the team on the course chapter discussion page. Let's wait a reply first 🤗 .", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.15 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger, @stevhliu, @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import pandas as pd samples_df = pd.DataFrame.from_dict(samples) samples_df["scores"] = scores samples_df.sort_values("scores", ascending=False, inplace=True) ``` Please refer [https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt#using-faiss-for-efficient-similarity-search](https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt#using-faiss-for-efficient-similarity-search) ### Expected behavior I think the sorting of scores should be in ascending and not descending; because the default index is IndexFlatL2 which is L2/Euclidean distance. It will be great if these two changes are made to relevant documentation 1. Change `ascending=False` to `ascending=True` 2. There can be a reference that the default scores returned is the Euclidean distances (by digging sourcecode, I understood it's IndexFlatL2), but it will be easy to include this in documentation **Refer:** [https://discuss.huggingface.co/t/chapter-5-questions/11744/58?u=namburisrinath](https://discuss.huggingface.co/t/chapter-5-questions/11744/58?u=namburisrinath) **P.S:** I am sorry if this is the correct place to create the bug as the documentation needs to be changed accordingly. People consume Huggingface documentation a lot, so it needs to be fool-proof, so please correct if I am wrong!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24739/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24738/comments
https://api.github.com/repos/huggingface/transformers/issues/24738/events
https://github.com/huggingface/transformers/pull/24738
1,797,621,112
PR_kwDOCUB6oc5VIcQO
24,738
Add missing attention mask in ASTFeatureExtractor
{ "login": "lu-wo", "id": 71704466, "node_id": "MDQ6VXNlcjcxNzA0NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/71704466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lu-wo", "html_url": "https://github.com/lu-wo", "followers_url": "https://api.github.com/users/lu-wo/followers", "following_url": "https://api.github.com/users/lu-wo/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wo/gists{/gist_id}", "starred_url": "https://api.github.com/users/lu-wo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wo/subscriptions", "organizations_url": "https://api.github.com/users/lu-wo/orgs", "repos_url": "https://api.github.com/users/lu-wo/repos", "events_url": "https://api.github.com/users/lu-wo/events{/privacy}", "received_events_url": "https://api.github.com/users/lu-wo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24738). All of your documentation changes will be reflected on that endpoint.", "Hmm I believe this was omitted because Audio Spectrogram Transformer doesn't take a padding mask as input: https://github.com/huggingface/transformers/blob/cfc8a05305b4c89c5393766161d89ef24e72fdfa/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L567\r\n(only a head mask which is for masking out entire heads, not elements of the input sequence)\r\n\r\nThe way AST works is by padding / truncating all input audio samples to a fixed length, then computing the log-mel spectrogram on these adjusted inputs. Since we pad with zeros (silence), the model learns padding implicitly from the input features, and so doesn't require an attention mask. The same is done with the Whisper model, which also works directly on log-mel spectrograms: https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor\r\n\r\nSo I don't think it's necessary to return the attention mask in the feature extractor, since we'll just discard this immediately anyways. Probably instead we can remove the attribute `return_attention_mask` from the init?\r\n\r\nAlso cc @NielsRogge ", "Thank you @sanchit-gandhi, that makes sense! The way I encountered the issue is related to your explanation: I am using the ASTFeatureExtractor for another model, where it would be nice to have the attention mask, therefore I got confused that I didn't obtain the mask even though setting return_attention_mask=True in the ASTFeatureExtractor's init method. \r\n\r\nI think it would be nice to either add this functionality such that one could use it with attention masks (as the current documentation promises and as I tried to use it), or, as you said, remove it from the init. \r\n\r\nWhat do you think @NielsRogge ?", "Hey @lu-wo! Interesting use case! Unfortunately we can't maintain all classes in `transformers` to be compatible with every other combination of model, i.e. the `ASTFeatureExtractor` is designed to work with the `ASTModel`, but we can't guarantee that it works for every other model that takes a log-mel spectrogram as input. To do so would be a large maintenance burden, since we'd have to check that every combination works, and would probably complicate the code by introducing additional complexity.\r\n\r\nI would suggest trying one of two things here:\r\n1. Use a similar log-mel feature extractor that does return an attention mask. The Whisper feature extractor also computes log-mel spectrograms, but we require the attention mask if we use SpecAug during fine-tuning, so it can return the attention mask if required. Note that you may have to change the spectrogram hyper parameters to get parity with the AST feature extractor\r\n2. Copy the feature extractor code locally, and make the changes you require so that your use case works. If you subsequently train a new model is added to the `transformers` library, then the feature extractor we add for this model will use an attention mask since the model requires one!\r\n\r\nLet me know if the above two don't work - happy to brainstorm some more solutions with you!", "Thanks @sanchit-gandhi , I guess I can adapt the code for my purposes :) ", "Thanks for understanding and looking forward to your next PR @lu-wo! 🤗" ]
1,689
1,690
1,689
NONE
null
The ASTFeatureExtractor has a return_attention_mask attribute, but even if set to true, the feature extractor does not return it, because the code to do so is missing. I added the code which checks the lengths of the raw audio arrays before computing the spectrograms and then creates the attention mask for each element in the batch accordingly. @sanchit-gandhi - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24738", "html_url": "https://github.com/huggingface/transformers/pull/24738", "diff_url": "https://github.com/huggingface/transformers/pull/24738.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24738.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24737/comments
https://api.github.com/repos/huggingface/transformers/issues/24737/events
https://github.com/huggingface/transformers/issues/24737
1,797,529,993
I_kwDOCUB6oc5rJCGJ
24,737
Falcon Models saved with `save_pretrained` no longer get saved with python files
{ "login": "fadynakhla", "id": 67917337, "node_id": "MDQ6VXNlcjY3OTE3MzM3", "avatar_url": "https://avatars.githubusercontent.com/u/67917337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fadynakhla", "html_url": "https://github.com/fadynakhla", "followers_url": "https://api.github.com/users/fadynakhla/followers", "following_url": "https://api.github.com/users/fadynakhla/following{/other_user}", "gists_url": "https://api.github.com/users/fadynakhla/gists{/gist_id}", "starred_url": "https://api.github.com/users/fadynakhla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fadynakhla/subscriptions", "organizations_url": "https://api.github.com/users/fadynakhla/orgs", "repos_url": "https://api.github.com/users/fadynakhla/repos", "events_url": "https://api.github.com/users/fadynakhla/events{/privacy}", "received_events_url": "https://api.github.com/users/fadynakhla/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @sgugger \r\n\r\nI checked the code snippet and indeed only config and model bin files are saved. (tested on main branch of July 10th)\r\nI am more than happy to help and learn, but I would like to know if this behavior is expected before taking action.\r\n(and if you want to fix directly, ok for me)\r\n\r\n```\r\ntotal 27038084\r\n-rw-r--r-- 1 root root 773 Jul 12 12:41 config.json\r\n-rw-r--r-- 1 root root 116 Jul 12 12:41 generation_config.json\r\n-rw-r--r-- 1 root root 9962615667 Jul 12 12:41 pytorch_model-00001-of-00003.bin\r\n-rw-r--r-- 1 root root 9939388767 Jul 12 12:42 pytorch_model-00002-of-00003.bin\r\n-rw-r--r-- 1 root root 7784945757 Jul 12 12:42 pytorch_model-00003-of-00003.bin\r\n-rw-r--r-- 1 root root 16924 Jul 12 12:42 pytorch_model.bin.index.json\r\n```", "This is expected as the config will keep references to where the code lives, you can see it has:\r\n```\r\n \"auto_map\": {\r\n \"AutoConfig\": \"tiiuae/falcon-7b-instruct--configuration_RW.RWConfig\",\r\n \"AutoModelForCausalLM\": \"tiiuae/falcon-7b-instruct--modelling_RW.RWForCausalLM\"\r\n },\r\n```\r\n\r\nSaving then reloading with `from_pretrained` from the local dir works without issue on main. I don't know what exact code sample caused the issue but on my side:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"tiiuae/falcon-7b-instruct\", trust_remote_code=True)\r\nmodel.save_pretrained(\"/path/to/save\")\r\n\r\nnew_model = AutoModelForCausalLM.from_pretrained(\"/path/to/save\", trust_remote_code=True)\r\n```\r\nworks.", "Hey @sgugger apologies for the misunderstanding you're right I was mistaken and over simplified the code snippet causing the issue; after taking another look I've realized that the issue is how I've downloaded the model. Rather than using\r\n```\r\nAutoModelForCausalLM.from_pretrained(\"tiiuae/falcon-7b-instruct\", trust_remote_code=True)\r\n```\r\nI first download the model locally with\r\n```\r\ngit lfs install\r\ngit clone [email protected]:tiiuae/falcon-7b-instruct\r\n```\r\nif I inspect `config.json` I see this:\r\n```\r\n\"auto_map\": {\r\n  \"AutoConfig\": \"configuration_RW.RWConfig\",\r\n  \"AutoModelForCausalLM\": \"modelling_RW.RWForCausalLM\"\r\n},\r\n```\r\nwhich matches what is in the hub here: https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/config.json. \r\nThen when running \r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"/local/falcon-7b-instruct\", trust_remote_code=True)\r\nmodel.save_pretrained(\"/path/to/save\")\r\n\r\nnew_model = AutoModelForCausalLM.from_pretrained(\"/path/to/save\", trust_remote_code=True)\r\n```\r\nI get the error above. It may be that this is the expected behavior but it works fine with version `4.27.4` as in that case `save_pretrained()` actually copies over `configuration_RW.py` and `modelling_RW.py`. \r\n\r\nMy assumption is that this is issue is due to `RWConfig` and `RWModel` being defined within the model repo as opposed to within the transformers library but I may be mistaken.", "That I can reproduce. 
This should be fixed by the PR mentioned above.", "That's awesome thanks, just a question or two if that's alright so I can see if I understand what's going on here:\r\n```\r\n if os.path.isdir(pretrained_model_name_or_path):\r\n model_class.register_for_auto_class(cls.__name__)\r\n else:\r\n cls._model_mapping.register(config.__class__, model_class, exist_ok=True)\r\n```\r\nin case we are loading from a local trust remote code repo `model_class.register_for_auto_class()` sets `model_class._auto_class = cls.__name__` which I believe in the case of falcon results in `RWForCausalLM._auto_class = \"RWForCausalLM\"`\r\n\r\nThen in the call to `save_pretrained()` this block:\r\n```\r\n if self._auto_class is not None:\r\n custom_object_save(self, save_directory, config=self.config)\r\n```\r\nget's executed which results in the modelling files being saved along with the the weights and config files. Is that correct?\r\n\r\nEdit: one other question is there a reason why this `cls._model_mapping.register(config.__class__, model_class, exist_ok=True)` is used in stead of `cls.register(config.__class__, model_class, exist_ok=True)`?", "That's completely correct!\r\n\r\nAs for the second question, I haven't deep-dived to make sure the two do exactly the same thing, but it's probably the same yes. This line is only there so that `pipeline` does not complain that the model doesn't belong to the corresponding auto class when using remote code.", "Thanks again for all your help really appreciate it! Tested this with your PR and works on my end for local falcon models!\r\n\r\nAlso `cls.register()` just calls `cls._model_mapping.register()` with an additional check\r\n```\r\n @classmethod\r\n def register(cls, config_class, model_class, exist_ok=False):\r\n if hasattr(model_class, \"config_class\") and model_class.config_class != config_class:\r\n raise ValueError(\r\n \"The model class you are passing has a `config_class` attribute that is not consistent with the \"\r\n f\"config class you passed (model has {model_class.config_class} and you passed {config_class}. Fix \"\r\n \"one of those so they match!\"\r\n )\r\n cls._model_mapping.register(config_class, model_class, exist_ok=exist_ok)\r\n```\r\nSwitching that line out to `cls.register` doesn't cause the above value error at least when loading falcon with `from_pretrained` but not sure if there are cases where it would be benificial to not have the restriction that `model_class.config_class == config_class`", "I think it would be fine if we add an `and not exist_ok` in the test. Would you like to make a PR with those changes?", "Yeah would love to just want to make sure I understand the rationale behind adding `and not exist_ok`. \r\nCorrect me if I'm wrong but I think the reason is that if `exists_ok = True` we will overwrite `_model_mapping` anyway so we don't want to enforce the restriction that `model_class.config_class == config_class`; is that the right idea?", "Oh I completely misread your comment, thanks for asking a clarification. The test should be left as is, it is a consistency check, not an exist ok check. We can do the switch without adding anything.", "Ok makes sense; more than happy to still make a PR for that switch if it would be helpful", "Please go ahead!", "PR is linked above! One of us will have to rebase/fix conflicts as I've made these changes on top of main which hasn't incorporated your PR yet" ]
1,689
1,689
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.3 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No or N/A - Using distributed or parallel set-up in script?: No or N/A ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When saving `tiiuae/falcon` models using ``` from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct") model.save_pretrained("/path/to/save") ``` the python files `configuration_RW.py` and `modelling_RW.py` are no longer saved. Loading the model with `from_pretrained(...)` results in the following error: ``` >>> model = AutoModelForCausalLM.from_pretrained("/data/test-models/falcon-40b-instruct", trust_remote_code=True) Could not locate the configuration_RW.py inside /data/test-models/falcon-40b-instruct. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 456, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 953, in from_pretrained config_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs) File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 431, in get_class_from_dynamic_module final_module = get_cached_module_file( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 247, in get_cached_module_file resolved_module_file = cached_file( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/utils/hub.py", line 388, in cached_file raise EnvironmentError( OSError: /data/test-models/falcon-40b-instruct does not appear to have a file named configuration_RW.py. Checkout 'https://huggingface.co//data/test-models/falcon-40b-instruct/None' for available files. ``` ### Expected behavior To be able to load the model with `from_pretrained` after saving it with `save_pretrained` either by having the python files saved or pulling them from the hub. With transformers version = `4.27.4` using `save_pretrained()` does actually save the python files and the saved model can be loaded right away
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24737/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24737/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24736/comments
https://api.github.com/repos/huggingface/transformers/issues/24736/events
https://github.com/huggingface/transformers/pull/24736
1,797,508,647
PR_kwDOCUB6oc5VIDV3
24,736
Fix typo in LocalAgent
{ "login": "jamartin9", "id": 7027701, "node_id": "MDQ6VXNlcjcwMjc3MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/7027701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamartin9", "html_url": "https://github.com/jamartin9", "followers_url": "https://api.github.com/users/jamartin9/followers", "following_url": "https://api.github.com/users/jamartin9/following{/other_user}", "gists_url": "https://api.github.com/users/jamartin9/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamartin9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamartin9/subscriptions", "organizations_url": "https://api.github.com/users/jamartin9/orgs", "repos_url": "https://api.github.com/users/jamartin9/repos", "events_url": "https://api.github.com/users/jamartin9/events{/privacy}", "received_events_url": "https://api.github.com/users/jamartin9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24736). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This PR fixes a typo in LocalAgent. Crash Log: ``` Traceback (most recent call last): File "/gnu/git/hf-agent/./agent.py", line 18, in <module> agent.run(prompt) File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 335, in run result = self.generate_one(prompt, stop=["Task:"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 731, in generate_one encoded_inputs = self.tokenizer(prompt, return_tensors="pt").to(self._model_device) ^^^^^^^^^^^^^^^^^^ File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 727, in _model_device for param in self.mode.parameters(): ^^^^^^^^^ AttributeError: 'LocalAgent' object has no attribute 'mode'. Did you mean: 'model'? ``` Code that triggered the above crash ``` #!/usr/bin/env python3 import torch from transformers import LocalAgent model = "bigcode/tiny_starcoder_py" agent = LocalAgent.from_pretrained(model, torch_dtype=torch.bfloat16) text = "Sally sold sea shells down by the seashore." prompt = "Summarize the text given in the variable `text` and read it out loud." agent.run(prompt, text=text) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24736/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24736", "html_url": "https://github.com/huggingface/transformers/pull/24736", "diff_url": "https://github.com/huggingface/transformers/pull/24736.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24736.patch", "merged_at": 1689080691000 }
https://api.github.com/repos/huggingface/transformers/issues/24735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24735/comments
https://api.github.com/repos/huggingface/transformers/issues/24735/events
https://github.com/huggingface/transformers/issues/24735
1,797,497,355
I_kwDOCUB6oc5rI6IL
24,735
Distil* hanging on torch.distributed.barrier()
{ "login": "higopires", "id": 66256549, "node_id": "MDQ6VXNlcjY2MjU2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/66256549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/higopires", "html_url": "https://github.com/higopires", "followers_url": "https://api.github.com/users/higopires/followers", "following_url": "https://api.github.com/users/higopires/following{/other_user}", "gists_url": "https://api.github.com/users/higopires/gists{/gist_id}", "starred_url": "https://api.github.com/users/higopires/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/higopires/subscriptions", "organizations_url": "https://api.github.com/users/higopires/orgs", "repos_url": "https://api.github.com/users/higopires/repos", "events_url": "https://api.github.com/users/higopires/events{/privacy}", "received_events_url": "https://api.github.com/users/higopires/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is probably a setup error in the environment, as it means that one of the processes does not properly ping the others at the barrier line, and then it hangs forever.", "Is there a way to solve this setup error?", "i have same problem", "I run into this problem recently.", "> \r\n\r\nfor me it was gpu communication issue\r\ni used multi gpu from gpu cluster server.\r\nbut gpu cannot communicate each other so they cannot find other gpu comes to barrier() function. then they wait forever.\r\nit was gpu environment setting issue\r\nSo I have contacted the cluster manager.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,694
1,694
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @VictorSanh @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to run [Distil*](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) project with a custom dataset. After the preprocessing steps, I enter the following command to start the training (single-node multi-GPU): ``` CUDA_VISIBLE_DEVICES=0,1,2 N_GPU_NODE=3 N_NODES=1 NODE_RANK=0 python -m torch.distributed.launch \ --nproc_per_node=3 \ train.py \ --force \ --dump_path serialization_dir/my_first_training \ --data_file ./data/binarized_text.roberta-base.pickle \ --student_type roberta \ --student_config ./training_configs/distilroberta-base.json \ --student_pretrained_weights ~/higo/distilbert/serialization_dir/tf_roberta_048131723.pth \ --teacher_type roberta \ --teacher_name roberta-base \ --mlm \ --temperature 2.0 \ --alpha_ce 5.0 \ --alpha_mlm 2.0 \ --alpha_clm 0.0 \ --alpha_mse 0.0 \ --alpha_cos 1.0 \ --token_counts ./data/token_counts.binarized_text.roberta-base.pickle \ --freeze_pos_embs \ --freeze_token_type_embds \ --n_epoch 4 \ --batch_size 8 \ --gradient_accumulation_steps 256 \ --learning_rate 2e-4 \ --n_gpu 3 \ --seed 42 ``` When the script arrives [here](https://github.com/huggingface/transformers/blob/a074a5d34d6411fb00e83a2ed30acf23d8c976b5/examples/research_projects/distillation/distiller.py#L343), it get stuck, the train does not start and after a given span of time, I got a timeout error. I tried to set higher timeout values (up to 3 hours), with no result. At this given point, my `nvidia-smi` is shown like this: ``` +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+======================+======================| | 0 NVIDIA A100 80GB PCIe Off| 00000000:4F:00.0 Off | 0 | | N/A 34C P0 71W / 300W| 2479MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 1 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 | | N/A 34C P0 69W / 300W| 1851MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 2 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 | | N/A 36C P0 68W / 300W| 1827MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 3 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 | | N/A 42C P0 66W / 300W| 63151MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 69824 C .../home/u021274/higo/myenv/bin/python 2476MiB | | 1 N/A N/A 69825 C .../home/u021274/higo/myenv/bin/python 1848MiB | | 2 N/A N/A 69826 C .../home/u021274/higo/myenv/bin/python 1824MiB | | 3 N/A N/A 97463 C python3 63148MiB | +---------------------------------------------------------------------------------------+ ``` (Process at GPU 3 is from another researcher) To me, the little amount of allocated memory seems odd. I honestly don't have a clue of what can be happening. Checked some other threads, but nothing helped to make things clear. ### Expected behavior Start of single-node, multi-GPU distributed training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24735/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24734/comments
https://api.github.com/repos/huggingface/transformers/issues/24734/events
https://github.com/huggingface/transformers/issues/24734
1,797,203,090
I_kwDOCUB6oc5rHySS
24,734
bug: eval_accumulation_steps can lead to incorrect metrics
{ "login": "sjrl", "id": 10526848, "node_id": "MDQ6VXNlcjEwNTI2ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sjrl", "html_url": "https://github.com/sjrl", "followers_url": "https://api.github.com/users/sjrl/followers", "following_url": "https://api.github.com/users/sjrl/following{/other_user}", "gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}", "starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sjrl/subscriptions", "organizations_url": "https://api.github.com/users/sjrl/orgs", "repos_url": "https://api.github.com/users/sjrl/repos", "events_url": "https://api.github.com/users/sjrl/events{/privacy}", "received_events_url": "https://api.github.com/users/sjrl/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for the report! I'll look into a solution for this today", "@sjrl could you quickly verify that installing `transformers` via `pip install git+https://github.com/huggingface/transformers@fix-eval-accum-steps` solves this for you? Thanks!", "Hey @muellerzr thanks for the quick fix! And my apologies I actually can't seem to reproduce the error on my end, but I did check that your change also works. ", "@muellerzr Sorry for disturbing you. I noticed this PR's change\r\n\r\n```diff\r\n- if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0:\r\n+ if args.eval_accumulation_steps is not None and self.accelerator.sync_gradients:\r\n```\r\n\r\nbreaks the behaviour of evaluation accumulation as described https://github.com/huggingface/transformers/pull/25819. And in the latest v4.33.1, it has been changed partially back to\r\n\r\n```diff\r\n- if args.eval_accumulation_steps is not None and self.accelerator.sync_gradients:\r\n+ if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0 and self.accelerator.sync_gradients:\r\n```\r\n\r\nMay I ask what is the purpose of introducing `self.accelerator.sync_gradients` check in evaluation loop? In certain cases, this `self.accelerator.sync_gradients` will be set False in training which prevent the accumulation in the evaluation." ]
1,689
1,694
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? Hey @sgugger, I'm tagging you since this has to do with the trainer. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the `run_qa.py` script in the `examples/pytorch/question-answering/` folder ```bash python run_qa.py \ --model_name_or_path "sjrhuschlee/flan-t5-base-squad2" \ --dataset_name squad_v2 \ --output_dir "tmp/eval_squad_v2/" \ --version_2_with_negative True \ --max_seq_length 512 \ --doc_stride 128 \ --do_eval \ --per_device_eval_batch_size 24 \ --tf32 True \ --dataloader_num_workers 6 \ --preprocessing_num_workers 6 \ --bf16_full_eval \ --eval_accumulation_steps 2 \ --overwrite_output_dir False ``` I found that the calculated metrics when using `eval_accumulation_steps` is not always correct. When not using `eval_accumulation_steps` with the above script I find that I get the expected metrics. However, I found that I needed to use `eval_accumulation_steps` for evaluation of the `flan-t5` models with the above parameters on my system otherwise the memory usage on the GPU would fluctuate from 4 - 8GB which could cause an OOM. I believe I found the cause for the inconsistency in the metrics. Specifically this line https://github.com/huggingface/transformers/blob/a074a5d34d6411fb00e83a2ed30acf23d8c976b5/src/transformers/trainer.py#L3150 does not cover the edge case where the total number of batches in the evaluation is not exactly divisible by `eval_accumulation_steps`. For example, if `eval_accumulation_steps = 2` and the total number of batches is 613, then only the last batch is used when calculating `all_preds`. I was able to partially fix this problem by adding a new variable called `total_steps` and updating the if statement ```python logger.info(f"***** Running {description} *****") if has_length(dataloader): total_steps = len(dataloader) logger.info(f" Num examples = {self.num_examples(dataloader)}") else: total_steps = None logger.info(" Num examples: Unknown") ... if args.eval_accumulation_steps is not None and ( (step + 1) % args.eval_accumulation_steps == 0 or (step + 1) == total_steps ): ``` However, this will still be a problem for dataloaders that don't have a defined length. ### Expected behavior Using `eval_accumulation_steps` should work in every case even when the number of batches is not divisible by `eval_accumulation_steps`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24734/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24734/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24733/comments
https://api.github.com/repos/huggingface/transformers/issues/24733/events
https://github.com/huggingface/transformers/pull/24733
1,797,201,707
PR_kwDOCUB6oc5VHBh9
24,733
Docs: add `kwargs` type to fix formatting
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "After the PR (as seen in the doc preview):\r\n<img width=\"926\" alt=\"Screenshot 2023-07-10 at 18 55 21\" src=\"https://github.com/huggingface/transformers/assets/12240844/9329186d-86ee-4091-a5c5-bbee77ab83f7\">\r\n", "Happy to push a `doc-builder` side change if needed -- and to do the opposite of this PR: remove `Dict[str, Any]` from `kwargs` whenever it is present.\r\n\r\nJust let me know your preference :) If it's neutral for you, I think having the explicit type is friendly for Python newbies.", "@amyeroberts if you don't oppose, I'll merge this PR 🤗 ", "@gante Go for it! " ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do? As the title indicates: in several places our docs `kwargs` did not include its type, which made our doc builder treat it like a continuation of the previous parameter 💔 Example of currently broken docs ([this function](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/processors#transformers.ProcessorMixin.save_pretrained)): <img width="915" alt="Screenshot 2023-07-10 at 18 20 47" src="https://github.com/huggingface/transformers/assets/12240844/d027950f-7082-416a-b2da-e4f3712bd27c"> This PR is a result of CMD+F on the broken pattern, and applying the fix :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24733", "html_url": "https://github.com/huggingface/transformers/pull/24733", "diff_url": "https://github.com/huggingface/transformers/pull/24733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24733.patch", "merged_at": 1689088889000 }
https://api.github.com/repos/huggingface/transformers/issues/24732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24732/comments
https://api.github.com/repos/huggingface/transformers/issues/24732/events
https://github.com/huggingface/transformers/issues/24732
1,796,991,248
I_kwDOCUB6oc5rG-kQ
24,732
GPT2 model training, loss NaN
{ "login": "irfan767", "id": 29143478, "node_id": "MDQ6VXNlcjI5MTQzNDc4", "avatar_url": "https://avatars.githubusercontent.com/u/29143478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/irfan767", "html_url": "https://github.com/irfan767", "followers_url": "https://api.github.com/users/irfan767/followers", "following_url": "https://api.github.com/users/irfan767/following{/other_user}", "gists_url": "https://api.github.com/users/irfan767/gists{/gist_id}", "starred_url": "https://api.github.com/users/irfan767/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/irfan767/subscriptions", "organizations_url": "https://api.github.com/users/irfan767/orgs", "repos_url": "https://api.github.com/users/irfan767/repos", "events_url": "https://api.github.com/users/irfan767/events{/privacy}", "received_events_url": "https://api.github.com/users/irfan767/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @irfan767, thanks for raising an issue! \r\n\r\nQuestions about debugging code or custom training behaviour are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info Sometimes I get this error. while following this article about fine-tuning based on question and answers. [https://discuss.huggingface.co/t/fine-tuning-gpt2-for-question-answering/31895](url) I have just updated the code to create batches because my dataset is more extensive. Here is the code that I am using: ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import pandas as pd import torch from torch.utils.data import DataLoader, Dataset from transformers import GPT2Tokenizer, GPT2LMHeadModel class FeedbackEssentials(Dataset): def __init__(self, qa_pairs, tokenizer, max_length): self.qa_pairs = qa_pairs self.tokenizer = tokenizer self.max_length = max_length def __len__(self): return len(self.qa_pairs) def __getitem__(self, idx): question = self.qa_pairs[idx][0] text = f"{question} {self.tokenizer.eos_token}" input_ids = self.tokenizer.encode(text, add_special_tokens=True, max_length=self.max_length, padding='max_length', truncation=True) attention_mask = [1] * len(input_ids) # Assuming all tokens should be attended to return { 'input_ids': torch.tensor(input_ids), 'attention_mask': torch.tensor(attention_mask) } train_df = pd.read_csv('/Users/irfanyaqub/Downloads/Research Dataset/train_dataset.csv') val_df = pd.read_csv('/Users/irfanyaqub/Downloads/Research Dataset/val_dataset.csv') val_df=val_df[:10] def remove_anomalies(value): return value['Coding'].replace({'\*-': ''}, regex=True) train_df['Coding'] = remove_anomalies(train_df) val_df['Coding'] = remove_anomalies(val_df) tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer.add_special_tokens({'pad_token': '[PAD]'}) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') def text_manipulation(train_dataset): column1_values = train_dataset['Total Marks'].values column2_values = train_dataset['Coding'].values listOfLists = [[pair[0], pair[1]] for pair in zip(column1_values, column2_values)] text = "" for feedback in listOfLists: text += f"{feedback[0]} {feedback[1]} {tokenizer.eos_token}" return text training_dataset = text_manipulation(val_df) max_length_training = max(len(tokenizer.encode(qa_pair[0], add_special_tokens=True)) for qa_pair in training_dataset) dataset_training = FeedbackEssentials(training_dataset, tokenizer, max_length_training) batch_size = 4 dataloader = DataLoader(dataset_training, batch_size=batch_size, shuffle=True) optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) model.train() for epoch in range(1): for batch in dataloader: input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) optimizer.zero_grad() loss = model(input_ids.to(device), labels=input_ids.to(device))[0] loss.backward() optimizer.step() scheduler.step() if epoch % 100 == 0: print(f"Epoch {epoch}, Loss {loss.item()}") model.eval() def generate_response(question): input_ids = tokenizer.encode(question, add_special_tokens=True, return_tensors='pt').to(device) sample_output = model.generate(input_ids, do_sample=True, max_length=200, top_k=20, top_p=1.0) answer = tokenizer.decode(sample_output[0], skip_special_tokens=True) sentences = 
answer.split('. ') for sentence in sentences: if question in sentence: return sentence return answer ### Expected behavior Expected output will be something like that: question = “How to delete an account” response = generate_response(question) print(f"{question}\n {response}") Answer: How to delete an account How to delete an account <|question|> This tool allows you to:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24731/comments
https://api.github.com/repos/huggingface/transformers/issues/24731/events
https://github.com/huggingface/transformers/issues/24731
1,796,988,197
I_kwDOCUB6oc5rG90l
24,731
LLAMA for sequence classification
{ "login": "lathashree01", "id": 35988419, "node_id": "MDQ6VXNlcjM1OTg4NDE5", "avatar_url": "https://avatars.githubusercontent.com/u/35988419?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lathashree01", "html_url": "https://github.com/lathashree01", "followers_url": "https://api.github.com/users/lathashree01/followers", "following_url": "https://api.github.com/users/lathashree01/following{/other_user}", "gists_url": "https://api.github.com/users/lathashree01/gists{/gist_id}", "starred_url": "https://api.github.com/users/lathashree01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lathashree01/subscriptions", "organizations_url": "https://api.github.com/users/lathashree01/orgs", "repos_url": "https://api.github.com/users/lathashree01/repos", "events_url": "https://api.github.com/users/lathashree01/events{/privacy}", "received_events_url": "https://api.github.com/users/lathashree01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @Lathashree01 \r\nthanks for reporting, can you try to load the model in `bfloat16`, also what is the GPU hardware you are using? ", "Hi @younesbelkada , \r\nI am using Quadro RTX 6000 node with 8 GPUs of 24GB memory.\r\n\r\nWhen I run using - bfloat16, I am getting below error:\r\n`TypeError: Got unsupported ScalarType BFloat16`\r\n @ preds = output['logits'].detach().cpu().numpy()\r\n", "Can you replace the lines that causing that error to:\r\n```python\r\n preds = output['logits'].detach().cpu().float().numpy()\r\n labels = b_labels.to('cpu').float().numpy()\r\n```\r\nfrom what I can they are used to compute the accuracy only so it should be fine\r\nAlso can you share your bitsandbytes version?", "Hi @younesbelkada ,\r\n\r\nI tried removing the above lines and ran with bfloat16; I see loss values normally, hope everything works as expected. \r\nThank you so much. I was stuck in this and was trying out so many other things. \r\n\r\nAlso, I changed the accuracy calculation to:\r\n```\r\n preds = output['logits'].detach().cpu().to(torch.float16)\r\n labels = b_labels.to('cpu').numpy()\r\n```\r\n\r\nMy bitsandBytes version is - bitsandbytes 0.39.0\r\nHowever, I do see some errors when I run `python -m bitsandbytes`. \r\n\r\nUserWarning: /home/anaconda3/envs/finetuneenv did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searchingfurther paths...\r\n..\r\nthings related to posixpath\r\n....\r\nraise RuntimeError('Something when wrong when trying to find file. Maybe you do not have a linux system?')\r\nRuntimeError: Something when wrong when trying to find file. Maybe you do not have a linux system?", "Thanks ! Does training works with PEFT + int8 as showed in the script you shared? i.e. do you get that error only if you do `python -m bitsandbytes` ?\r\nAlso, there should be no need to call `model.cuda()` after you have quantized the model", "No, the training does not work fine with int8; logits and loss goes to NaN.\r\n\r\nAbove mentioned error about bitsandbytes appears when I start training (even in bfloat16) and when I do `python -m bitsandbytes`.\r\n\r\nSince I am not using any components on bnb and I am also not loading the model in int8, I ignored the error while training with bfloat16. Is that fine? Please let me know what I can do if that's a problem.\r\n\r\n", "I see now thanks ! \r\nfor the bnb issue probably your CUDA + bnb installation might be broken, you can post the issue on the bitsandbytes repository by stating what hardware and operating system you are using\r\nRegarding your hotfix, which is to fine-tune in bf16 I think it is fine to do so, bfloat16 training is recommended over float16 training", "> Regarding your hotfix, which is to fine-tune in bf16 I think it is fine to do so, bfloat16 training is recommended over float16 training\r\n\r\nOh, that's relieving. \r\n\r\nThank you, I will check on bitsandbytes error.", "I have the same errors.", "> I have the same errors.\r\n\r\nMine got resolved when trained with bf16. \r\nIf you could please elaborate on where your error is or maybe post your error trace. It will be helpful for the team or others to provide any suggestions.\r\n", "Hope you're doing well! 👋 I'm currently working on a project similar to yours. Unfortunately, I cannot get model's outputs:\r\n```\r\nException: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. 
Found in aten.embedding.default(*(Parameter containing:\r\ntensor([[ 9.8884e-05, -2.3329e-04, 5.8460e-04, ..., -3.4237e-04,\r\n 5.9724e-05, -1.1957e-04],\r\n [ 1.5289e-02, -1.2154e-02, 1.2512e-02, ..., 1.3092e-02,\r\n 7.2174e-03, -6.8045e-04],\r\n [ 1.7433e-03, 1.7633e-03, -1.4465e-02, ..., -1.1444e-02,\r\n -1.2665e-02, 3.7289e-04],\r\n ...,\r\n [-9.0179e-03, 3.0807e-02, -1.6708e-02, ..., -1.2680e-02,\r\n 1.0437e-02, 4.2343e-03],\r\n [-1.1368e-02, -1.4801e-02, -3.5667e-03, ..., 6.5308e-03,\r\n -2.2263e-02, -6.1455e-03],\r\n [-1.3992e-02, 1.6985e-03, -2.1469e-02, ..., 1.3527e-02,\r\n 2.8290e-02, -8.9111e-03]], device='cuda:0', dtype=torch.float16), FakeTensor(FakeTensor(..., device='meta', size=(1, 111), dtype=torch.int64), cuda:0), 31999), **{}) \r\n```\r\nMy code is really simple:\r\n```python\r\nimport os\r\nimport sys\r\n\r\nimport fire\r\nimport torch\r\nimport transformers\r\nfrom peft import PeftModel\r\nfrom transformers import GenerationConfig, LlamaForSequenceClassification, LlamaTokenizer\r\n\r\nfrom utils.callbacks import Iteratorize, Stream\r\nfrom utils.prompter import Prompter\r\nfrom datasets import load_dataset\r\nfrom sklearn.metrics import precision_score, recall_score, f1_score\r\nimport csv\r\nimport gradio as gr\r\nfrom peft import (\r\n LoraConfig,\r\n get_peft_model,\r\n get_peft_model_state_dict,\r\n prepare_model_for_int8_training,\r\n set_peft_model_state_dict,\r\n)\r\nif torch.cuda.is_available():\r\n device = \"cuda\"\r\nelse:\r\n device = \"cpu\"\r\n\r\ntry:\r\n if torch.backends.mps.is_available():\r\n device = \"mps\"\r\nexcept: # noqa: E722\r\n pass\r\n\r\n\r\ndef main(\r\n load_8bit: bool = True, \r\n base_model: str = \"/home/fyli/pretrain/llama-7b-hf\",\r\n lora_weights: str = \"/home/fyli/alpaca-lora/trained_weight\",\r\n prompt_template: str = \"\", # The prompt template to use, will default to alpaca.\r\n data_path: str = \"/home/fyli/datasets/machamp/rel-heter/test.json\",\r\n):\r\n base_model = base_model or os.environ.get(\"BASE_MODEL\", \"\")\r\n assert (\r\n base_model\r\n ), \"Please specify a --base_model, e.g. 
--base_model='huggyllama/llama-7b'\"\r\n\r\n tokenizer = LlamaTokenizer.from_pretrained(base_model)\r\n\r\n if device == \"cuda\":\r\n model = LlamaForSequenceClassification.from_pretrained(\r\n base_model,\r\n load_in_8bit=load_8bit,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n )\r\n model = PeftModel.from_pretrained(\r\n model,\r\n lora_weights,\r\n torch_dtype=torch.float16,\r\n )\r\n else:\r\n assert False, \"CUDA is not available.\"\r\n print('[INFO]model: '+str(model)+\"[INFO]\\n\")\r\n\r\n # unwind broken decapoda-research config\r\n model.config.pad_token_id = tokenizer.pad_token_id = 0 # unk\r\n model.config.bos_token_id = 1\r\n model.config.eos_token_id = 2\r\n\r\n if not load_8bit:\r\n model.half() # seems to fix bugs for some users.\r\n\r\n model.eval()\r\n if torch.__version__ >= \"2\" and sys.platform != \"win32\":\r\n model = torch.compile(model)\r\n\r\n def evaluate(\r\n instruction,\r\n input=None,\r\n **kwargs,\r\n ):\r\n full_prompt = input+instruction\r\n tokenizer = LlamaTokenizer.from_pretrained(base_model)\r\n tokenizer_full_prompt = tokenizer(full_prompt, return_tensors=\"pt\").to(\"cuda:0\")\r\n # input_ids = tokenizer_full_prompt[\"input_ids\"].to(device)\r\n\r\n with torch.no_grad():\r\n output = model(**tokenizer_full_prompt) # error occur here\r\n \r\n print(\"\\n[INFO]output: \"+str(output))\r\n\r\n\r\n Test = load_dataset('json',data_files=data_path)\r\n for test in Test[\"train\"]:\r\n gen = evaluate(test[\"instruction\"], test[\"input\"])\r\n print(\"\\n[INFO] gen: \"+str(gen))\r\n\r\nif __name__ == \"__main__\":\r\n fire.Fire(main)\r\n```\r\nAny help would be greatly appreciated. Thanks", "@paulthewineguy can you open a different issue for this? It is not related. You can also try to get help on[ the forum ](https://discuss.huggingface.co/) as it seems you need help debugging your code. ", "I have the same problem when trying to run_glue.py (for text classification) using the Lllam 7b, loaded with load_in_8bit=True and also setting training_args.fp16 = True" ]
1,689
1,693
1,690
NONE
null
### System Info @ArthurZucker and @younesbelkada I am trying to perform sequence classification for text using the LLaMA 7B model with LoRA training. I have 2 classes. The tokenizer and model are loading fine, but the loss is zero after the first batch; when I check the logits of the model outputs, they are NaN. I am getting 'NaN' loss after the first batch. Experiments tried (but did not work): - Tried gradient clipping by value and by norm (values from 1.0 to 5.0) ```torch.nn.utils.clip_grad_value_(model.parameters(), 5.0)``` - Tried changing the learning rate too - Tried loading in 8-bit and float16 Any help would be greatly appreciated. Thanks ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Starting sequence classification training using the above code. ### Expected behavior I see the loss calculated for the first batch only. From the next batch onward the logits become NaN, and in turn the loss and everything else are NaN
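For reference, a minimal sketch of the kind of setup described above, loading in bfloat16 as suggested in the comments above; the checkpoint path, LoRA hyperparameters, and padding choice are placeholders rather than the exact values used:

```python
import torch
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForSequenceClassification, LlamaTokenizer

base = "path/to/llama-7b-hf"  # placeholder: local path or Hub id of the LLaMA 7B checkpoint

tokenizer = LlamaTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = LlamaForSequenceClassification.from_pretrained(
    base,
    num_labels=2,
    torch_dtype=torch.bfloat16,  # bf16 instead of fp16/int8, per the discussion above
)
model.config.pad_token_id = tokenizer.pad_token_id

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                          # placeholder hyperparameters
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["score"],    # keep the new classification head trainable
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```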
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24731/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24731/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24730/comments
https://api.github.com/repos/huggingface/transformers/issues/24730/events
https://github.com/huggingface/transformers/pull/24730
1,796,981,961
PR_kwDOCUB6oc5VGSFg
24,730
Docs: change some `input_ids` doc references from `BertTokenizer` to `AutoTokenizer`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do? As the title indicates. We are doing it in most places, but there were a few places with the old pattern. (detected it as part of https://github.com/huggingface/transformers/issues/24575)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24730", "html_url": "https://github.com/huggingface/transformers/pull/24730", "diff_url": "https://github.com/huggingface/transformers/pull/24730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24730.patch", "merged_at": 1689008247000 }
https://api.github.com/repos/huggingface/transformers/issues/24729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24729/comments
https://api.github.com/repos/huggingface/transformers/issues/24729/events
https://github.com/huggingface/transformers/pull/24729
1,796,978,421
PR_kwDOCUB6oc5VGRS8
24,729
Docs: Update logit processors __call__ docs
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do? PR done as part of https://github.com/huggingface/transformers/issues/24575 This PR polishes the inexistent `__call__` method docs for the logit processors (before this PR, only the base classes had docs).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24729/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24729", "html_url": "https://github.com/huggingface/transformers/pull/24729", "diff_url": "https://github.com/huggingface/transformers/pull/24729.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24729.patch", "merged_at": 1689160891000 }
https://api.github.com/repos/huggingface/transformers/issues/24728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24728/comments
https://api.github.com/repos/huggingface/transformers/issues/24728/events
https://github.com/huggingface/transformers/issues/24728
1,796,931,495
I_kwDOCUB6oc5rGv-n
24,728
Saving with Trainer missing config.json and tokenizer files.
{ "login": "dumpmemory", "id": 64742282, "node_id": "MDQ6VXNlcjY0NzQyMjgy", "avatar_url": "https://avatars.githubusercontent.com/u/64742282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dumpmemory", "html_url": "https://github.com/dumpmemory", "followers_url": "https://api.github.com/users/dumpmemory/followers", "following_url": "https://api.github.com/users/dumpmemory/following{/other_user}", "gists_url": "https://api.github.com/users/dumpmemory/gists{/gist_id}", "starred_url": "https://api.github.com/users/dumpmemory/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dumpmemory/subscriptions", "organizations_url": "https://api.github.com/users/dumpmemory/orgs", "repos_url": "https://api.github.com/users/dumpmemory/repos", "events_url": "https://api.github.com/users/dumpmemory/events{/privacy}", "received_events_url": "https://api.github.com/users/dumpmemory/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@pacman100 ", "Hi @dumpmemory !\r\n\r\nCould you verify if any of the 3 places `self._save` in the code snippet below is triggered?\r\n\r\n(line 2742, 2753, 2762)\r\n\r\nhttps://github.com/huggingface/transformers/blob/25411085647a4dbcbd4e7ba6f381881a3e49c33e/src/transformers/trainer.py#L2734-L2762\r\n\r\n", "> Hi @dumpmemory !\r\n> \r\n> Could you verify if any of the 3 places `self._save` in the code snippet below is triggered?\r\n> \r\n> (line 2742, 2753, 2762)\r\n> \r\n> https://github.com/huggingface/transformers/blob/25411085647a4dbcbd4e7ba6f381881a3e49c33e/src/transformers/trainer.py#L2734-L2762\r\n\r\nI can check it later. currently my training is using zero3 and multi gpus setting ", "It would be nice if you can check the execution flow at this place, to see if any `self._save` is triggered 🙏 or not (and why). Thank you! No worry, we can wait :-)", "> It would be nice if you can check the execution flow at this place, to see if any `self._save` is triggered 🙏 or not (and why). Thank you! No worry, we can wait :-)\r\n\r\n I’m checking now. My transformer code base may not be the newest one.", "I am using save_16bit_model=true", "I almost found the reason and I will update my code base to the current main base and test again. ", "after update to the main commit code base for accelerate and transformers. it was fixed ", "hi, @dumpmemory @ydshieh \r\ntrainer will not save tokenizer and config.json when training in deepspeed-**zero3** with `stage3_gather_16bit_weights_on_model_save=False`.\r\n\r\nline 2776 will `raise ValueError`, so line 2778 `self._save` never run to save tokenizer and other stuff. is this expected behavior?\r\n\r\nhttps://github.com/huggingface/transformers/blob/d4bd33cc9f11ca48635e54983d75249c78d72e2a/src/transformers/trainer.py#L2771-L2784\r\n\r\n" ]
1,689
1,691
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - - `Accelerate` version: 0.20.3 - Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.35 - Python version: 3.10.6 - Numpy version: 1.22.2 - PyTorch version (GPU?): 2.0.0 (True) - PyTorch XPU available: False - System RAM: 1877.62 GB - GPU type: NVIDIA H800 - `Accelerate` default config: Not found - [2023-07-10 14:40:30,136] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-dev package with apt [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] cpu_adagrad ............ [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0 [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] torch version .................... 2.0.0 deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] deepspeed info ................... 0.9.5, unknown, unknown torch cuda version ............... 12.1 torch hip version ................ None nvcc version ..................... 12.1 deepspeed wheel compiled w. ...... torch 2.0, cuda 12.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run run_clm.py i have add one Callback as following to trainfer. with deepspeed zero3 enable. 
``` class CheckPointFinishCallBack(TrainerCallback): def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs): # Save model checkpoint checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}" log_file = os.path.join(args.output_dir,"checkpoint_saved") with open(log_file,"w") as writer: writer.write(checkpoint_folder) ``` ### Expected behavior Save the model config.json file and the tokenizer files.
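For context, the ZeRO-3 option mentioned in the comments above lives in the DeepSpeed config that is passed to the Trainer; a minimal, incomplete sketch of the relevant block (all other DeepSpeed settings are omitted, and the values are only an illustration):

```python
ds_config = {
    "zero_optimization": {
        "stage": 3,
        # When this is False, the Trainer cannot gather a consolidated 16-bit
        # state_dict at save time, which is where the missing-files behaviour
        # discussed in the comments comes in.
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    # ... remaining ZeRO-3 / optimizer / scheduler settings ...
}
```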
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24728/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24727/comments
https://api.github.com/repos/huggingface/transformers/issues/24727/events
https://github.com/huggingface/transformers/issues/24727
1,796,541,319
I_kwDOCUB6oc5rFQuH
24,727
Add "save_best_only" parameter in "transformers.PushToHubCallback" class
{ "login": "MUmairAB", "id": 120496694, "node_id": "U_kgDOBy6iNg", "avatar_url": "https://avatars.githubusercontent.com/u/120496694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MUmairAB", "html_url": "https://github.com/MUmairAB", "followers_url": "https://api.github.com/users/MUmairAB/followers", "following_url": "https://api.github.com/users/MUmairAB/following{/other_user}", "gists_url": "https://api.github.com/users/MUmairAB/gists{/gist_id}", "starred_url": "https://api.github.com/users/MUmairAB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MUmairAB/subscriptions", "organizations_url": "https://api.github.com/users/MUmairAB/orgs", "repos_url": "https://api.github.com/users/MUmairAB/repos", "events_url": "https://api.github.com/users/MUmairAB/events{/privacy}", "received_events_url": "https://api.github.com/users/MUmairAB/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "cc @sgugger to see if we want to support the **save/push only the best model during training**. It seems the trainer currently only support load the best model at the end (with a specified metric).", "The whole goal of pushing the model to the hub is to be able to resume training from a different machine if there is a problem. If we push only the best model while checkpointing, this is not going to be possible anymore.\r\n\r\nNote that the best model will be pushed at the end of the training, so you will have the correct result once the training is finished.", "Thank you for your consideration and feedback on my feature request.\r\n\r\nI understand the goal of pushing the model to the hub using **Transformers.PushToHubCallback()** is to enable resuming training from a different machine if necessary. I appreciate the point you made about the potential impact on that capability if only the best model is saved and pushed during training.\r\n\r\nKeeping this in mind, would it be possible to explore a solution that balances both requirements?\r\n\r\n- Perhaps a configuration option that allows users to choose between saving and pushing only the best model, or \r\n- Saving and pushing the best model at specific intervals during training?\r\n\r\nThis way, users can have the flexibility to optimize storage and computational resources while still maintaining the ability to resume training from different machines if needed.\r\n\r\nThank you for your time and consideration. I look forward to hearing your thoughts on this matter.\r\n\r\n\r\n_**A workaround**_\r\n\r\n_I would like to state that even if the PushToHubCallback does not incorporate this feature, there is still a workaround if we are constrained by the bandwidth._\r\n\r\n_We should train the model without **PushToHubCallback**. In order to save the best model locally, use the Keras **save_best_only** callback as shown above. Finally, at the end of the training, we can use **model.push_to_hub()** to save the best model stored on the local machine to the Hub._\r\n\r\n_Nonetheless, it would be better if this feature is incorporated in the **Transformers.PushToHubCallback()**._" ]
1,688
1,694
1,694
NONE
null
### Feature request When utilizing Keras callbacks, we have the ability to specify when the model should be saved during training. The **transformers.PushToHubCallback()** class already incorporates similar functionality through the use of the **"save_strategy"** parameter. This parameter accepts the following values: - "no": Saving is performed at the conclusion of training. - "epoch": Saving is performed at the end of each epoch. - "steps": Saving is performed every "save_steps" interval. However, these options do not take into account accuracy (or any other specified metric) improvement. In contrast, the Keras callback provides the **"save_best_only"** parameter, which exclusively saves the model when there is an enhancement in accuracy or the specified metric. The code snippet below demonstrates its usage: ``` #Define the callback callbacks = [ keras.callbacks.ModelCheckpoint( filepath="directory_name/model.keras", monitor="val_loss", save_best_only=True, ) ] #Start the training history = model.fit( train_dataset, epochs=5, validation_data=validation_dataset, monitor="val_loss", callbacks=callbacks) ``` The model mentioned above will undergo training for a total of 5 epochs. However, the model will only be saved when there is an improvement in the "validation loss" metric. **transformers.PushToHubCallback()** class must incorporate this feature as well. ### Motivation This feature is indeed quite valuable, and it is readily accessible through [Keras callbacks](https://keras.io/api/callbacks/model_checkpoint/#:~:text=save_best_only%3A%20if%20save_best_only%3DTrue%20%2C,by%20each%20new%20better%20model.). By utilizing this feature, significant processing power and bandwidth can be saved, particularly when dealing with large transformers models. It ensures that only the best-performing models, based on the specified metric (such as validation loss), are saved, resulting in more efficient storage and reduced computational resources. ### Your contribution This [source code](https://github.com/keras-team/keras/blob/v2.12.0/keras/callbacks.py) can be helpful.
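Until something like this exists in `PushToHubCallback`, a rough sketch of the workaround mentioned in the comments above is to checkpoint only the best weights locally with Keras and push a single snapshot at the end (the model, datasets, and repo id below are placeholders):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(optimizer="adam")  # transformers TF models can compute their loss internally

checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="best_ckpt",    # local TF checkpoint prefix, never pushed during training
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)

model.fit(
    train_dataset,                       # placeholder tf.data.Dataset
    validation_data=validation_dataset,  # placeholder tf.data.Dataset
    epochs=5,
    callbacks=[checkpoint_cb],
)

# Restore the best weights and push them to the Hub once, at the end of training.
model.load_weights("best_ckpt")
model.push_to_hub("my-username/my-best-model")  # placeholder repo id
```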
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24727/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24726/comments
https://api.github.com/repos/huggingface/transformers/issues/24726/events
https://github.com/huggingface/transformers/pull/24726
1,796,152,718
PR_kwDOCUB6oc5VDcwN
24,726
[`T5`, `MT5`, `UMT5`] Add [T5, MT5, UMT5]ForSequenceClassification
{ "login": "sjrl", "id": 10526848, "node_id": "MDQ6VXNlcjEwNTI2ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sjrl", "html_url": "https://github.com/sjrl", "followers_url": "https://api.github.com/users/sjrl/followers", "following_url": "https://api.github.com/users/sjrl/following{/other_user}", "gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}", "starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sjrl/subscriptions", "organizations_url": "https://api.github.com/users/sjrl/orgs", "repos_url": "https://api.github.com/users/sjrl/repos", "events_url": "https://api.github.com/users/sjrl/events{/privacy}", "received_events_url": "https://api.github.com/users/sjrl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Quick comment, we can probably add this to UMT5 too (you don't have to if it take too much time)\r\n\r\nFor sure, I'd be happy to!", "Hey @sgugger thanks for the feedback!\r\n\r\n> Hi, thanks for your PR! This does not follow the pattern of BartForSequenceClassification, or any other classification model of the library: \r\n\r\nI agree! I ran into the same problem when implementing `T5ForQuestionAnswering`. I opted to follow the implementation used for `T5ForConditionalGeneration` which also does not use the `BaseModel` and instead reimplements the encoder and decoder. \r\n\r\nI can go ahead and try and use the BaseModel for the SequenceClassification model, but its probably worth doing a refactor of T5 to use the BaseModel for the other models as well (e.g. ConditionalGeneration and QuestionAnswering). What do you think? ", "We can't change existing models without risking massive breaking changes (users wouldn't be able to re-use their checkpoints directly, I think `from_pretrained` would still work though). But that doesn't mean we shouldn't do the right thing for new models! So if you could try it the usual way, that would be great!", "And thanks for the feedback! ", "> # What does this PR do?\r\n> This adds a sequence classification head to the PyTorch implementation of T5 and MT5, following the pattern of BartForSequenceClassification since it is also an encoder-decoder sequence classification model.\r\n> \r\n> I have trained and uploaded a flan-t5-base for MNLI [here](https://huggingface.co/sjrhuschlee/flan-t5-base-mnli) which has shown promising results on the dataset.\r\n> \r\n> I've updated the model tests to include the new model and I believe I found hopefully most of the additional imports and compatibility with the text-classification and zero-shot classification pipelines.\r\n> \r\n> **NOTE:**\r\n> \r\n> * [x] Help with failing tests\r\n> * I found a number of tests are failing and I have linked it to the fact that `T5ForSequenceClassification` (and also `BartForSequenceClassification`) expect the `input_ids` and `decoder_input_ids` to have the same sequence length which they do not for the T5 tests (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/t5/test_modeling_t5.py#L104-L106\r\n> \r\n> where `encoder_seq_length != decoder_seq_length`\r\n> * Whereas they do have the same sequence length for the `BartModelTest` (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/bart/test_modeling_bart.py#L126-L133\r\n> * Here are the lines of code in `BartForSequenceClassification` (and the `T5` versions) that cause an error when the encoder and decoder sequence lengths are different\r\n> https://github.com/huggingface/transformers/blob/caf5e369fc7b4755d9f98568cbe5e36a0898c96c/src/transformers/models/bart/modeling_bart.py#L1546-L1554\r\n> \r\n> The `eos_mask` has the wrong shape to be properly cast onto the `hidden_states` since the `eos_mask` shape is linked to the encoder sequence length and the `hidden_states` shape is linked to the decoder sequence length.\r\n> \r\n> Would it be okay to change the T5 tests such that the decoder and encoder input_ids have the same sequence length to get the tests to pass?\r\n> \r\n> ## Before submitting\r\n> * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).\r\n> * [x] Did you read the 
[contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),\r\n> Pull Request section?\r\n> * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link\r\n> to it if that's the case.\r\n> * [x] Did you make sure to update the documentation with your changes? Here are the\r\n> [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and\r\n> [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).\r\n> * [x] Did you write any new necessary tests?\r\n> \r\n> ## Who can review?\r\n> Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.\r\n> \r\n> Hey @ArthurZucker and @younesbelkada I would greatly appreciate a review on this when you have a chance.\r\n\r\nThanks for your work! Could you please give an example on how to use T5ForSequenceClassification on sst-2/sst-5 dataset,especially how to use the tokenizer? I have try but cannot make it. Thanks!", "@sjrl Thanks for your work! Could you please give an example on how to use T5ForSequenceClassification on sst-2/sst-5 dataset,especially how to use the tokenizer? I have try but cannot make it. Thanks!", "Hi @dongdongzhaoUP, \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports." ]
1,688
1,694
1,690
CONTRIBUTOR
null
# What does this PR do? This adds a sequence classification head to the PyTorch implementation of T5 and MT5, following the pattern of BartForSequenceClassification since it is also an encoder-decoder sequence classification model. I have trained and uploaded a flan-t5-base for MNLI [here](https://huggingface.co/sjrhuschlee/flan-t5-base-mnli) which has shown promising results on the dataset. I've updated the model tests to include the new model and I believe I found hopefully most of the additional imports and compatibility with the text-classification and zero-shot classification pipelines. **NOTE:** - [x] Help with failing tests - I found a number of tests are failing and I have linked it to the fact that `T5ForSequenceClassification` (and also `BartForSequenceClassification`) expect the `input_ids` and `decoder_input_ids` to have the same sequence length which they do not for the T5 tests (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/t5/test_modeling_t5.py#L104-L106 where `encoder_seq_length != decoder_seq_length` - Whereas they do have the same sequence length for the `BartModelTest` (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/bart/test_modeling_bart.py#L126-L133 - Here are the lines of code in `BartForSequenceClassification` (and the `T5` versions) that cause an error when the encoder and decoder sequence lengths are different https://github.com/huggingface/transformers/blob/caf5e369fc7b4755d9f98568cbe5e36a0898c96c/src/transformers/models/bart/modeling_bart.py#L1546-L1554 The `eos_mask` has the wrong shape to be properly cast onto the `hidden_states` since the `eos_mask` shape is linked to the encoder sequence length and the `hidden_states` shape is linked to the decoder sequence length. Would it be okay to change the T5 tests such that the decoder and encoder input_ids have the same sequence length to get the tests to pass? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Hey @ArthurZucker and @younesbelkada I would greatly appreciate a review on this when you have a chance.
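For the usage question raised in the comments, a rough, untested sketch of plain inference with the new head (the checkpoint name is the MNLI model linked above; the label mapping depends on that checkpoint's config):

```python
import torch
from transformers import AutoTokenizer, T5ForSequenceClassification

name = "sjrhuschlee/flan-t5-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = T5ForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A soccer game with multiple males playing.",  # premise
    "Some men are playing a sport.",               # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # decoder_input_ids are derived from input_ids when not provided
print(model.config.id2label[int(logits.argmax(-1))])
```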
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24726/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24726", "html_url": "https://github.com/huggingface/transformers/pull/24726", "diff_url": "https://github.com/huggingface/transformers/pull/24726.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24726.patch", "merged_at": 1690311769000 }
https://api.github.com/repos/huggingface/transformers/issues/24725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24725/comments
https://api.github.com/repos/huggingface/transformers/issues/24725/events
https://github.com/huggingface/transformers/issues/24725
1,795,833,413
I_kwDOCUB6oc5rCj5F
24,725
Sum loss instead of mean loss should be used if gradient accumulation step is larger than 1 when training a language model
{ "login": "Atry", "id": 601530, "node_id": "MDQ6VXNlcjYwMTUzMA==", "avatar_url": "https://avatars.githubusercontent.com/u/601530?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Atry", "html_url": "https://github.com/Atry", "followers_url": "https://api.github.com/users/Atry/followers", "following_url": "https://api.github.com/users/Atry/following{/other_user}", "gists_url": "https://api.github.com/users/Atry/gists{/gist_id}", "starred_url": "https://api.github.com/users/Atry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Atry/subscriptions", "organizations_url": "https://api.github.com/users/Atry/orgs", "repos_url": "https://api.github.com/users/Atry/repos", "events_url": "https://api.github.com/users/Atry/events{/privacy}", "received_events_url": "https://api.github.com/users/Atry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Atry \r\n\r\nYour description is correct. However, the loss logic is implemented in each model classes, and therefore it could not see multiple batches in a single model forward pass (and that's probably the main reason for which we just simply use `mean`).\r\n\r\nThe best and easy way to have a correct computation if to modify the trainer class to compute back, given the loss from model output, compute the sum of losses in a batch (by considering the sequence length, or total number of tokens that is meaningful - i.e. not padding token etc.), and send this new custom loss values to compute the gradients then accumulate it.", "Computing back the gradient would damage the precision if the gradient is in `fp16`.", "An idea is to switch all models to `sum` loss and create a custom `GradientScaler` to count the number of trainable tokens.", "By the way there is another example of the issue in `mean` loss. Suppose you have batch size 33, 1 epoch, a data set of 100 samples, then the last iteration will have only 1 sample and the gradient produced by the last sample is 33 times larger than other samples'.", "> switch all models to sum loss \r\n\r\nThis would be a big breaking change, and would not be an option.\r\n\r\n> Computing back the gradient would damage the precision if the gradient is in fp16\r\n\r\nI would not think it will produce a big difference, if at the end, we still use some form of mean after we accumulate (sum) all the gradients (saying divided by the total number of non-padding tokens appear in all the batches in a gradient accumulation).\r\n\r\nWhen the loss is computed by sum in a batch, it actually requires specific work to perform to get back to the usual definition of that loss (say the average non-padding token loss) when we sum over all batches.\r\n\r\n(Here I only say non-padding token. But loss definition could get very complex depending on the tasks and the specific models)\r\n\r\n", "As studied in https://arxiv.org/abs/1711.00489, changing batch size would have a side effect to also change learning rate per sample (and learning rate per token) even when the learning rate per iteration is unchanged. However their analysis to their experiment result is non-sense. The actual explanation is that the side effect is just due to the mean loss. Sum loss would not lead to the side effect.\r\n", "If you are not happy with the loss computation inside the model, you can just not pass the `labels` to the model and compute it yourself outside of the forward pass. Note that all of our examples account for gradient accumulation by dividing the final loss by the number of gradient accumulation steps.\r\n\r\nAs @ydshieh mentioned, a breaking change across all models of this magnitude is not possible.", "Good idea! I wonder if the `Trainer` can fix this loss issue by not passing `labels`, too.", "The Trainer already does divide the loss by the number of gradient accumulation steps and there are tests in the CI to ensure training with batch size X and batch size X / g gradient accumulation steps g yield the same results.", "Suppose you have a dataset of two samples used in unsupervised learning against a decoder-only language model, sample 1 contains 11 tokens, sample 2 contains 101 tokens, when training at batch size 1 without padding, the `mean` loss of sample 1 is 0.1 and the `mean` loss of sample 2 is 0.9, then mathematically what's your expected loss when the batch size is 2? 
\r\n\r\nIn current `transformers` implementation:\r\n- when gradient accumulation step is 1 and batch size is 2, padding to sequence length 101, the loss would be `(0.1*10+0.9*100)/(10+100)=0.82727`\r\n- when gradient accumulation step is 2 and batch size is 1, no padding, the loss would be `(0.1+0.9)/2=0.5`. \r\n\r\nIMHO ideally the loss should be 0.82727", "> when gradient accumulation step is 1 and batch size is 2, padding to sequence length 101, the loss would be (0.1*10+0.9*100)/(100*2)=0.455\r\n\r\nwhere does `100*2` come from in the denominator?", "I believe in `transformers` we do take care of the padding token.\r\n\r\nIf you find a HF causal LM model that has a loss computation (in the model forward) that doesn't take care of the padding token, please let us know. 🙏 ", "You are right. I misunderstood the implementation. I just updated my previous comments. Thank you!", "Thanks!\r\n\r\nAs mentioned earlier:\r\n\r\n- you can either compute back the sum from the mean\r\n- but as you don't like the precision loss in fp16 if using the above way, you can choose not to pass the labels to the model forward, and compute the actual sum.\r\n\r\nBut \r\n\r\n - (*) you need to modify a bit the code `to not to divide by the accumulation step 2`, but the total number of non-padding tokens seen in all the batches during that gradient accumulation\r\n - this necessary change (*) is not possible to be done in the model forward, no matter if we return `mean` or `sum` in forward pass.", "I confronted the same issue. The gradient accumulation's result is much worse than using a large batch size (per device). \r\n\r\nThe main reason that I assume is probably that the gradient accumulation macro-averages the loss scores, but they should be micro-averaged.\r\n\r\nI think this problem is so critical that it affects the result a lot for LMs (variable lengths across batches). Otherwise, the training result must be suboptimal.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
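A minimal sketch (not part of the original thread) of the workaround the maintainers suggest above: skip the model's built-in mean loss by not passing `labels`, accumulate the *sum* of per-token losses, and normalize once by the total number of non-padding target tokens seen in the whole accumulation window. The model, optimizer, batch, and pad-token names are placeholders, and a causal LM head returning `logits` is assumed.

```python
import torch
import torch.nn.functional as F

def accumulation_step(model, optimizer, micro_batches, pad_token_id):
    """One optimizer step over several micro-batches, normalized per non-padding token."""
    total_tokens = 0
    for batch in micro_batches:
        input_ids = batch["input_ids"]
        logits = model(input_ids=input_ids, attention_mask=batch["attention_mask"]).logits
        # causal shift: predict token t+1 from tokens <= t
        shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
        shift_labels = input_ids[:, 1:].reshape(-1)
        mask = shift_labels.ne(pad_token_id)
        per_token = F.cross_entropy(shift_logits, shift_labels, reduction="none")
        per_token[mask].sum().backward()  # accumulate a *sum* of gradients
        total_tokens += int(mask.sum())
    # normalize once over the whole window instead of taking a per-batch mean
    for p in model.parameters():
        if p.grad is not None:
            p.grad.div_(max(total_tokens, 1))
    optimizer.step()
    optimizer.zero_grad()
```

With this normalization, accumulation step 2 / batch size 1 and accumulation step 1 / batch size 2 produce the same gradient, which is the behaviour the issue asks for.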
1,688
1,695
1,693
NONE
null
### System Info Not applicable, because this is a design issue, not a runtime error. ### Who can help? @sgugger, @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Given gradient accumulation step 2, batch size 1, and a training set of 2 samples, where sample 1 contains 11 tokens and sample 2 contains 101 tokens, train a decoder-only model with unsupervised learning (the first token in each sample is untrainable); the gradient will then be different from training on the same dataset and model at gradient accumulation step 1, batch size 2. The reason is that `transformers` currently uses mean loss for most models (if not all); as a result, each token in sample 1 would produce a gradient 10 times larger than that of each token in sample 2. ### Expected behavior Settings of accumulation step 2 / batch size 1 should produce the same gradient as settings of accumulation step 1 / batch size 2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24725/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24724/comments
https://api.github.com/repos/huggingface/transformers/issues/24724/events
https://github.com/huggingface/transformers/issues/24724
1,795,744,066
I_kwDOCUB6oc5rCOFC
24,724
New Version Usage Issue
{ "login": "Excuses123", "id": 22993056, "node_id": "MDQ6VXNlcjIyOTkzMDU2", "avatar_url": "https://avatars.githubusercontent.com/u/22993056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Excuses123", "html_url": "https://github.com/Excuses123", "followers_url": "https://api.github.com/users/Excuses123/followers", "following_url": "https://api.github.com/users/Excuses123/following{/other_user}", "gists_url": "https://api.github.com/users/Excuses123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Excuses123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Excuses123/subscriptions", "organizations_url": "https://api.github.com/users/Excuses123/orgs", "repos_url": "https://api.github.com/users/Excuses123/repos", "events_url": "https://api.github.com/users/Excuses123/events{/privacy}", "received_events_url": "https://api.github.com/users/Excuses123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here's another question, in the new version of the Transformers package, the default loaded model by from_pretrained has become safeTensors. How can I change it to pytorch.bin? Is there any parameter I can specify?", "Hi @Excuses123, thanks for raising this issue.\r\n\r\nWithout knowing the model or dataset, we're unable to reproduce and won't be able to debug this issue. Is there a minimal reproducible snippet with a public dataset and model checkpoint where this issue (increase memory footprint) still occurs and you could share?\r\n\r\nTo force the model to not load safetensor weights you can pass `use_safetensors=False` in the `from_pretrained` call", "@amyeroberts Thank you for your response.\r\n\r\nI am using the model: \r\n[bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1)\r\n\r\nThe data can be found at: https://huggingface.co/datasets/BelleGroup/train_0.5M_CN/blob/main/Belle_open_source_0.5M.json\r\n\r\nBelow is the execution script:\r\n\r\n```\r\ntorchrun --nproc_per_node=4 --master_port=12345 train.py \\\r\n --model_name_or_path bigscience/bloomz-1b1 \\\r\n --cache_dir /workspace/pretrain_model/bloomz \\\r\n --output_dir /workspace/finetune_model/bloomz/bloomz_1b1_sft \\\r\n --data_path /workspace/datasets/Belle_train_0.5M_CN/Belle_open_source_0.5M.json \\\r\n --fp16 True \\\r\n --num_train_epochs 1 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 32 \\\r\n --model_max_length 512 \\\r\n --evaluation_strategy \"no\" \\\r\n --save_strategy \"steps\" \\\r\n --save_steps 2000 \\\r\n --save_total_limit 1 \\\r\n --learning_rate 2e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.03 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"full_shard auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'BloomBlock' \\\r\n --report_to \"tensorboard\"\r\n```\r\n\r\nAfter testing, The maximum version that can currently run is 4.29.2, and all versions after that cannot run.", "I guess it might be caused by FSDP (Fully Sharded Data Parallelism), but I'm not sure.", "@Excuses123 Have you tried running without FDSP? Which version of accelerate are you running?", "@amyeroberts I have tried it, and without FSDP, both the new and old versions of transformers throw an OOM error. My accelerate version is 0.20.3.", "> both the new and old versions of transformers throw an OOM error.\r\n\r\n@Excuses123 Is this including versions <= 4.29.2 ? ", "@amyeroberts I have tried version 4.29.0 and it works", "@Excuses123 OK, thanks for confirming. \r\n\r\nCould you:\r\n* Format the code example so that all of the code is in markdown code blocks: ` ``` code goes here ``` ` \r\n* Try on the most recent version of transformers, [installing from source](https://huggingface.co/docs/transformers/installation#install-from-source)?\r\n* Share the versions of datasets being used? ", "@amyeroberts I have fixed the code formatting, and the version of my datasets is 2.11.0. My machine is currently running a task, and as soon as it is finished, I will try the latest version.", "Facing the same issue. Code ran smoothly with transformers==4.28.1 but OOM with transformers==4.30.2", "@Excuses123 @larrylawl OK, thanks for the information and updates. \r\n\r\nI'm going to cc @pacman100 and @younesbelkada who know more about training in fp16 and torchrun ", "I can confirm this. It is a bug introduced recently. 
It can be reproduced by the Vicuna training [example](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus).\r\nThe script works well for 4.28.1 but hits OOM with 4.31.0.\r\n\r\nWith 4.31.0, the warning is\r\n```\r\nFSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer\r\nFSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening.\r\n```\r\n\r\nTo fix it, I followed the [guide](https://huggingface.co/docs/accelerate/usage_guides/fsdp#a-few-caveats-to-be-aware-of) and changed these lines (https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L1646-L1661) to\r\n```python3\r\n model = self.accelerator.prepare(model)\r\n if delay_optimizer_creation:\r\n self.create_optimizer_and_scheduler(num_training_steps=max_steps)\r\n self.optimizer = self.accelerator.prepare(self.optimizer)\r\n```\r\nThen the warnings and OOM disappeared.\r\n\r\n@pacman100 @younesbelkada I think my fix is a hack that only works for my case. Could you do a more complete fix in the main branch?", "Hello @Ying1123, Thank you for the detailed info, very helpful. Could you please try out the above PRs for accelerate and transformers and see if it fixes the OOM? ", "> Hello @Ying1123, Thank you for the detailed info, very helpful. Could you please try out the above PRs for accelerate and transformers and see if it fixes the OOM?\r\n\r\nThanks @pacman100, cherry-pick the PRs for transformers v4.31.0 and accelerate v0.21.0 works for me.", "@pacman100 Hi, I am still getting out-of-memory issues with the latest main.\r\nWith transformer==4.28.1, the vicuna-7b [example](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus) can run on 4xA100 (40GB) without any issues.\r\n\r\nAfter accelerate is used for FSDP (from v4.30 - the current main), the example hits OOM.\r\nBefore your fix, the example hits OOM immediately. After your fix, the example hits OOM after a few batches.\r\n\r\nFrom these observations, I can confirm that the recent refactoring makes the memory usage higher than the older version but I do not know how to debug because I am not familiar with Accelerate.\r\nCould you do more testing and help us fix it? This blocks us from updating transformers to the latest version.", "Hello @merrymercy, can you post the vram usage with the 4.28 version?", "Hi @pacman100 @Ying1123 , I meet the same issus: OOM ; And I revised my tranfomers to 4.31.0 or 4.30.0 and accelerate=0.21.0, all these are not worked ! \r\nOn 2 x A6000 48G, fine-tuning LLaMA 7B\r\nWith transformer=4.31.0, accelerate=0.22.0.dev0 (latest main), the warning is:\r\n```\r\nFutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead\r\nFSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer.\r\nFSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening.\r\n```\r\n\r\nAnd my fsdp are:\r\n```\r\n --fsdp \"full_shard auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \\\r\n```", "@pacman100 @Ying1123 And I found another way to add the fsdp_config.json can disappear the all follow warning :\r\n```\r\nFutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. 
Use fsdp_config instead\r\n```\r\nAnd [hacking method](https://github.com/huggingface/transformers/issues/24724#issuecomment-1645189539) can disappear:\r\n```\r\nFSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer.\r\nFSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening.\r\n```\r\nBut all these still hit on OOM !\r\nMy fsdp_config.json is:\r\n```\r\n{\r\n \"fsdp_auto_wrap_policy\": \"FULL_SHARD\",\r\n \"fsdp_transformer_layer_cls_to_wrap\": \"LlamaDecoderLayer\"\r\n}\r\n```\r\nI think there is better way to fix this. ", "I see same memory usage across versions for the following example:\r\n\r\n```\r\ncd transformers\r\n\r\nexport TASK_NAME=mrpc\r\n\r\ntorchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --fsdp \"full_shard auto_wrap\" --fsdp_transformer_layer_cls_to_wrap BertLayer --bf16\r\n```\r\n\r\nversion 4.28.1 - 5.4GB vram\r\nlatest main branch - 4.8GB vram\r\n\r\nPlease provide a minimal example that I can directly run without having to spend time in getting it to work.\r\n", "You mean the \r\ntransformers=the latest main branch; \r\naccelerate=0.21.0 ?", "Both Accelerate and Transformers main branch", "With both Accelerate and Transformers main branch works for me", "@Xuekai-Zhu did you fix the problem? i met the same oom as 2xA6000 with both main branch", "I confirm using @Ying1123 's hacking does not work for me. I have 4 A100 card, with `transformers==4.31.0, accelerator==0.21.0`. ", "due to this method. downgrade to transformer==4.28.1 worked for me\r\n> @pacman100 Hi, I am still getting out-of-memory issues with the latest main. With transformer==4.28.1, the vicuna-7b [example](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus) can run on 4xA100 (40GB) without any issues.\r\n> \r\n> After accelerate is used for FSDP (from v4.30 - the current main), the example hits OOM. Before your fix, the example hits OOM immediately. After your fix, the example hits OOM after a few batches.\r\n> \r\n> From these observations, I can confirm that the recent refactoring makes the memory usage higher than the older version but I do not know how to debug because I am not familiar with Accelerate. Could you do more testing and help us fix it? This blocks us from updating transformers to the latest version.\r\n\r\n", "I tried all the solution still getting OOM on A100 80GB", "If you still have an issue I suggest you to create a new issue, share a reproducer, a traceback and ping @pacman100, otherwise there is no way we can help you 😓 " ]
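As a side note to the thread above, the tip about forcing the `.bin` weights instead of safetensors looks roughly like this (a sketch; the checkpoint id is simply the one used in this issue):

```python
from transformers import AutoModelForCausalLM

# skip *.safetensors and load pytorch_model.bin instead
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-1b1",
    use_safetensors=False,
)
```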
1,688
1,694
1,689
NONE
null
### System Info - `transformers` version: 4.29.0 - Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ##Here is my code. ``` import os import logging from dataclasses import dataclass, field from typing import Dict, Optional, Sequence import torch import transformers from datasets import load_dataset, load_from_disk from transformers import ( AutoModelForCausalLM, AutoTokenizer, Trainer, DataCollatorForSeq2Seq, ) IGNORE_INDEX = -100 PROMPT_DICT = { "prompt_input": ( "### 指令:\n{instruction}\n\n### 输入:\n{input}\n\n### 回答:" ), "prompt_no_input": ( "### 指令:\n{instruction}\n\n### 回答:" ), } @dataclass class TrainingArguments(transformers.TrainingArguments): model_name_or_path: Optional[str] = field(default=None, metadata={"help": "模型名称"}) cache_dir: Optional[str] = field(default=None, metadata={"help": "模型地址"}) data_path: str = field(default=None, metadata={"help": "数据地址"}) mask_input: bool = field(default=True, metadata={"help": "是否遮掉指令,只计算回答的损失"}) model_max_length: int = field(default=512, metadata={"help": "最大序列长度"}) optim: str = field(default="adamw_torch", metadata={"help": "优化器"}) @dataclass class DataCollatorForSupervisedDataset(object): """Collate examples for supervised fine-tuning.""" tokenizer: transformers.PreTrainedTokenizer def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]: input_ids, labels = tuple([torch.tensor(instance[key]) for instance in instances] for key in ("input_ids", "labels")) input_ids = torch.nn.utils.rnn.pad_sequence( input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id ) labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX) return dict( input_ids=input_ids, labels=labels, attention_mask=input_ids.ne(self.tokenizer.pad_token_id), ) def train(): local_rank = int(os.environ["LOCAL_RANK"]) parser = transformers.HfArgumentParser(TrainingArguments) training_args, = parser.parse_args_into_dataclasses() if local_rank == 0: print(training_args) tokenizer = AutoTokenizer.from_pretrained( training_args.model_name_or_path, cache_dir=training_args.cache_dir, model_max_length=training_args.model_max_length, padding_side="right" ) model = AutoModelForCausalLM.from_pretrained( training_args.model_name_or_path, cache_dir=training_args.cache_dir, # torch_dtype=torch.float16 ) def generate_and_tokenize(sample): prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"] source = prompt_input.format_map(sample) if sample.get("input", "") != "" \ else prompt_no_input.format_map(sample) target = f"\n{sample['output']}{tokenizer.eos_token}" complete = source + target # </s> 1 2 3 : a b </s> complete_tokenized = tokenizer(complete, truncation=True, max_length=training_args.model_max_length) # </s> 1 2 3 : source_tokenized = tokenizer(source, truncation=True, max_length=training_args.model_max_length) if 
training_args.mask_input: source_len = len(source_tokenized['input_ids']) complete_tokenized['labels'] = [IGNORE_INDEX] * source_len + complete_tokenized['input_ids'][source_len:] else: complete_tokenized['labels'] = complete_tokenized['input_ids'].copy() return complete_tokenized tokenized_path = os.path.join(os.path.dirname(training_args.data_path), f"{training_args.model_name_or_path.split('/')[-1]}_tokenized") if not os.path.exists(tokenized_path): logging.warning("tokenized data not existed, tokenize data...") data = load_dataset("json", data_files=training_args.data_path) train_dataset = data['train'].shuffle().map(generate_and_tokenize, batched=False, remove_columns=["instruction", "input", "output"]) if local_rank == 0: train_dataset.save_to_disk(tokenized_path) else: logging.warning("tokenized data existed, load data...") train_dataset = load_from_disk(tokenized_path) # data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer) data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, label_pad_token_id=IGNORE_INDEX, pad_to_multiple_of=8) logging.warning("training...") trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator) trainer.train() trainer.save_state() trainer.save_model(output_dir=training_args.output_dir) tokenizer.save_pretrained(save_directory=training_args.output_dir) if __name__ == '__main__': train() ``` ### Expected behavior Has anyone encountered this problem? I used the same instruction fine-tuning code. It runs successfully with transformers package version 4.29.0, but when I upgrade to version 4.30.2, it fails to run and throws an OOM (Out of Memory) error. Does anyone know the reason behind this? Below is the GPU status during my successful run. ![image](https://github.com/huggingface/transformers/assets/22993056/47653653-0ec4-4d98-beab-101665dde0d1)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24724/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24724/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24723/comments
https://api.github.com/repos/huggingface/transformers/issues/24723/events
https://github.com/huggingface/transformers/issues/24723
1,795,409,801
I_kwDOCUB6oc5rA8eJ
24,723
install from source doesn't work
{ "login": "IdoTal120", "id": 139057336, "node_id": "U_kgDOCEnYuA", "avatar_url": "https://avatars.githubusercontent.com/u/139057336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IdoTal120", "html_url": "https://github.com/IdoTal120", "followers_url": "https://api.github.com/users/IdoTal120/followers", "following_url": "https://api.github.com/users/IdoTal120/following{/other_user}", "gists_url": "https://api.github.com/users/IdoTal120/gists{/gist_id}", "starred_url": "https://api.github.com/users/IdoTal120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IdoTal120/subscriptions", "organizations_url": "https://api.github.com/users/IdoTal120/orgs", "repos_url": "https://api.github.com/users/IdoTal120/repos", "events_url": "https://api.github.com/users/IdoTal120/events{/privacy}", "received_events_url": "https://api.github.com/users/IdoTal120/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "\r\nHello @IdoTal120 ! Welcome to Github, 👋\r\nTaking this into account\r\n```\r\nERROR: Error [WinError 2] The system cannot find the file specified while executing command git version\r\nERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?\r\n```\r\n\r\n\r\nThe error message is telling you that `pip `doesn't know where you have installed `git`. Can you confirm that you have `git` installed? It can be installed for Windows [here](https://git-scm.com/download/win).\r\n\r\nThen you need to add it to the PATH environment variable. Currently, `pip` is checking for `git.exe `in all the locations listed in PATH, but the location for the` git `executable is not there. I believe during installation of git you have the option to update PATH automatically, but you can do it at any time: find the filepath to git.exe on your local machine (for example, `C:\\Program Files\\...\\git\\bin`) and then add it to PATH ([instructions here for your operating system I think](https://www.opensourceforu.com/2021/01/how-to-install-and-configure-git-on-a-windows-server/)).\r\n\r\nTaken from [here](https://github.com/stefmolin/Hands-On-Data-Analysis-with-Pandas-2nd-edition/issues/3)\r\nLet me know how it goes and welcome to the Open! \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,688
1,692
1,692
NONE
null
### System Info ms server 2019 PS C:\Users\a_ital> pip install git+https://github.com/huggingface/transformers Collecting git+https://github.com/huggingface/transformers Cloning https://github.com/huggingface/transformers to c:\users\a_ital\appdata\local\temp\6\pip-req-build-no6t74od ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction PS C:\Users\a_ital> pip install git+https://github.com/huggingface/transformers Collecting git+https://github.com/huggingface/transformers Cloning https://github.com/huggingface/transformers to c:\users\a_ital\appdata\local\temp\6\pip-req-build-no6t74od ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? PS C:\Users\a_ital> ### Expected behavior I need to install transformers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24723/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24722/comments
https://api.github.com/repos/huggingface/transformers/issues/24722/events
https://github.com/huggingface/transformers/issues/24722
1,795,282,010
I_kwDOCUB6oc5rAdRa
24,722
Feature Request: Add nested hierarchy retrieval from the Donut response
{ "login": "sam99dave", "id": 37779169, "node_id": "MDQ6VXNlcjM3Nzc5MTY5", "avatar_url": "https://avatars.githubusercontent.com/u/37779169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam99dave", "html_url": "https://github.com/sam99dave", "followers_url": "https://api.github.com/users/sam99dave/followers", "following_url": "https://api.github.com/users/sam99dave/following{/other_user}", "gists_url": "https://api.github.com/users/sam99dave/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam99dave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam99dave/subscriptions", "organizations_url": "https://api.github.com/users/sam99dave/orgs", "repos_url": "https://api.github.com/users/sam99dave/repos", "events_url": "https://api.github.com/users/sam99dave/events{/privacy}", "received_events_url": "https://api.github.com/users/sam99dave/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @sam99dave, \r\n\r\nThanks for raising this issue! Would you like to open a PR with your suggestion? This way you get the contribution on git. One thing to note when adding this feature is that we have to consider backwards compatibility with our models, so the default behaviour would still need to be preserved.", "> Hi @sam99dave,\r\n> \r\n> Thanks for raising this issue! Would you like to open a PR with your suggestion? This way you get the contribution on git. One thing to note when adding this feature is that we have to consider backwards compatibility with our models, so the default behaviour would still need to be preserved.\r\n\r\nHey hi,\r\nI would like to open a PR for this. Regarding the backward compatibility, I agree with that, I think it can be handled by having a check for some nested key. If it's present then only we will use the logic to handle it. If not present then it will return what it should be returning by default. Will be doing some test on this to be sure of it." ]
1,688
1,689
null
NONE
null
### Feature request ### Donut for hierarchy extraction (Document Parsing) While preprocessing the ground truth json to the tokens for Donut the processor function (json2token) handles nested hierarchy but the same doesn't hold true for token2json. Below is an example json: ` { "header": "This is 1st header", "elements": [ { "text_block": "This is a textblock" }, { "header": "1st nested header", "elements": [ { "text_block": "This is a sentence" }, { "text_block": "Another sentence...." }, { "itallic_header": "This is an itallic header", "elements": [ { "text_block": "Text 1 inside itallic header.." }, { "text_block": "Text 2 inside itallic header.." } ] } ] } ] } ` Consider the above json. Applying the json2token function gives the following token sequence. Function Call: `output = json2token(temp_test)` > <s_header>This is 1st header</s_header><s_elements><s_text_block>This is a textblock</s_text_block><sep/><s_header>1st nested header</s_header><s_elements><s_text_block>This is a sentence</s_text_block><sep/><s_text_block>Another sentence....</s_text_block><sep/><s_itallic_header>This is an itallic header</s_itallic_header><s_elements><s_text_block>Text 1 inside itallic header..</s_text_block><sep/><s_text_block>Text 2 inside itallic header..</s_text_block></s_elements></s_elements></s_elements> This maintains the hierarchy (like parenthesis matching). So, if donut is trained on such data it will give response which parses the information & also retains the hierarchy but the token2json function doesn't handle the conversion properly. Below is the output of the function id passed the token sequence present above. Function Call: `processor.token2json(output)` Output ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] ` Updated Function Results (Preserving the hierarchy): ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'elements': [ { 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'elements': [ { 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] } ] } ] ` Example from CORD: > temp_test = { "company": "ADVANCO COMPANY", "date": "17/01/2018", "address": "NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR", "total": "7.00" } Updated Function Output: ` [ { 'company': 'ADVANCO COMPANY', 'date': '17/01/2018', 'address': 'NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR', 'total': '7.00' } ] ` ### Motivation Found out about this while working on a project to extract information from images also maintaining the hierarchy/structure of it. Going through the CORD dataset made me realize that the data itself is not nested in nature. So, thought of testing on a sample the postprocessing logics json -> token & token -> json conversion. Updated the token2json to get the hierarchy as it is from the token but wasn't sure about the model performance on nested jsons but long story short Donut predicts the hierarchy pretty good. 
### Your contribution ` def token2json(tokens, is_inner_value=False, nested_key = 'elements'): """ Convert a (generated) token seuqnce into an ordered JSON format """ output = dict() while tokens: start_token = re.search(r"<s_(.*?)>", tokens, re.IGNORECASE) if start_token is None: break key = start_token.group(1) start_matches = re.finditer(fr"<s_{key}>", tokens) end_matches = re.finditer(fr"</s_{key}>", tokens) start_tups = [(match.group(), match.start(), match.end()) for match in start_matches] end_tups = [(match.group(), match.start(), match.end()) for match in end_matches] mergeTups = start_tups + end_tups sortedMergeTups = sorted(mergeTups, key=lambda x: x[1]) # remove any unattended close tag for the key present before the current focus start key updatedIdx = -1 for idx in range(len(sortedMergeTups)): if start_token.span()[0] == sortedMergeTups[idx][1]: updatedIdx = idx break sortedMergeTups = sortedMergeTups[updatedIdx:] start_main = sortedMergeTups[0] match_tracker = 0 end_token = None if key == nested_key : if start_main[0] == f'<s_{key}>': for tup in sortedMergeTups[1:]: if tup[0] == f'</s_{key}>': if match_tracker == 0: end_token = tup break else: match_tracker -= 1 elif tup[0] == f'<s_{key}>': match_tracker += 1 elif len(sortedMergeTups) > 1: nextTup = sortedMergeTups[1] if nextTup[0] == f'</s_{key}>': end_token = nextTup if end_token is None: tokens = tokens.replace(start_token[0], "", 1) else: start_token_word = start_main[0] start_token_id = start_main[2] end_token_word = end_token[0] end_token_id = end_token[1] content = tokens[start_token_id: end_token_id] if content is not None: if r"<s_" in content and r"</s_" in content: # non-leaf node value = token2json(content, is_inner_value=True) if value: if len(value) == 1: value = value[0] output[key] = value else: # leaf nodes if key in output.keys(): if isinstance(output[key], str): tempVal = output[key] output[key] = [tempVal] else: output[key] = [] for leaf in content.split(r"<sep/>"): leaf = leaf.strip() if ( leaf in processor.tokenizer.get_added_vocab() and leaf[0] == "<" and leaf[-2:] == "/>" ): leaf = leaf[1:-2] # for categorical special tokens output[key].append(leaf) if len(output[key]) == 1: output[key] = output[key][0] tokens = tokens[end_token[2]:] if tokens[:6] == r"<sep/>": # non-leaf nodes return [output] + token2json(tokens[6:], is_inner_value=True) if len(output): return [output] if is_inner_value else output else: return [] if is_inner_value else {"text_sequence": tokens} `
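A hypothetical usage sketch for the modified `token2json` proposed above. It assumes that function is already in scope (it reads `processor.tokenizer.get_added_vocab()` as a global) and that a Donut checkpoint trained with nested `<s_elements>` groups is available; the checkpoint id below is only an example.

```python
from transformers import DonutProcessor

# the contributed token2json above expects a global `processor`
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

sequence = (
    "<s_header>This is 1st header</s_header>"
    "<s_elements><s_text_block>This is a textblock</s_text_block></s_elements>"
)
parsed = token2json(sequence)  # nested list/dict structure, hierarchy preserved
print(parsed)
```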
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24722/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24721/comments
https://api.github.com/repos/huggingface/transformers/issues/24721/events
https://github.com/huggingface/transformers/pull/24721
1,794,958,934
PR_kwDOCUB6oc5U_lOL
24,721
[WIP] Gradient Checkpointing: use_reentrant=False
{ "login": "tsheasha", "id": 941429, "node_id": "MDQ6VXNlcjk0MTQyOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/941429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tsheasha", "html_url": "https://github.com/tsheasha", "followers_url": "https://api.github.com/users/tsheasha/followers", "following_url": "https://api.github.com/users/tsheasha/following{/other_user}", "gists_url": "https://api.github.com/users/tsheasha/gists{/gist_id}", "starred_url": "https://api.github.com/users/tsheasha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsheasha/subscriptions", "organizations_url": "https://api.github.com/users/tsheasha/orgs", "repos_url": "https://api.github.com/users/tsheasha/repos", "events_url": "https://api.github.com/users/tsheasha/events{/privacy}", "received_events_url": "https://api.github.com/users/tsheasha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,688
1,688
1,688
NONE
null
# What does this PR do? As per PyTorch's [recommendation](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L418), when using gradient checkpointing for models that allow it, `torch.utils.checkpoint` recommends passing `use_reentrant=False`, as per [this note](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L336). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @ArthurZucker @sgugger
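For reference, a minimal sketch (not taken from this PR) of the non-reentrant checkpointing call the description refers to; the module and input are dummies:

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(16, 16)
x = torch.randn(2, 16, requires_grad=True)

# non-reentrant variant recommended by the PyTorch docs
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
```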
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24721", "html_url": "https://github.com/huggingface/transformers/pull/24721", "diff_url": "https://github.com/huggingface/transformers/pull/24721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24721.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24720/comments
https://api.github.com/repos/huggingface/transformers/issues/24720/events
https://github.com/huggingface/transformers/pull/24720
1,794,944,277
PR_kwDOCUB6oc5U_ijF
24,720
Pvt model
{ "login": "Xrenya", "id": 51479797, "node_id": "MDQ6VXNlcjUxNDc5Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/51479797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Xrenya", "html_url": "https://github.com/Xrenya", "followers_url": "https://api.github.com/users/Xrenya/followers", "following_url": "https://api.github.com/users/Xrenya/following{/other_user}", "gists_url": "https://api.github.com/users/Xrenya/gists{/gist_id}", "starred_url": "https://api.github.com/users/Xrenya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Xrenya/subscriptions", "organizations_url": "https://api.github.com/users/Xrenya/orgs", "repos_url": "https://api.github.com/users/Xrenya/repos", "events_url": "https://api.github.com/users/Xrenya/events{/privacy}", "received_events_url": "https://api.github.com/users/Xrenya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Xrenya Thanks again for adding and iterating. Merging now :) " ]
1,688
1,690
1,690
CONTRIBUTOR
null
# Add PVT (Pyramid Vision Transformer) Partially fixes: [issue](https://github.com/huggingface/transformers/issues/17596), [Closed PR](https://github.com/huggingface/transformers/pull/22445) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @amyeroberts From previous PR: > PvtBlock contains a PvtPatchEmbeddings layer and the subsequent PvtLayer layers. PvtLayer has depth, while PvtPatchEmbeddings appears only at the beginning of each encoder block; it would either be carried along the whole depth without being used, or require extra logic to set it to None and to carry the height and width along.
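Once merged, usage would presumably look like any other vision classifier in the library; a rough sketch, where the checkpoint id is an assumption — substitute whichever PVT weights end up on the Hub:

```python
from PIL import Image
from transformers import AutoImageProcessor, PvtForImageClassification

checkpoint = "Zetatech/pvt-tiny-224"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = PvtForImageClassification.from_pretrained(checkpoint)

image = Image.new("RGB", (224, 224))  # dummy image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (1, num_labels)
```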
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24720/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24720", "html_url": "https://github.com/huggingface/transformers/pull/24720", "diff_url": "https://github.com/huggingface/transformers/pull/24720.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24720.patch", "merged_at": 1690209260000 }
https://api.github.com/repos/huggingface/transformers/issues/24719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24719/comments
https://api.github.com/repos/huggingface/transformers/issues/24719/events
https://github.com/huggingface/transformers/pull/24719
1,794,938,681
PR_kwDOCUB6oc5U_hfU
24,719
Add gradient checkpointing for DistilBERT
{ "login": "jordane95", "id": 69186130, "node_id": "MDQ6VXNlcjY5MTg2MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/69186130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jordane95", "html_url": "https://github.com/jordane95", "followers_url": "https://api.github.com/users/jordane95/followers", "following_url": "https://api.github.com/users/jordane95/following{/other_user}", "gists_url": "https://api.github.com/users/jordane95/gists{/gist_id}", "starred_url": "https://api.github.com/users/jordane95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jordane95/subscriptions", "organizations_url": "https://api.github.com/users/jordane95/orgs", "repos_url": "https://api.github.com/users/jordane95/repos", "events_url": "https://api.github.com/users/jordane95/events{/privacy}", "received_events_url": "https://api.github.com/users/jordane95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jordane95, thanks for opening this PR. \r\n\r\nOverall changes look OK to me. I can see from the issue discussion that gradient checkpointing was deliberately not added to DistilBert. In general, we try to avoid adding complexity to existing models, in particular their forward pass. Let's get @sgugger second opinion on whether this should be merged into main. \r\n\r\nFor the quality checks, you'll need to run `make style` and push any changes made to this branch.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,688
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Fixes #9113 and #23219 I just added the gradient checkpointing feature for DistilBert following the implementation in BERT. This should be useful if one wants to train a relatively small model with an extremely large batch size for better performance in application scenarios such as text retrieval or embeddings. @ArthurZucker @sgugger
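Assuming the PR is merged, enabling the feature would look the same as for any model that supports it; a small sketch, not taken from the PR itself:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
model.gradient_checkpointing_enable()   # trade extra compute for activation memory
print(model.is_gradient_checkpointing)  # True once DistilBERT supports the flag
```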
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24719/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24719", "html_url": "https://github.com/huggingface/transformers/pull/24719", "diff_url": "https://github.com/huggingface/transformers/pull/24719.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24719.patch", "merged_at": 1689089388000 }