| Column | dtype | Values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
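The raw rows below follow this schema. As a minimal sketch of how a dump like this could be loaded and inspected with the `datasets` library, here is a hedged example; the dataset id `"user/transformers-issues"` is a hypothetical placeholder, since the actual repository id is not given in this preview:

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the real repository of this issues dump.
ds = load_dataset("user/transformers-issues", split="train")

# Each row is a dict with the columns listed in the schema above.
row = ds[0]
print(row["number"], row["title"])
print(row["state"], row["author_association"])

# "user" and "reactions" are nested dicts; "comments" is a list of comment strings.
print(row["user"]["login"], row["reactions"]["total_count"], len(row["comments"]))
```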
https://api.github.com/repos/huggingface/transformers/issues/23193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23193/comments
https://api.github.com/repos/huggingface/transformers/issues/23193/events
https://github.com/huggingface/transformers/issues/23193
1,699,168,180
I_kwDOCUB6oc5lRz-0
23,193
examples/run_speech_recognition_ctc: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 725: character maps to <undefined>
{ "login": "RobertBaruch", "id": 1783950, "node_id": "MDQ6VXNlcjE3ODM5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobertBaruch", "html_url": "https://github.com/RobertBaruch", "followers_url": "https://api.github.com/users/RobertBaruch/followers", "following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}", "gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions", "organizations_url": "https://api.github.com/users/RobertBaruch/orgs", "repos_url": "https://api.github.com/users/RobertBaruch/repos", "events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}", "received_events_url": "https://api.github.com/users/RobertBaruch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,683
1,683
1,683
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Create a json file corresponding to the [first example in speech recognition for pytorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc). See attached. Run `python run_speech_recognition_ctc.py train.json` Get error: ``` Traceback (most recent call last): File "F:\eo-reco\run_speech_recognition_ctc.py", line 775, in <module> main() File "F:\eo-reco\run_speech_recognition_ctc.py", line 378, in main model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\eo-reco\.env\Lib\site-packages\transformers\hf_argparser.py", line 391, in parse_json_file data = json.loads(open_json_file.read()) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\rober\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 725: character maps to <undefined> ``` [train.json.zip](https://github.com/huggingface/transformers/files/11415631/train.json.zip) ### Expected behavior No error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23193/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23192/comments
https://api.github.com/repos/huggingface/transformers/issues/23192/events
https://github.com/huggingface/transformers/pull/23192
1,699,078,164
PR_kwDOCUB6oc5P8EYR
23,192
update flax_utils.py
{ "login": "hannan72", "id": 8229163, "node_id": "MDQ6VXNlcjgyMjkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hannan72", "html_url": "https://github.com/hannan72", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "organizations_url": "https://api.github.com/users/hannan72/orgs", "repos_url": "https://api.github.com/users/hannan72/repos", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "received_events_url": "https://api.github.com/users/hannan72/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23192). All of your documentation changes will be reflected on that endpoint." ]
1,683
1,683
1,683
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23192", "html_url": "https://github.com/huggingface/transformers/pull/23192", "diff_url": "https://github.com/huggingface/transformers/pull/23192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23192.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23191/comments
https://api.github.com/repos/huggingface/transformers/issues/23191/events
https://github.com/huggingface/transformers/pull/23191
1,699,043,471
PR_kwDOCUB6oc5P79O0
23,191
Update LLaMA docs with arxiv link
{ "login": "awinml", "id": 97467100, "node_id": "U_kgDOBc863A", "avatar_url": "https://avatars.githubusercontent.com/u/97467100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awinml", "html_url": "https://github.com/awinml", "followers_url": "https://api.github.com/users/awinml/followers", "following_url": "https://api.github.com/users/awinml/following{/other_user}", "gists_url": "https://api.github.com/users/awinml/gists{/gist_id}", "starred_url": "https://api.github.com/users/awinml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awinml/subscriptions", "organizations_url": "https://api.github.com/users/awinml/orgs", "repos_url": "https://api.github.com/users/awinml/repos", "events_url": "https://api.github.com/users/awinml/events{/privacy}", "received_events_url": "https://api.github.com/users/awinml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Fixes #23186 Adds arxiv link for "LLaMA: Open and Efficient Foundation Language Models" (https://arxiv.org/abs/2302.13971) to LLaMA model docs. ## Who can review? @sgugger, @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23191", "html_url": "https://github.com/huggingface/transformers/pull/23191", "diff_url": "https://github.com/huggingface/transformers/pull/23191.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23191.patch", "merged_at": 1683499964000 }
https://api.github.com/repos/huggingface/transformers/issues/23190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23190/comments
https://api.github.com/repos/huggingface/transformers/issues/23190/events
https://github.com/huggingface/transformers/pull/23190
1,698,999,018
PR_kwDOCUB6oc5P70BW
23,190
Add BROS
{ "login": "jinhopark8345", "id": 60179569, "node_id": "MDQ6VXNlcjYwMTc5NTY5", "avatar_url": "https://avatars.githubusercontent.com/u/60179569?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinhopark8345", "html_url": "https://github.com/jinhopark8345", "followers_url": "https://api.github.com/users/jinhopark8345/followers", "following_url": "https://api.github.com/users/jinhopark8345/following{/other_user}", "gists_url": "https://api.github.com/users/jinhopark8345/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinhopark8345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinhopark8345/subscriptions", "organizations_url": "https://api.github.com/users/jinhopark8345/orgs", "repos_url": "https://api.github.com/users/jinhopark8345/repos", "events_url": "https://api.github.com/users/jinhopark8345/events{/privacy}", "received_events_url": "https://api.github.com/users/jinhopark8345/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@jinhopark8345 Awesome work - looking forward to having this model added! Feel free to ping us when the PR is ready for review or you have any implementation questions in the meantime. ", "@amyeroberts \r\n\r\nI am confused about what needs to be done.\r\n\r\nAccording to the [How to add a new model](https://huggingface.co/docs/transformers/add_new_model#514-port-brandnewbert-to-transformers) guideline, a big part of it is porting pretrained models (from the original repo) into Huggingface transformers and making sure they are correctly ported by checking the outputs of each layer's forward step.\r\n\r\nHowever, it seems like the authors of the Bros model used `transformers-cli` to create the boilerplate code, and I don't think there is much to change from the [original code](https://github.com/clovaai/bros/blob/master/bros/modeling_bros.py).\r\n\r\nDo I need to write a conversion script? Or can I skip this step and move to the step where I add model test codes?\r\n\r\nThanks for the help in advance!", "@jinhopark8345 Interesting - that will definitely make things easier! In this case, if the files are already on the hub and in the correct format, there's no need for the conversion script. It's possible there might be additional arguments required in the config files or additional files needed in the hub repo, in which case, I'd suggest writing a script to add these. You probably won't be able to write directly to the org's repo, but can open a PR with any necessary changes. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23190). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Still working on it! Writing tutorial/demo notebooks of how to use BROS ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@amyeroberts Is it possible to reopen this PR because I have been working on [forked repo](https://github.com/jinhopark8345/bros/tree/feature/update-data-loading) of original bros code", "> Thanks for adding this model!\r\n> \r\n> Overall PR is looking good - very clean and easy to read. Most comments are nits about formatting and have the copied from statements.\r\n> \r\n\r\nThank you for the feedback!!\r\n\r\n> What's the difference between Bros, BrosSpadeEL and BrosSpadeEE ? The purpose of these three modules isn't clear. Could you add an explanation in the model's class docstrings as well as the model's doc page - bros.md?\r\n\r\nBros is a encoder transformer. BrosSpadeEL is Bros + [SPADE EL](https://arxiv.org/abs/2005.00642) (entity linking decoder attached) and BrosSpadeEE is Bros + [SPADE EE](https://arxiv.org/abs/2005.00642) (entity extraction decoder attached). 
I will add more detailed explanation to model's doc page.\r\n\r\n[Original bros repo](https://github.com/clovaai/bros) has total 3 heads for the Bros Model:\r\n- BrosTokenClassification\r\n- BrosEE (entity extraction)\r\n- BrosEL (entity linking)\r\n\r\nBoth BrosTokenClassification and BrosEE essentially perform the same job. However, in the case of BrosTokenClassification, it assumes input tokens are perfectly serialized (which is very challenging task since they exist in a 2D space), while BrosEE allows for more flexibility in handling serialization errors as it predicts next connection token from one token.\r\n\r\nBrosEL predicts relation from first token (of one entity) to the other first token(of the other entity) if these two entities share some relation.\r\n\r\n\r\n> About the model inputs. We try as much as possible in transformers to have a consistent API. This also means consistent inputs and outputs. It would be better if this model took a standard bbox format e.g. (x0, y0, x1, y1) and then converted to the 6 point needed within its forward pass.\r\n\r\nI will change the forward pass to convert from 2 points to 4 points within its forward pass.\r\n", "@amyeroberts \r\nWould it be a better idea to add/update `tokenizer_bros.py `and `processing_bros.py` that takes boxes as input, similar to [layoutlmv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3) ?\r\n\r\nwhile updating the doctest code for the bros model, I realized that trying out the bros model using ([my demo script](https://github.com/jinhopark8345/FormUnderstanding/blob/main/scripts/bros_on_SROIE_BIO_tagging.py)) would be quite complicated because of making `bounding boxes`, `box first token mask`, update `labels` accordingly.", "@jinhopark8345 What I would suggest is following the pattern of LayoutLM model: the model takes in bboxes of format (x0, y0, x1, y1). Then for BROs, within the model these are converted to the 6 points used in the forward pass.\r\n\r\nNo tokenizer needs to be added as the model can use the already implemented BERTTokenizer. \r\n\r\nProcessors classes which bundle together two processing classes e.g. tokenizer + image processor, so isn't needed here.", "@amyeroberts I am encountering the following errors during the tests ( `ci/circleci : tests_torch` )\r\n\r\n```bash\r\nFAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_get_file_gated_repo - AssertionError: OSError not raised\r\nFAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_has_file_gated_repo - AssertionError: OSError not raised\r\n```\r\nIs this related to my pull request, or are there issues within the Huggingface library?", "@jinhopark8345 - these aren't related to your PR and are likely transient issues with connecting to the hub. You can ignore them for now. If they persist exclusively on this PR and when the PR is finalised then we can dig into it more. ", "@amyeroberts Thank you very much for the feedback and help! 😃 I truly appreciate it. I have implemented your suggestions from the code review, and all necessary tests are passing. However, there were a few suggestions that I wasn't sure how to address, so I left comments explaining my thoughts on those points.", "@amyeroberts I added `convert_bros_to_pytorch.py` script because `bbox_projection` (linear layer) moved from `BrosTextEmbeddings` to newly added `BrosBboxEmbedding`.", "@jinhopark8345 Great - thank you! Could you rebase on main to resolve the conflicts? 
We should be good to go after that :) ", "@jinhopark8345 Thanks for contributing this model! Make sure to share about it's addition to the library on twitter/linkedin/your medium of choice 🤗 ", "@jinhopark8345 Thank you for adding this model into `transformers`.\r\n\r\nRegarding the test `BrosModelIntegrationTest.test_inference_no_head`, it fails on our T4 GPU VM as the expected and actual output differ too much, as you can see.\r\n\r\nCould you double check on your machines, please? And what's your machine (GPU) type? \r\n\r\nThank you in advance.\r\n\r\n```bash\r\n(Pdb) outputs.last_hidden_state[0, :3, :3]\r\ntensor([[-0.3165, 0.0830, -0.1203],\r\n [-0.0089, 0.0031, 0.0736],\r\n [-0.0461, 0.0146, 0.0880]], device='cuda:0')\r\n(Pdb) expected_slice\r\ntensor([[-0.4027, 0.0756, -0.0647],\r\n [-0.0192, -0.0065, 0.1042],\r\n [-0.0671, 0.0214, 0.0960]], device='cuda:0')\r\n\r\n```\r\n\r\nYou can run the test with\r\n\r\n```bash\r\nTF_FORCE_GPU_ALLOW_GROWTH=true RUN_SLOW=1 python3 -m pytest -v tests/models/bros/test_modeling_bros.py::BrosModelIntegrationTest::test_inference_no_head\r\n```", "> @jinhopark8345 Thank you for adding this model into `transformers`.\r\n> \r\n> Regarding the test `BrosModelIntegrationTest.test_inference_no_head`, it fails on our T4 GPU VM as the expected and actual output differ too much, as you can see.\r\n> \r\n> Could you double check on your machines, please? And what's your machine (GPU) type?\r\n> \r\n> Thank you in advance.\r\n> \r\n> ```shell\r\n> (Pdb) outputs.last_hidden_state[0, :3, :3]\r\n> tensor([[-0.3165, 0.0830, -0.1203],\r\n> [-0.0089, 0.0031, 0.0736],\r\n> [-0.0461, 0.0146, 0.0880]], device='cuda:0')\r\n> (Pdb) expected_slice\r\n> tensor([[-0.4027, 0.0756, -0.0647],\r\n> [-0.0192, -0.0065, 0.1042],\r\n> [-0.0671, 0.0214, 0.0960]], device='cuda:0')\r\n> ```\r\n> \r\n> You can run the test with\r\n> \r\n> ```shell\r\n> TF_FORCE_GPU_ALLOW_GROWTH=true RUN_SLOW=1 python3 -m pytest -v tests/models/bros/test_modeling_bros.py::BrosModelIntegrationTest::test_inference_no_head\r\n> ```\r\n\r\n\r\n@ydshieh Thank you for providing the test command!\r\n\r\nI was able to reproduce the issue, but the `outputs.last_hidden_state[0, :3, :3]` value I obtained was different.\r\nInterestingly, not only did the output was different from the `expected_slice` but the value of `outputs.last_hidden_state[0, :3, :3]` changed every time I ran the command you provided. For testing, I am using an RTX3090.\r\n\r\nAfter some testing, I found that some weights weren't being initialized properly.\r\nThis issue came from that `bbox_projection` layer (a linear layer) was moved from `BrosEmbeddings` to under `BrosBboxEmbeddings` (`BrosBboxEmbeddings` is newly added class)\r\n\r\nBy changing:\r\n\r\n model = BrosModel.from_pretrained(\"naver-clova-ocr/bros-base-uncased\").to(torch_device)\r\n\r\nto:\r\n\r\n model = BrosModel.from_pretrained(\"jinho8345/bros-base-uncased\").to(torch_device)\r\n\r\nI was able to get consistent outputs. 
(conversion script : `transformers/models/bros/convert_bros_to_pytorch.py`)\r\n\r\nI suspect this issue wasn't detected earlier because when running:\r\n\r\n python3 -m pytest -v tests/models/bros/test_modeling_bros.py\r\n\r\ntorch cuda seed is manually set to certain value, perhaps due to other tests or other reasons.\r\n\r\nThe update is [here](https://github.com/jinhopark8345/transformers/blob/745f9c8c027a42ddedde4c6d65c4d127ec8614e1/tests/models/bros/test_modeling_bros.py#L413) but I am not sure how I should apply this patch to Transformers library.\r\n", "Hi @jinhopark8345 Thanks a lot for looking into this!\r\n\r\nYou can open a PR to update the checkpoint repo used in the test, or we can do it on our own side.\r\n\r\nBut is it expected that `naver-clova-ocr/bros-base-uncased` doesn't have all the weights? What are the difference between these 2 checkpoints?", "> Hi @jinhopark8345 Thanks a lot for looking into this!\r\n> \r\n> You can open a PR to update the checkpoint repo used in the test, or we can do it on our own side.\r\n> \r\n> But is it expected that `naver-clova-ocr/bros-base-uncased` doesn't have all the weights? What are the difference between these 2 checkpoints?\r\n\r\n The `naver-clova-ocr/bros-base-uncased` has all the weights. But some weights have been renamed. So if we load `BrosModel` with `naver-clova-ocr/bros-base-uncased` checkpoint (original checkpoint), the renamed weights won't be initialized correctly with pretrained weights. \r\n \r\n these are the renamed weights!\r\n ```python\r\n def rename_key(name):\r\n if name == \"embeddings.bbox_projection.weight\":\r\n name = \"bbox_embeddings.bbox_projection.weight\"\r\n\r\n if name == \"embeddings.bbox_sinusoid_emb.x_pos_emb.inv_freq\":\r\n name = \"bbox_embeddings.bbox_sinusoid_emb.x_pos_emb.inv_freq\"\r\n\r\n if name == \"embeddings.bbox_sinusoid_emb.y_pos_emb.inv_freq\":\r\n name = \"bbox_embeddings.bbox_sinusoid_emb.y_pos_emb.inv_freq\"\r\n\r\n return name\r\n\r\n ```\r\n \r\n If you confirm updating the checkpoint is okay, I would like to open PR!", "Sure, go for it. BTW, I see a lot of `naver-clova-ocr/bros-base-uncased` used, in particular in the examples. So just to be sure, is the user expected to use `naver-clova-ocr/bros-base-uncased` or the renamed one `jinho8345/bros-base-uncased`?\r\n\r\nFrom your description, I think it is `jinho8345/bros-base-uncased`. If this is the case, could you update all occurrence (not just in the tests). Thank you!", "Hello @jinhopark8345 Thank you again for fixing the checkpoint. I have yet another question needs your help.\r\n\r\nFor `BrosModel`, the `bbox_position_embeddings` could be `None` before calling `self.encoder` (if `bbox` is `None`)\r\n\r\nhttps://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/models/bros/modeling_bros.py#L927-L933\r\n\r\nbut eventually, `BrosSelfAttention` will fail if it receives `None` for `bbox_pos_emb`\r\n\r\nhttps://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/models/bros/modeling_bros.py#L391\r\n\r\nCould you double check if `BrosModel` will only work if `bbox` is not `None` in the original implementation? 
If this is not the case, how is `bbox_pos_emb` being created if `bbox` is `None` etc.\r\n\r\nThank you in advance, again!", "Hello @ydshieh Thank you for asking!\r\n\r\nBelow code is the original implementation\r\n``` \r\n scaled_bbox = bbox * self.config.bbox_scale\r\n bbox_pos_emb = self.embeddings.calc_bbox_pos_emb(\r\n scaled_bbox, self.config.pe_type\r\n )\r\n```\r\n\r\nIn original implementation, `BrosModel` will only work if `bbox` is not `None`.\r\n\r\nWould it be more helpful to users if we remove https://github.com/huggingface/transformers/blob/2d71307dc0ee2849f785568f345837e726209fc6/src/transformers/models/bros/modeling_bros.py#L928 so that `BrosModel` fails earlier? or do you suggest different solutions?", "Hi! In this case, you can add a try: except at the beginning of `BrosModel.forward` method as the input validation.\r\n\r\n(we might need a few more fixes if CI fails due to this)\r\n\r\nThank you !", "Hi @jinhopark8345 , congrats on this amazing contribution.\r\n\r\nFeel free to share about it on Twitter/LinkedIn and we'll amplify.", "Hi @jinhopark8345,\r\nCan you please provide examples of how to use logits from BrosSpadeELForTokenClassification to identify the intra-relationships? \r\nTIA", "Hi @Prathyusha-Akundi,\r\n\r\nYou can refer to the [example notebook](https://github.com/jinhopark8345/FormUnderstanding/blob/main/notebooks/Fine_tuning_bros_spade_on_FUNSD_entity_linking_dataset.ipynb) for identifying intra-relationships.\r\n\r\nIf you are looking for information on entity linking versus entity extraction, you can check out the [entity linking explanation vs entity extraction](https://github.com/jinhopark8345/FormUnderstanding) here.", "Thank you @jinhopark8345 , this is extremely helpful!" ]
1,683
1,699
1,694
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add [BROS(BERT Relying On Spatiality)](https://arxiv.org/abs/2108.04539) to 🤗 Transformers ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a [Github issue](https://github.com/huggingface/transformers/issues/23181) or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23190", "html_url": "https://github.com/huggingface/transformers/pull/23190", "diff_url": "https://github.com/huggingface/transformers/pull/23190.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23190.patch", "merged_at": 1694710957000 }
https://api.github.com/repos/huggingface/transformers/issues/23189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23189/comments
https://api.github.com/repos/huggingface/transformers/issues/23189/events
https://github.com/huggingface/transformers/issues/23189
1,698,929,531
I_kwDOCUB6oc5lQ5t7
23,189
Regression Models
{ "login": "vrunm", "id": 97465624, "node_id": "U_kgDOBc81GA", "avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrunm", "html_url": "https://github.com/vrunm", "followers_url": "https://api.github.com/users/vrunm/followers", "following_url": "https://api.github.com/users/vrunm/following{/other_user}", "gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrunm/subscriptions", "organizations_url": "https://api.github.com/users/vrunm/orgs", "repos_url": "https://api.github.com/users/vrunm/repos", "events_url": "https://api.github.com/users/vrunm/events{/privacy}", "received_events_url": "https://api.github.com/users/vrunm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @vrunm, \r\nI think you can use the forums for this sort of discussion.\r\nSome helpful links would be https://discuss.huggingface.co/t/tabular-classification-regression-pipeline/22030/2\r\nand https://discuss.huggingface.co/t/how-to-set-up-trainer-for-a-regression/12994 (related to your Post).\r\nYou can check the model documentation for [informer](https://huggingface.co/docs/transformers/model_doc/informer) and [time series transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer).\r\nAlso, this blog, [Probabilistic Time Series Forecasting with 🤗 Transformers](https://huggingface.co/blog/time-series-transformers) might be helpful as well. \r\nThere are multiple papers and repo as well that use transformers for regression which you can easily find by searching on google.", "@hsuyab I did go through these links and search online but I want something very specific and customizable. I want something that can be used from Huggingface as a core function. ", "Well you can check these, but I don't understand what you mean exactly by core functionality, https://pytorch-tabular.readthedocs.io/en/latest/\nand https://pytorch-forecasting.readthedocs.io/", "@hsuyab I want to build a multi variate regression model and want to use a Huggingface class specifically designed to that. Not a pipeline which does not allow to train and finetune your model.\r\n", "@vrunm okay, you can try loading in the modules and modifying the class functions by yourself however creating this functionality separately wouldn't make sense imo. It's still better to use some other libraries that are focused on this task or best use something like xgbosst/lightgbm.", "@hsuyab Sure I will try that but do you have the code to modify the class functions or should I create a PR for this?", "It's best you create a PR and use that.", "@hsuyab can you share with me the outline of the classes to change to implement this functionality. I think asking the contributors will be a better choice.", "Is it possible to implement regression from a specific class of huggingface transformers? What should the outline of the classes to change to implement this as a PR?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@hsuyab were you able to open a PR to add this functionality?", "no, imo performing regression is not something that's needed as a feature in transformers as of now as there are other libraries that are focused on implementing this in a better way.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@hsuyab Do you think it is necessary to implement this functionality for now? Would really like your comments on the classes to implemented for this?", "@vrunm @hsuyab Thanks for discussing and raising this issue. \r\n\r\nQuestions about how to solve problems using transformers are best placed in our [forums](https://discuss.huggingface.co/). 
We try to reserve the github issues for feature requests and bug reports.\r\n\r\nOne thing to note is that regression is already possible to do with models like BERT if `num_labels` is set to 1 in the config e.g. see this line in the code: https://github.com/huggingface/transformers/blob/33aafc26ee68df65c7d9457259fc3d59f79eef4f/src/transformers/models/bert/modeling_bert.py#L1583C26-L1583C26", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,702
1,691
NONE
null
### Feature request I am working on a regression problem and I am looking forward to using Transformers for it but before jumping into the implementation and all stuff, I am curious that can you use transformers for a regression problem? I have around 90 features (floating points) and one target. I couldn’t find any paper on transformers for regression problems so please let me know if any of you used transformers for this purpose. I am working on a problem where I am having tabular data having more than 90 features and one target and all the features are in integers (continuous). I want to use pre-trained BERT, GPT2 but when it comes to the tokenizer the tokenizer is expecting the input in the text format. I can change the integer data in the text format like this: original_data = [1,2,3,4,5,…,94] transformed_data = ["1,2,3,4,5,…,94"] Now if I pass the transformed_data to the tokenizer then surely it will work but I wanna know if someone tried to use transformers for this purpose and if yes, then what was the outcome, and how did the results look like? How can I use the transformers library for this purpose all the tokenizers are trained for the text data so I am kinda lost. Any help will be appreciate. ### Motivation The purpose of regression models is to predict a continuous output variable based on one or more input variables. Regression models are widely used in many fields such as finance, economics, engineering, and social sciences, where the goal is to understand the relationship between the input variables and the output variable and to make predictions based on that understanding. In regression analysis, the focus is on building a model that captures the relationship between the input variables and the output variable. This model is then used to predict the values of the output variable for new input data. The model can also be used to identify the important input variables that have a significant impact on the output variable. Regression models come in various types, such as linear regression, logistic regression, polynomial regression, and others. The choice of the regression model depends on the type of data, the type of relationship between the input and output variables, and the purpose of the analysis. ### Your contribution I can implement some of the code given in this [Post:](https://lajavaness.medium.com/regression-with-text-input-using-bert-and-transformers-71c155034b13)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23188/comments
https://api.github.com/repos/huggingface/transformers/issues/23188/events
https://github.com/huggingface/transformers/issues/23188
1,698,865,817
I_kwDOCUB6oc5lQqKZ
23,188
Running inference from ASR documentation, pipeline errors with "Can't load tokenizer"
{ "login": "RobertBaruch", "id": 1783950, "node_id": "MDQ6VXNlcjE3ODM5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobertBaruch", "html_url": "https://github.com/RobertBaruch", "followers_url": "https://api.github.com/users/RobertBaruch/followers", "following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}", "gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions", "organizations_url": "https://api.github.com/users/RobertBaruch/orgs", "repos_url": "https://api.github.com/users/RobertBaruch/repos", "events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}", "received_events_url": "https://api.github.com/users/RobertBaruch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Your code works fine for me on macOS (I tried with the main branch of Transformers, which is version 4.29.0.dev0). It also looks like the `tokenizer_config.json` is present in your model repo, so all the required files are present.\r\n\r\nAre you sure you don't have a `F:\\eo-reco\\xekri\\my_awesome_asr_model` directory that would be interfering with this?\r\n", "The problem happens even if I delete the local directory.\r\n\r\nSo the problem appears to be that there is a missing step in the docs:\r\n\r\n`processor.save_pretrained(save_directory=\"my_awesome_asr_mind_model\")`\r\n\r\nWithout this, there is no `tokenizer_config.json`.\r\n\r\nThe reason `tokenizer_config.json` was present in my repo is that I added the line and then ran the program again.\r\n\r\nIf you look at `main.py.zip` above, you can see where I had the line commented out. With that line commented out, the error happens.", "It does look like those instructions are missing from the docs, I'll ping someone from the docs team to have a look. Thanks for reporting! \r\n", "Possibly related: #23222", "Thanks for reporting this! If you pass `processor` to the Trainer, it will save both `tokenizer` and `feature_extractor`, and push them both to hub. I'll update the docs. https://github.com/huggingface/transformers/pull/23239", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @Narsil @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Put together the script from [Automatic Speech Recognition](https://huggingface.co/docs/transformers/tasks/asr) into a file `main.py`, up to but not including Inference. Run under Windows. Training succeeds. Put together the Inference section into a file `infer.py`. Run under Windows. Output: ``` Downloading pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████| 378M/378M [00:35<00:00, 10.6MB/s] Traceback (most recent call last): File "f:\eo-reco\infer.py", line 10, in <module> transcriber = pipeline("automatic-speech-recognition", model="xekri/my_awesome_asr_model") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "f:\eo-reco\.env\Lib\site-packages\transformers\pipelines\__init__.py", line 876, in pipeline tokenizer = AutoTokenizer.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "f:\eo-reco\.env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 723, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "f:\eo-reco\.env\Lib\site-packages\transformers\tokenization_utils_base.py", line 1795, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'xekri/my_awesome_asr_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xekri/my_awesome_asr_model' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer. ``` [main.py.zip](https://github.com/huggingface/transformers/files/11413782/main.py.zip) [infer.py.zip](https://github.com/huggingface/transformers/files/11413784/infer.py.zip) ### Expected behavior No error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23188/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23187/comments
https://api.github.com/repos/huggingface/transformers/issues/23187/events
https://github.com/huggingface/transformers/issues/23187
1,698,860,446
I_kwDOCUB6oc5lQo2e
23,187
push_to_hub fails with "cannot lock ref" and "failed to push some refs"
{ "login": "RobertBaruch", "id": 1783950, "node_id": "MDQ6VXNlcjE3ODM5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobertBaruch", "html_url": "https://github.com/RobertBaruch", "followers_url": "https://api.github.com/users/RobertBaruch/followers", "following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}", "gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions", "organizations_url": "https://api.github.com/users/RobertBaruch/orgs", "repos_url": "https://api.github.com/users/RobertBaruch/repos", "events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}", "received_events_url": "https://api.github.com/users/RobertBaruch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Wauplin ", "Hi @RobertBaruch, I'm sorry you're facing this issue. I'm not sure what happened here. Can be either a wrong git setup or a temporary server issue on git operations. Or maybe concurrent push to the Hub which made one fail while having the others correctly uploading. Just to be sure, is the final state of repo as you want it or are you missing something? If something is still missing, I would advice you to save the data locally (with `.save_pretrained`) and then upload the folder using [`huggingface_hub.upload_folder`](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder).\r\n\r\n@sgugger I hope this is the type of unclear error that we could rid off when switching to a http-based approach (once https://github.com/huggingface/huggingface_hub/pull/1458 is merged) :)", "This happens every time -- even with the [example for speech recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc). I'm pretty sure this has something to do with Windows.\r\n\r\nThe final state of the repo appears to be correct. However, the problem is that an error is raised, which means that anything in the program after pushing to the repo will not be executed.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hi @Wauplin, I faced the same issue several times. Can you clarify how to replicate what the --push_to_hub does from the command line? Is there a command to create the model card and upload only the files that --push_to_hub (without all checkpoints)? ", "how do I know what caused the issue of not uploading to the hub? Can it be that I don't have space left in my account? ", "> Can you clarify how to replicate what the --push_to_hub does from the command line?\r\n\r\nDon't know about a command line equivalent but \r\n@sgugger mentioned in https://github.com/huggingface/huggingface_hub/issues/1560#issuecomment-1634053167 that you can use `trainer.create_model_card()` to create a model card from your trainer. \r\n\r\n> Is there a command (...) upload only the files that --push_to_hub (without all checkpoints)?\r\n\r\nOnce you have files saved locally, uploading them to the Hub can be quickly done using `huggingface_hub`. Here is a guide on [how to upload files to the Hub.](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-files-to-the-hub). It is not a command line tool but rather a few lines of scripts to write. But that's only once you have files saved locally and know which ones you want to upload.\r\n\r\n> how do I know what caused the issue of not uploading to the hub? Can it be that I don't have space left in my account?\r\n\r\nIf the upload fails, it is probably due to some network issues (see https://github.com/huggingface/huggingface_hub/issues/1560#issuecomment-1635878401). In any case, it is not a problem of not have space left on your Hugging Face account since it's unlimited. \r\n" ]
1,683
1,689
1,686
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Put together the script from [Automatic Speech Recognition](https://huggingface.co/docs/transformers/tasks/asr) into a file `main.py`, up to but not including Inference. Run under Windows. Training succeeds, but then: ``` Pushing to hub... Several commits (2) will be pushed upstream. The progress bars may be unreliable. Upload file pytorch_model.bin: 366MB [02:42, 3.62MB/s] remote: error: cannot lock ref 'refs/heads/main': is at aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 but expected 52d558ffa06199a1340c979ed1fbbc0e98c862c8 To https://huggingface.co/xekri/my_awesome_asr_model ! [remote rejected] main -> main (failed to update ref) error: failed to push some refs to 'https://huggingface.co/xekri/my_awesome_asr_model' Upload file pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████| 360M/360M [02:43<00:00, 2.32MB/s] Traceback (most recent call last): File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1099, in git_push raise subprocess.CalledProcessError(return_code, process.args, output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['git', 'push', '--set-upstream', 'origin', 'main']' returned non-zero exit status 1. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "f:\eo-reco\main.py", line 147, in <module> trainer.push_to_hub() File "f:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 3661, in push_to_hub git_head_commit_url = self.repo.push_to_hub( ^^^^^^^^^^^^^^^^^^^^^^ File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1307, in push_to_hub return self.git_push( ^^^^^^^^^^^^^^ File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1102, in git_push raise EnvironmentError(exc.stderr) OSError: remote: error: cannot lock ref 'refs/heads/main': is at aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 but expected 52d558ffa06199a1340c979ed1fbbc0e98c862c8 To https://huggingface.co/xekri/my_awesome_asr_model ! [remote rejected] main -> main (failed to update ref) error: failed to push some refs to 'https://huggingface.co/xekri/my_awesome_asr_model' The push command with PID 7788 failed. To https://huggingface.co/xekri/my_awesome_asr_model 52d558f..381666e main -> main ``` Checking the repo on the hub shows that all files were seemingly committed. The four commits to the hub were: * `aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 `: "End of training" * `381666e4197a0e5ce2b4d8a9b0c3f426cd2b2348`: "Training in progress, step 200" * `52d558ffa06199a1340c979ed1fbbc0e98c862c8`: "Training in progress, step 100" * `09c2ba5a5066b5b24e8fd2ddf333eda61f6c85da`: "initial commit" In the attached file, the changes from the documented script are: 1. 
Loading dataset `mozilla-foundation/common_voice_13_0` (since dropbox is rejecting requests to download `PolyAI/minds14`, see [discussion](https://huggingface.co/datasets/PolyAI/minds14/discussions/6)) 2. Modifications for columns present in that dataset. [main.py.zip](https://github.com/huggingface/transformers/files/11413763/main.py.zip) ### Expected behavior No scary git errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23187/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23186/comments
https://api.github.com/repos/huggingface/transformers/issues/23186/events
https://github.com/huggingface/transformers/issues/23186
1,698,860,202
I_kwDOCUB6oc5lQoyq
23,186
[Documentation] Possible mistake in model_doc LLaMA
{ "login": "habaneraa", "id": 95517280, "node_id": "U_kgDOBbF6YA", "avatar_url": "https://avatars.githubusercontent.com/u/95517280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/habaneraa", "html_url": "https://github.com/habaneraa", "followers_url": "https://api.github.com/users/habaneraa/followers", "following_url": "https://api.github.com/users/habaneraa/following{/other_user}", "gists_url": "https://api.github.com/users/habaneraa/gists{/gist_id}", "starred_url": "https://api.github.com/users/habaneraa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/habaneraa/subscriptions", "organizations_url": "https://api.github.com/users/habaneraa/orgs", "repos_url": "https://api.github.com/users/habaneraa/repos", "events_url": "https://api.github.com/users/habaneraa/events{/privacy}", "received_events_url": "https://api.github.com/users/habaneraa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@habaneraa You're right! This seems to be a mistake in the docs. I will open a PR to fix this." ]
1,683
1,683
1,683
NONE
null
### System Info Maybe a small mistake in the documentation. Here: https://github.com/huggingface/transformers/blob/ef42c2c487260c2a0111fa9d17f2507d84ddedea/docs/source/en/model_doc/llama.mdx?plain=1#L17 The title "LLaMA: Open and Efficient Foundation Language Models" is repeated. Does it mean [this arxiv link](https://arxiv.org/pdf/2302.13971.pdf)? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Open docs https://huggingface.co/docs/transformers/main/en/model_doc/llama#overview 2. See the first line ### Expected behavior It should be a link to https://arxiv.org/pdf/2302.13971.pdf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23186/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23185/comments
https://api.github.com/repos/huggingface/transformers/issues/23185/events
https://github.com/huggingface/transformers/issues/23185
1,698,848,192
I_kwDOCUB6oc5lQl3A
23,185
Code in the documentation on fine-tuning mBART-50 for machine translation doesn't seem to perform a backward pass
{ "login": "Franck-Dernoncourt", "id": 15331, "node_id": "MDQ6VXNlcjE1MzMx", "avatar_url": "https://avatars.githubusercontent.com/u/15331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Franck-Dernoncourt", "html_url": "https://github.com/Franck-Dernoncourt", "followers_url": "https://api.github.com/users/Franck-Dernoncourt/followers", "following_url": "https://api.github.com/users/Franck-Dernoncourt/following{/other_user}", "gists_url": "https://api.github.com/users/Franck-Dernoncourt/gists{/gist_id}", "starred_url": "https://api.github.com/users/Franck-Dernoncourt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Franck-Dernoncourt/subscriptions", "organizations_url": "https://api.github.com/users/Franck-Dernoncourt/orgs", "repos_url": "https://api.github.com/users/Franck-Dernoncourt/repos", "events_url": "https://api.github.com/users/Franck-Dernoncourt/events{/privacy}", "received_events_url": "https://api.github.com/users/Franck-Dernoncourt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The documentation shows how to do a forward pass of the model, it's not an example of full training. Those examples can be found in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples/pytorch). You do need to define an optimizer, call `loss.backward()` etc. for a full training loop, like for any other PyTorch model.", "Thanks @sgugger ! The documentation incorrectly claims it performs fine-tuning. That should be fixed, either by removing the fine-tuning claim or preferably by actually providing a fine-tuning code example. ", "I'm sorry but I don't see the words fine-tuning in the link you provided. Could you point out to me where the claim is? I see a snippet of code showing how to use the model for training, which you should plugin your actual training loop with your own data.", "Thanks @sgugger , good idea, I should have provided the quote. Here is the quote from the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50):\r\n\r\n![image](https://user-images.githubusercontent.com/15331/236707560-6123ba1a-53e1-4f72-9853-cd2264823ff2.png)\r\n\r\nThe presence of \"Training of MBart-50\" and \"Supervised training\" heavily implies that the code trains the model.\r\n", "One could add the following to fine-tune mBART-50:\r\n\r\n\r\n```\r\nfrom transformers.optimization import AdamW\r\n\r\n# Set up the optimizer and training settings\r\noptimizer = AdamW(model.parameters(), lr=1e-4)\r\nmodel.train()\r\n\r\nprint('Fine-tuning started')\r\nfor i in range(100):\r\n optimizer.zero_grad()\r\n output = model(**model_inputs, labels=labels) # forward pass\r\n loss = output.loss\r\n loss.backward()\r\n optimizer.step()\r\nprint('Fine-tuning ended')\r\n```\r\n\r\nFull code:\r\n\r\n```\r\nfrom transformers import MBartForConditionalGeneration, MBart50TokenizerFast\r\nfrom transformers.optimization import AdamW\r\nimport os\r\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\r\n\r\n\r\nprint('Model loading started')\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50\")\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50\", src_lang=\"fr_XX\", tgt_lang=\"en_XX\")\r\nprint('Model loading done')\r\n\r\nsrc_text = \" billozarion \"\r\ntgt_text = \" plorizatizzzon \"\r\n\r\nmodel_inputs = tokenizer(src_text, return_tensors=\"pt\")\r\nwith tokenizer.as_target_tokenizer():\r\n labels = tokenizer(tgt_text, return_tensors=\"pt\").input_ids\r\n\r\n# Set up the optimizer and training settings\r\noptimizer = AdamW(model.parameters(), lr=1e-4)\r\nmodel.train()\r\n\r\nprint('Fine-tuning started')\r\nfor i in range(100):\r\n optimizer.zero_grad()\r\n output = model(**model_inputs, labels=labels) # forward pass\r\n loss = output.loss\r\n loss.backward()\r\n optimizer.step()\r\nprint('Fine-tuning ended')\r\n \r\n# translate French to English\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50-many-to-many-mmt\")\r\ntokenizer.src_lang = \"fr_XX\"\r\narticle_fr = src_text\r\nencoded_fr = tokenizer(article_fr, return_tensors=\"pt\")\r\ngenerated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id[\"en_XX\"])\r\ntranslation =tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\r\nprint(translation)\r\n```\r\n\r\nIt outputs the correct made up translation \"plorizatizzzon\". 
I'd suggest that the code in the documentation be updated accordingly to truly perform fine-tuning.\r\n\r\nhttps://github.com/huggingface/transformers/tree/main/examples/pytorch/translation contains two more advanced scripts to fine-tune mBART (thanks [sgugger](https://github.com/sgugger) for [pointing](https://github.com/huggingface/transformers/issues/23185#issuecomment-1537564079) me to it).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
(Assignee: @patrickvonplaten) I try to fine-tune mBART-50 ([paper](https://arxiv.org/pdf/2008.00401), [pre-trained model on Hugging Face](https://huggingface.co/facebook/mbart-large-50)) for machine translation in the transformers Python library. To test the fine-tuning, I am trying to simply teach mBART-50 a new word that I made up (the made up French "billozarion", whose made up translation in English is "plorization"). I use the following code. Over 95% of the code is from the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50): ``` from transformers import MBartForConditionalGeneration, MBart50TokenizerFast print('Model loading started') model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="fr_XX", tgt_lang="en_XX") print('Model loading done') src_text = " billozarion " tgt_text = " plorization " model_inputs = tokenizer(src_text, return_tensors="pt") with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors="pt").input_ids print('Fine-tuning started') for i in range(1000): #pass model(**model_inputs, labels=labels) # forward pass print('Fine-tuning ended') # Testing whether the model learned the new word. Translate French to English tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer.src_lang = "fr_XX" article_fr = src_text encoded_fr = tokenizer(article_fr, return_tensors="pt") generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(translation) ``` However, the new word wasn't learned. The output is "billozarion" instead of "plorization". I'm strictly following the Hugging Face documentation, unless I missed something. The `# forward pass` does make me concerned, as one would need a backward pass to update the gradients. Maybe this means that the documentation is incorrect, however I can't test that hypothesis as I don't know how to add the backward pass. Anyway, it seems there is an issue with the documentation on fine-tuning mBART-50 for machine translation: either the comment `# forward pass` is incorrect, or the code itself is missing the backward pass. --- Environment that I used to run the code: Ubuntu 20.04.5 LTS with an NVIDIA A100 40GB GPU (I also tested with an NVIDIA T4 Tensor Core GPU) and CUDA 12.0 with the following conda environment: ``` conda create --name mbart-python39 python=3.9 conda activate mbart-python39 pip install transformers==4.28.1 pip install chardet==5.1.0 pip install sentencepiece==0.1.99 pip install protobuf==3.20 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23185/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23185/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23184/comments
https://api.github.com/repos/huggingface/transformers/issues/23184/events
https://github.com/huggingface/transformers/pull/23184
1,698,790,384
PR_kwDOCUB6oc5P7Jlz
23,184
Update feature_extraction_deit.py
{ "login": "detasar", "id": 19317091, "node_id": "MDQ6VXNlcjE5MzE3MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/19317091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/detasar", "html_url": "https://github.com/detasar", "followers_url": "https://api.github.com/users/detasar/followers", "following_url": "https://api.github.com/users/detasar/following{/other_user}", "gists_url": "https://api.github.com/users/detasar/gists{/gist_id}", "starred_url": "https://api.github.com/users/detasar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/detasar/subscriptions", "organizations_url": "https://api.github.com/users/detasar/orgs", "repos_url": "https://api.github.com/users/detasar/repos", "events_url": "https://api.github.com/users/detasar/events{/privacy}", "received_events_url": "https://api.github.com/users/detasar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Updated copyright year to 2023\r\nRearranged import statements for better readability\r\nReplaced the warnings import with a more specific import\r\nMinor formatting improvements", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
NONE
null
Updated copyright year to 2023 Rearranged import statements for better readability Replaced the warnings import with a more specific import Minor formatting improvements # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23184", "html_url": "https://github.com/huggingface/transformers/pull/23184", "diff_url": "https://github.com/huggingface/transformers/pull/23184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23184.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23183/comments
https://api.github.com/repos/huggingface/transformers/issues/23183/events
https://github.com/huggingface/transformers/issues/23183
1,698,767,792
I_kwDOCUB6oc5lQSOw
23,183
Allow unneeded labels in forward
{ "login": "surya-narayanan", "id": 17240858, "node_id": "MDQ6VXNlcjE3MjQwODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surya-narayanan", "html_url": "https://github.com/surya-narayanan", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### Feature request It would be nice to leave the `labels` part in my batch while passing it through `AutoModel`, and not have it throw the error `AutoModel doesn't expect keyword argument labels` ### Motivation Sometimes I want to leave metadata in my batch; it would be nice for the model to use what it needs and leave the rest for downstream analysis. ### Your contribution Happy to discuss my needs and use case, and a PR if I can :)
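As a stopgap until something like this exists, a small wrapper can filter the batch down to the keyword arguments the model's `forward` actually accepts; this is only a sketch of the idea in the request, not an existing `transformers` feature, and the helper name is made up.

```python
import inspect

def call_with_extras(model, batch):
    # Split the batch into arguments the model accepts and leftover metadata.
    accepted = set(inspect.signature(model.forward).parameters)
    model_inputs = {k: v for k, v in batch.items() if k in accepted}
    extras = {k: v for k, v in batch.items() if k not in accepted}
    # The extras (e.g. labels, ids) stay available for downstream analysis.
    return model(**model_inputs), extras
```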
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23183/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23182/comments
https://api.github.com/repos/huggingface/transformers/issues/23182/events
https://github.com/huggingface/transformers/pull/23182
1,698,686,069
PR_kwDOCUB6oc5P60vK
23,182
Generate: starcoder 🤜 🤛 assisted generation
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Arg. Would love to be able to change those parts directly in the model files in the future, instead of hacking special keys like this :-/\r\n\r\n@sgugger I'm going to add a common test for the cache format, to ensure we don't do this again for future models :) " ]
1,683
1,683
1,683
MEMBER
null
# What does this PR do? Starcoder (GPTBigCode) has a unique cache format, and assisted generation is heavy on cache-related ops. This PR adds the GPTBigCode special case. All slow tests for assisted generation are passing after these changes.
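For reference, assisted generation is exercised through `generate`'s `assistant_model` argument; a rough usage sketch with GPTBigCode checkpoints follows (the checkpoint names are illustrative and are assumed to share a tokenizer/vocabulary).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoints: a large GPTBigCode model plus a much smaller assistant.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
assistant = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, assistant_model=assistant, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```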
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23182", "html_url": "https://github.com/huggingface/transformers/pull/23182", "diff_url": "https://github.com/huggingface/transformers/pull/23182.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23182.patch", "merged_at": 1683539140000 }
https://api.github.com/repos/huggingface/transformers/issues/23181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23181/comments
https://api.github.com/repos/huggingface/transformers/issues/23181/events
https://github.com/huggingface/transformers/issues/23181
1,698,668,220
I_kwDOCUB6oc5lP568
23,181
Add BROS
{ "login": "jinhopark8345", "id": 60179569, "node_id": "MDQ6VXNlcjYwMTc5NTY5", "avatar_url": "https://avatars.githubusercontent.com/u/60179569?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinhopark8345", "html_url": "https://github.com/jinhopark8345", "followers_url": "https://api.github.com/users/jinhopark8345/followers", "following_url": "https://api.github.com/users/jinhopark8345/following{/other_user}", "gists_url": "https://api.github.com/users/jinhopark8345/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinhopark8345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinhopark8345/subscriptions", "organizations_url": "https://api.github.com/users/jinhopark8345/orgs", "repos_url": "https://api.github.com/users/jinhopark8345/repos", "events_url": "https://api.github.com/users/jinhopark8345/events{/privacy}", "received_events_url": "https://api.github.com/users/jinhopark8345/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Would be great if you could add it, it should be very straightforward :)\r\n\r\nI have a few demo notebooks on fine-tuning BROS, let me share them here:\r\n\r\n- https://colab.research.google.com/drive/1PUpiKcSdXjBYM6a300ayC9TaYwzxdMus?usp=sharing\r\n- https://colab.research.google.com/drive/1pTjxx4_46Sk1Zs0W_yzceP_bstmu4vfz?usp=sharing.\r\n\r\nThe first one is fine-tuning BROS on the FUNSD dataset, the second one is the same but with support for creating more training examples using the `return_overflowing_tokens` feature.\r\n\r\nLet me know if you need any help to start contributing, feel free to start opening a draft PR" ]
1,683
1,695
1,695
CONTRIBUTOR
null
### Model description [BROS (BERT Relying On Spatiality)](https://arxiv.org/abs/2108.04539) is a pre-trained multimodal transformer for Document Understanding that uses OCR results of document images (text and bounding box pairs), and I would like to add this model to Hugging Face as my first contribution! ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/clovaai/bros
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23181/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23180/comments
https://api.github.com/repos/huggingface/transformers/issues/23180/events
https://github.com/huggingface/transformers/issues/23180
1,698,613,690
I_kwDOCUB6oc5lPsm6
23,180
Improvements Over `enable_progress_bar` in `transformers.utils.logging`
{ "login": "aress31", "id": 11601622, "node_id": "MDQ6VXNlcjExNjAxNjIy", "avatar_url": "https://avatars.githubusercontent.com/u/11601622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aress31", "html_url": "https://github.com/aress31", "followers_url": "https://api.github.com/users/aress31/followers", "following_url": "https://api.github.com/users/aress31/following{/other_user}", "gists_url": "https://api.github.com/users/aress31/gists{/gist_id}", "starred_url": "https://api.github.com/users/aress31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aress31/subscriptions", "organizations_url": "https://api.github.com/users/aress31/orgs", "repos_url": "https://api.github.com/users/aress31/repos", "events_url": "https://api.github.com/users/aress31/events{/privacy}", "received_events_url": "https://api.github.com/users/aress31/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We do not plan on offering more than the ability to turn progress bars on and off.", "What is the rational behind that?\r\n\r\nNot offering the ability to control at least the stream where the tqdm works is hindering integration with other techs. E.g., a Node server calling a python script as subprocess.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### Feature request Would it be possible to enhance the functionality of the `tqdm` bar utilized in the [logging module](https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/utils/logging.py#L351) to provide greater flexibility and adaptability for a broader range of use cases? ### Motivation At present, it is not feasible to track the model download progress except by employing the `tf_logging.enable_progress_bar` method, which does not support a custom `tqdm`. Moreover, the built-in `tqdm` does not flush output, causing complications in my specific use case where I am invoking my script as a child process of a node server. Consequently, the progress output fails to reach the node process before the download completes, rendering the progress bar futile. Thus, I am requesting for increased flexibility and functionality of the `tqdm` bar utilized in the logging module to cater to a wider array of scenarios. ### Your contribution The fix would be to add a `tqdm` or `tqdm_kwargs` argument to the following method. ```python def enable_progress_bar(): """Enable tqdm progress bar.""" global _tqdm_active _tqdm_active = True hf_hub_utils.enable_progress_bars() ``` Note: I have tried to set the tqdm using `tf_logging.tqdm = new_tqdm` but this seems to impact non download/progressbar type of messages which is odd...
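As a present-day workaround, a `tqdm` subclass that writes to stdout and flushes on every refresh is sketched below; note that there is currently no supported hook in `transformers` for injecting such a class, which is exactly what this request asks for, so the class name and approach are assumptions.

```python
import sys
from tqdm import tqdm

class FlushingTqdm(tqdm):
    """tqdm variant that writes to stdout and flushes after every refresh,
    so a parent process (e.g. a Node server) sees progress immediately."""

    def __init__(self, *args, **kwargs):
        kwargs.setdefault("file", sys.stdout)
        super().__init__(*args, **kwargs)

    def display(self, *args, **kwargs):
        result = super().display(*args, **kwargs)
        sys.stdout.flush()
        return result
```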
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23180/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23179/comments
https://api.github.com/repos/huggingface/transformers/issues/23179/events
https://github.com/huggingface/transformers/issues/23179
1,698,582,244
I_kwDOCUB6oc5lPk7k
23,179
How does the decoder handle pad encodings without encoder attention mask ?
{ "login": "drkhan107", "id": 63947299, "node_id": "MDQ6VXNlcjYzOTQ3Mjk5", "avatar_url": "https://avatars.githubusercontent.com/u/63947299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drkhan107", "html_url": "https://github.com/drkhan107", "followers_url": "https://api.github.com/users/drkhan107/followers", "following_url": "https://api.github.com/users/drkhan107/following{/other_user}", "gists_url": "https://api.github.com/users/drkhan107/gists{/gist_id}", "starred_url": "https://api.github.com/users/drkhan107/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drkhan107/subscriptions", "organizations_url": "https://api.github.com/users/drkhan107/orgs", "repos_url": "https://api.github.com/users/drkhan107/repos", "events_url": "https://api.github.com/users/drkhan107/events{/privacy}", "received_events_url": "https://api.github.com/users/drkhan107/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante @younesbelkada ", "Hey @drkhan107 👋 \r\n\r\nThe only alternative I see is to fine-tune the model using padded data as input and unpadded data as output, so the model learns to ignore the padding (and that may not work).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
I am trying to generate a sequence with the T5 decoder (not using the generate function) by passing only the encodings and decoder input ids. However, my encoding contains pad tokens as well. How is the decoder able to handle those pad tokens without passing the encoder attention mask? Here is my code: input_ids = tokenizer(seq, max_length=1024, padding='max_length', truncation=True, return_tensors="pt") input_ids=input_ids.to(device) encoder_output_vectors = model.base_model.encoder(input_ids['input_ids'], return_dict=True) encodings=encoder_output_vectors.last_hidden_state #recon is my prompt decoder_input_ids = tokenizer("recon:", add_special_tokens=False, return_tensors="pt").input_ids decoder_input_ids=decoder_input_ids.to(device) decoder_hidden_state = None for i in range(max_len): with torch.no_grad(): outputs=model.decoder(input_ids=decoder_input_ids,encoder_hidden_states=encodings) logits=model.lm_head(outputs[0]) next_decoder_input_ids = torch.argmax(logits[:, -1:], axis=-1) decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1) if next_decoder_input_ids == tokenizer.eos_token_id: break rec=tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
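For what it's worth, the T5 decoder stack accepts an `encoder_attention_mask` argument, so the pad positions of the encoder output can be masked out explicitly in cross-attention. Below is only a sketch of the loop above with that mask passed through; it assumes `model`, `tokenizer`, `seq`, `device` and `max_len` from the original snippet and is not a verified fix.

```python
import torch

# Assumes `model`, `tokenizer`, `seq`, `device`, and `max_len` are defined as above.
enc = tokenizer(seq, max_length=1024, padding="max_length", truncation=True, return_tensors="pt").to(device)
encoder_attention_mask = enc["attention_mask"]  # 0 over the pad positions
encodings = model.base_model.encoder(
    enc["input_ids"], attention_mask=encoder_attention_mask, return_dict=True
).last_hidden_state

decoder_input_ids = tokenizer("recon:", add_special_tokens=False, return_tensors="pt").input_ids.to(device)
for _ in range(max_len):
    with torch.no_grad():
        outputs = model.decoder(
            input_ids=decoder_input_ids,
            encoder_hidden_states=encodings,
            encoder_attention_mask=encoder_attention_mask,  # mask pad encodings in cross-attention
        )
    logits = model.lm_head(outputs[0])
    next_id = torch.argmax(logits[:, -1:], axis=-1)
    decoder_input_ids = torch.cat([decoder_input_ids, next_id], axis=-1)
    if next_id.item() == tokenizer.eos_token_id:
        break

rec = tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
```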
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23179/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23178/comments
https://api.github.com/repos/huggingface/transformers/issues/23178/events
https://github.com/huggingface/transformers/pull/23178
1,698,572,907
PR_kwDOCUB6oc5P6d_4
23,178
Update tokenization_llama.py
{ "login": "sjm1992st", "id": 15169452, "node_id": "MDQ6VXNlcjE1MTY5NDUy", "avatar_url": "https://avatars.githubusercontent.com/u/15169452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sjm1992st", "html_url": "https://github.com/sjm1992st", "followers_url": "https://api.github.com/users/sjm1992st/followers", "following_url": "https://api.github.com/users/sjm1992st/following{/other_user}", "gists_url": "https://api.github.com/users/sjm1992st/gists{/gist_id}", "starred_url": "https://api.github.com/users/sjm1992st/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sjm1992st/subscriptions", "organizations_url": "https://api.github.com/users/sjm1992st/orgs", "repos_url": "https://api.github.com/users/sjm1992st/repos", "events_url": "https://api.github.com/users/sjm1992st/events{/privacy}", "received_events_url": "https://api.github.com/users/sjm1992st/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23178). All of your documentation changes will be reflected on that endpoint.", "cc @gante since Arthur is on holiday.", "Hey @sjm1992st -- the issue in #23175 is unrelated to the tokenizer (see my comment there).\r\n\r\nAs such, without further context, I won't accept this PR :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23175. For reference, see line 62 of https://github.com/facebookresearch/llama/blob/main/llama/generation.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23178/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23178", "html_url": "https://github.com/huggingface/transformers/pull/23178", "diff_url": "https://github.com/huggingface/transformers/pull/23178.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23178.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23177/comments
https://api.github.com/repos/huggingface/transformers/issues/23177/events
https://github.com/huggingface/transformers/issues/23177
1,698,554,346
I_kwDOCUB6oc5lPeHq
23,177
Can you write code that trains BERT with MLM and next sentence prediction at the same time?
{ "login": "ingale726", "id": 42893941, "node_id": "MDQ6VXNlcjQyODkzOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/42893941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ingale726", "html_url": "https://github.com/ingale726", "followers_url": "https://api.github.com/users/ingale726/followers", "following_url": "https://api.github.com/users/ingale726/following{/other_user}", "gists_url": "https://api.github.com/users/ingale726/gists{/gist_id}", "starred_url": "https://api.github.com/users/ingale726/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ingale726/subscriptions", "organizations_url": "https://api.github.com/users/ingale726/orgs", "repos_url": "https://api.github.com/users/ingale726/repos", "events_url": "https://api.github.com/users/ingale726/events{/privacy}", "received_events_url": "https://api.github.com/users/ingale726/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @xiao12mm, I belive the NSP loss isn't available for model training directly in huggingface. This [thread](https://discuss.huggingface.co/t/continual-pre-training-from-an-initial-checkpoint-with-mlm-and-nsp/6869) provide a script for NSP training.\r\n\r\nAbout this - \"training with both is better than training with mlm alone\", there has been specific research done to verify the claim. One of the conclusions from \"RoBERTa: A Robustly Optimized BERT Pretraining Approach\" paper is that - \"removing the NSP loss matches or slightly improves downstream task performance\". ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### Feature request Or is there training code for this already in place? ### Motivation I've learned that training with both objectives is better than training with MLM alone, by which I mean the quality of the generated vector features. ### Your contribution Not yet
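For reference, `BertForPreTraining` already exposes both objectives in a single forward pass; a minimal toy sketch (single hand-made pair, no data collator, checkpoint name only as an example) might look like this:

```python
import torch
from transformers import BertTokenizerFast, BertForPreTraining

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# One toy sentence pair; sentence B really follows sentence A, so the NSP label is 0.
enc = tokenizer("The cat sat on the mat.", "Then it fell asleep.", return_tensors="pt")

# MLM labels: -100 everywhere except the single position we mask below.
mlm_labels = torch.full_like(enc["input_ids"], -100)
mlm_labels[0, 4] = enc["input_ids"][0, 4]
enc["input_ids"][0, 4] = tokenizer.mask_token_id

nsp_labels = torch.tensor([0])  # 0 = sentence B is the real next sentence

out = model(**enc, labels=mlm_labels, next_sentence_label=nsp_labels)
out.loss.backward()  # loss = masked-LM loss + next-sentence-prediction loss
```

In a real run you would of course build masked batches with a data collator and plug this into an optimizer loop rather than a single hand-masked pair.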
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23176/comments
https://api.github.com/repos/huggingface/transformers/issues/23176/events
https://github.com/huggingface/transformers/issues/23176
1,698,474,995
I_kwDOCUB6oc5lPKvz
23,176
I want to use `from_pretrained` to read a '.safetensors' model file. What should I do?
{ "login": "Yu-xm", "id": 72803279, "node_id": "MDQ6VXNlcjcyODAzMjc5", "avatar_url": "https://avatars.githubusercontent.com/u/72803279?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yu-xm", "html_url": "https://github.com/Yu-xm", "followers_url": "https://api.github.com/users/Yu-xm/followers", "following_url": "https://api.github.com/users/Yu-xm/following{/other_user}", "gists_url": "https://api.github.com/users/Yu-xm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yu-xm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yu-xm/subscriptions", "organizations_url": "https://api.github.com/users/Yu-xm/orgs", "repos_url": "https://api.github.com/users/Yu-xm/repos", "events_url": "https://api.github.com/users/Yu-xm/events{/privacy}", "received_events_url": "https://api.github.com/users/Yu-xm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`AutoModelForCausalLM.from_pretrained(llama_path)` is enough.", "> `AutoModelForCausalLM.from_pretrained(llama_path)` is enough.\r\n\r\nI used your method and got an error:\r\nOSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory pretrain_ models/llama_7b. models/llama_7b.", "Then your comment above was wrong:\r\n>llama_path include:\r\n>model.safetensors, config.json and other config files.\r\n\r\nIf you have the `model.safetensors` file, `from_pretrained` will succeed. Unles you don't have `safetensors` installed in which case you shouldn't be able to have that file converted from the conversion script, but it's easily fixable with `pip install safetensors`.", "> 那么你上面的评论是错误的:\r\n> \r\n> > llama_path 包括:\r\n> > model.safetensors、config.json 等配置文件。\r\n> \r\n> 如果你有这个`model.safetensors`文件,`from_pretrained`就会成功。除非你没有`safetensors`安装,在这种情况下你不应该能够从转换脚本转换该文件,但它很容易用`pip install safetensors`.\r\n\r\nI install safetensors and use following code:\r\nAutoModelForCausalLM.from_pretrained(llama_path)\r\nand then, I got a new error: AttributeError: 'NoneType' object has no attribute 'get' ?\r\nIs it the reason for my Transformers version? I am using pip install git+ https://github.com/huggingface/transformers The method of downloading is not directly 'pip install transformers'. Because when I directly 'pip install transformers', I have problems with from transformers import LlamaForCausalLM, LlamaTokenizer.\r\n\r\n", "> I'm sure the path contain the model.safetensors file\r\n\r\n", "Same Issue Here.\r\n\r\nI Want to Use The Model \"wojtab/llava-7b-v0-4bit-128g\" using from_pretrained()", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Got a Soution!\r\n\r\nCheckout AUTOGPTQ.", "@TheFaheem Sorry, may I know how to solve this problem?", "> @TheFaheem Sorry, may I know how to solve this problem?\r\n\r\nCheck it out Here => https://github.com/PanQiWei/AutoGPTQ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-6.2.0-20-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction My code: llama_config = AutoConfig.from_pretrained(llama_path + '/config.json') llama = AutoModelForCausalLM.from_pretrained(model_bytes, config = llama_config) llama_path include: model.safetensors, config.json and other config files. ### Expected behavior I want to use 'from_ Pretrained' to read the '.safetensors' model file. What should I do?
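For reference, a minimal sketch of the intended usage once the `safetensors` package is installed (`pip install safetensors`); the local path is taken from the error message in the comments above and is otherwise an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

llama_path = "pretrain_models/llama_7b"  # assumed local directory containing config.json + model.safetensors

model = AutoModelForCausalLM.from_pretrained(llama_path)  # picks up model.safetensors automatically
tokenizer = AutoTokenizer.from_pretrained(llama_path)
```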
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23175/comments
https://api.github.com/repos/huggingface/transformers/issues/23175/events
https://github.com/huggingface/transformers/issues/23175
1,698,312,483
I_kwDOCUB6oc5lOjEj
23,175
When using model.generate, it does not stop at eos_token, but instead continues until the maximum length.
{ "login": "bestpredicts", "id": 12403152, "node_id": "MDQ6VXNlcjEyNDAzMTUy", "avatar_url": "https://avatars.githubusercontent.com/u/12403152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bestpredicts", "html_url": "https://github.com/bestpredicts", "followers_url": "https://api.github.com/users/bestpredicts/followers", "following_url": "https://api.github.com/users/bestpredicts/following{/other_user}", "gists_url": "https://api.github.com/users/bestpredicts/gists{/gist_id}", "starred_url": "https://api.github.com/users/bestpredicts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bestpredicts/subscriptions", "organizations_url": "https://api.github.com/users/bestpredicts/orgs", "repos_url": "https://api.github.com/users/bestpredicts/repos", "events_url": "https://api.github.com/users/bestpredicts/events{/privacy}", "received_events_url": "https://api.github.com/users/bestpredicts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @bestpredicts 👋 \r\n\r\nI agree that `</s>` should not be generated. However, I need a stand-alone short reproducer to understand what's truly going on. We have many Llama checkpoints that are not compatible with `transformers`.", "@bestpredicts I have same question, do you solved it?", "> Hey @bestpredicts 👋\r\n> \r\n> I agree that `</s>` should not be generated. However, I need a stand-alone short reproducer to understand what's truly going on. We have many Llama checkpoints that are not compatible with `transformers`.\r\n\r\nme too,same question.", "You problem can come from the `model.eos_token_id` that is not the correct one (wild guess) but we need a minimal reproducer to help you. ", "Edit: have deleted this comment, because think the issue I was seeing was just the EOS was outputted as very unlikely by the models for some reason. (Check the history for the code snippet.)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,688
1,688
NONE
null
### System Info ubuntu20.04 transformers==4.29.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using my own model, inference is run with: ```python output_texts = model.generate( input_ids=input_ids, attention_mask=attention_mask, pad_token_id= tokenizer.eos_token_id, eos_token_id= tokenizer.eos_token_id, max_new_tokens=500, do_sample=False, top_k=30, top_p=0.85, temperature=0.3, repetition_penalty=1.2) ``` The output looks like: ```text 因此需要进行额外的优化。</s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s> ``` ### Expected behavior Generation should stop at the eos token, but it does not.
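One quick diagnostic, sketched under the assumption that `model` and `tokenizer` are the fine-tuned checkpoint from the report, is to check that the id passed as `eos_token_id` matches the id the model actually produces for `</s>`:

```python
# Compare the id the generation loop is told to stop on with the model's own config.
print("tokenizer eos:", tokenizer.eos_token, tokenizer.eos_token_id)
print("config eos:", model.config.eos_token_id)
print("generation_config eos:", model.generation_config.eos_token_id)
print("id of '</s>':", tokenizer.convert_tokens_to_ids("</s>"))
```

If these disagree, the loop keeps going because it never sees the id it was told to watch for; this is only a diagnostic sketch, not a confirmed root cause.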
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23175/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23174/comments
https://api.github.com/repos/huggingface/transformers/issues/23174/events
https://github.com/huggingface/transformers/issues/23174
1,698,100,997
I_kwDOCUB6oc5lNvcF
23,174
MPT
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Can this model be implemented without the need for executing custom python files? I believe it's a huge security risk to allow these models to run their own python code. Yes it is up to the users to verify it's safety and take responsibility for their own actions; yet in the real world, most people won't scan through the python files that come with the models and the next majority might not understand code enough to verify it's safety.\r\n", "Disregarding HF, does anyone have inference code to run the MPT with FasterTransformer?", "We are currently talking about this integration with mosaic ML, hope to have updates on this soon! ", "> Disregarding HF, does anyone have inference code to run the MPT with FasterTransformer?\r\n\r\nHere is a script for converting a HuggingFace MPT checkpoint to FasterTransformer https://github.com/mosaicml/llm-foundry/blob/main/scripts/inference/convert_hf_mpt_to_ft.py", "> We are currently talking about this integration with mosaic ML, hope to have updates on this soon!\r\n\r\n@ArthurZucker Curious to know how the talks with MosaicML are going. :-)", "Probably need to update the ticket description citing MPT-30B as well :-)", "> We are currently talking about this integration with mosaic ML, hope to have updates on this soon!\r\n\r\n@ArthurZucker Any updates on this issue?", "Yes! We didn't receive a proper answer, I'll be taking this over! Will open a pr by tomorrow! 😉 " ]
1,683
1,690
1,690
CONTRIBUTOR
null
### Model description New LLM from MosaicML, 7B parameters. See: https://www.mosaicml.com/blog/mpt-7b ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://huggingface.co/mosaicml/mpt-7b/tree/main The model is already implemented in a HF/T compatible way, but has multiple source files for its model implementation, some components that aren't used in this current model, and most importantly, dependencies that aren't normally included in HF (e.g. einops, flash_attn). Do the HF folks have a view on whether we would want to add those dependencies, or implement a vanilla version based only on existing requirements (in which case, it would arguably be easier to modify an existing LM implementation instead, rather than use MosaicML's implementation)?
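Until a native port lands, the checkpoint can already be loaded through the Hub's remote-code path, which is exactly the dependency/security trade-off discussed here; a sketch follows (it assumes `einops`, and optionally `flash_attn`, are installed, and that MPT-7B reuses the GPT-NeoX-20B tokenizer as documented on its model card).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True executes the modelling files shipped with the checkpoint.
model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # tokenizer assumed per the model card
```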
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23174/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23174/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23173/comments
https://api.github.com/repos/huggingface/transformers/issues/23173/events
https://github.com/huggingface/transformers/pull/23173
1,697,847,018
PR_kwDOCUB6oc5P4CKj
23,173
Add FlaxWhisperForAudioClassification model
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sanchit-gandhi ", "The test failures are appearing on this one. Let's fix them and re-merge!", "We need to make two changes following the updates in #22954! First, we need to assign the attribute `gradient_checkpointing` to the class `FlaxWhisperForAudioClassificationModule`, similar to what we do for `FlaxWhisperForConditionalGeneration`:\r\nhttps://github.com/huggingface/transformers/blob/a5741d7cb59f8a81d1f5fc7a6b106056d34f9969/src/transformers/models/whisper/modeling_flax_whisper.py#L1176\r\nWe then need to forward `self.gradient_checkpointing` to the encoder:\r\n```diff\r\n- self.encoder = FlaxWhisperEncoder(config=self.config, dtype=self.dtype)\r\n+ self.encoder = FlaxWhisperEncoder(config=self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing)\r\n```\r\nThis will facilitate gradient checkpointing for the module!", "@sgugger @sanchit-gandhi Done, all tests pass !" ]
1,683
1,686
1,683
CONTRIBUTOR
null
Fixes #21779
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23173", "html_url": "https://github.com/huggingface/transformers/pull/23173", "diff_url": "https://github.com/huggingface/transformers/pull/23173.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23173.patch", "merged_at": 1683307427000 }
https://api.github.com/repos/huggingface/transformers/issues/23172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23172/comments
https://api.github.com/repos/huggingface/transformers/issues/23172/events
https://github.com/huggingface/transformers/pull/23172
1,697,763,612
PR_kwDOCUB6oc5P3wKK
23,172
Change summarization model
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
MEMBER
null
Change the summarization model to a better and smaller one. The `philschmid/flan-t5-base-samsum` model gets +6 ROUGE points on the samsum dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23172/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23172", "html_url": "https://github.com/huggingface/transformers/pull/23172", "diff_url": "https://github.com/huggingface/transformers/pull/23172.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23172.patch", "merged_at": 1683303428000 }
https://api.github.com/repos/huggingface/transformers/issues/23171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23171/comments
https://api.github.com/repos/huggingface/transformers/issues/23171/events
https://github.com/huggingface/transformers/issues/23171
1,697,754,366
I_kwDOCUB6oc5lMaz-
23,171
Support `logging_ratio`, `save_ratio`, and `eval_ratio` (like for `warmup_ratio`)
{ "login": "konstantinjdobler", "id": 28780372, "node_id": "MDQ6VXNlcjI4NzgwMzcy", "avatar_url": "https://avatars.githubusercontent.com/u/28780372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konstantinjdobler", "html_url": "https://github.com/konstantinjdobler", "followers_url": "https://api.github.com/users/konstantinjdobler/followers", "following_url": "https://api.github.com/users/konstantinjdobler/following{/other_user}", "gists_url": "https://api.github.com/users/konstantinjdobler/gists{/gist_id}", "starred_url": "https://api.github.com/users/konstantinjdobler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konstantinjdobler/subscriptions", "organizations_url": "https://api.github.com/users/konstantinjdobler/orgs", "repos_url": "https://api.github.com/users/konstantinjdobler/repos", "events_url": "https://api.github.com/users/konstantinjdobler/events{/privacy}", "received_events_url": "https://api.github.com/users/konstantinjdobler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We already have 96 training arguments though, and that would make three more of all users to learn :-/", "Another option would be to allow `float` inputs for `logging_steps`, `save_steps`, `eval_steps` and interpret all inputs `<1.0` as a ratio (and throw an error if inputs `>1.0` are not integers). \r\n\r\nBut yes there is a tradeoff with too much complexity", "I would prefer that solution actually, even if the naming is not perfect.", "Going over the code a bit, it seems like we would have to wait until after this if statement in `_inner_training_loop` to set the correct values based on the `max_steps` that gets calculated there + some additional guards in the `__post_init__` of the `TrainingArguments`. \r\nhttps://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L1683-L1711\r\nAt first glance, it looks like the setting of when things get logged/saved/evaluated in `DefaultFlowCallback` should work out-of-the-box with this change.\r\nI'm willing to contribute the changes once I find some time, does the general plan sound reasonable?", "Yes it does. I'll be looking forward to your PR!", "Can someone clarify if this ratio means the % of training steps **per epoch** or the total training steps? If the latter, how do we know preemptively the total number of epochs (or total number of training steps) that the model is going to train for?" ]
1,683
1,707
1,683
CONTRIBUTOR
null
### Feature request I would love if `TrainingArguments` and the Huggingface `Trainer` would support `logging_ratio`, `save_ratio`, and `eval_ratio` arguments (complementing `logging_steps`, `save_steps`, and `eval_steps`). If the `*_ratio` argument is set to e.g. `0.1`, logging/saving/eval would be done every `0.1 * total_training_steps`. This is already done for `warmup_ratio` and `warmup_steps`. ### Motivation When dealing with many different tasks and datasets, it can be frustrating to have to calculate different appropriate `logging_steps` etc. for each individual dataset. This proposal would enable a unified, simple and concise way to solve this problem. ### Your contribution I realize this might not be trivial to fully integrate, but hopefully, we can take `warmup_steps` and `warmup_ratio` as a reference. Depending on how deep the required changes are, I can also submit a PR (with some pointers on what to look out for).
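As a rough illustration of the float-as-ratio idea discussed in this request (a minimal sketch under stated assumptions, not the actual Trainer implementation; the helper name `resolve_interval` is hypothetical):

```python
import math

def resolve_interval(value: float, max_steps: int) -> int:
    """Interpret `value` as a ratio of `max_steps` if it is < 1.0, otherwise as an absolute step count."""
    if value < 1.0:
        # e.g. value=0.1 with max_steps=5000 -> log/save/eval every 500 steps
        return max(1, math.ceil(value * max_steps))
    if value != int(value):
        raise ValueError(f"Values > 1.0 must be integers, got {value}")
    return int(value)

# Hypothetical usage once max_steps is known inside the training loop:
logging_steps = resolve_interval(0.1, max_steps=5000)   # -> 500
save_steps = resolve_interval(1000, max_steps=5000)     # -> 1000
```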
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23170/comments
https://api.github.com/repos/huggingface/transformers/issues/23170/events
https://github.com/huggingface/transformers/issues/23170
1,697,620,334
I_kwDOCUB6oc5lL6Fu
23,170
Gradient Checkpointing Fails with frozen parameters
{ "login": "jamesharrisivi", "id": 132676609, "node_id": "U_kgDOB-h8AQ", "avatar_url": "https://avatars.githubusercontent.com/u/132676609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesharrisivi", "html_url": "https://github.com/jamesharrisivi", "followers_url": "https://api.github.com/users/jamesharrisivi/followers", "following_url": "https://api.github.com/users/jamesharrisivi/following{/other_user}", "gists_url": "https://api.github.com/users/jamesharrisivi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesharrisivi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesharrisivi/subscriptions", "organizations_url": "https://api.github.com/users/jamesharrisivi/orgs", "repos_url": "https://api.github.com/users/jamesharrisivi/repos", "events_url": "https://api.github.com/users/jamesharrisivi/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesharrisivi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada and @pacman100 ", "hi @jamesharrisivi \r\nFor that you need to make sure that the input being passed to the peft model has `requires_grad` set to `True`\r\nThis is a duplicate of https://discuss.huggingface.co/t/peft-lora-gpt-neox-backward-pass-failing/35641 \r\nCan you try to add:\r\n```python\r\nif hasattr(model, \"enable_input_require_grads\"):\r\n model.enable_input_require_grads()\r\nelse:\r\n def make_inputs_require_grad(module, input, output):\r\n output.requires_grad_(True)\r\n\r\n model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)\r\n```\r\nsomewhere in your training script, before the call to `get_peft_model` ?\r\n", "I agree though we can do it directly when creating the peft model, I propose to fix it in https://github.com/huggingface/peft/pull/404", "@younesbelkada Is this fix also intended to be compatible with Deepspeed?", "I think it would work with DS out of the box but not sure, did you tried already on your end @bradfox2 ?", "> I think it would work with DS out of the box but not sure, did you tried already on your end @bradfox2 ?\r\n\r\nFrom what I saw, get_input_embeddings() wasn't defined on the DS model object.", "> hi @jamesharrisivi For that you need to make sure that the input being passed to the peft model has `requires_grad` set to `True` This is a duplicate of [discuss.huggingface.co/t/peft-lora-gpt-neox-backward-pass-failing/35641](https://discuss.huggingface.co/t/peft-lora-gpt-neox-backward-pass-failing/35641) Can you try to add:\r\n> \r\n> ```python\r\n> if hasattr(model, \"enable_input_require_grads\"):\r\n> model.enable_input_require_grads()\r\n> else:\r\n> def make_inputs_require_grad(module, input, output):\r\n> output.requires_grad_(True)\r\n> \r\n> model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)\r\n> ```\r\n> \r\n> somewhere in your training script, before the call to `get_peft_model` ?\r\n\r\nHi:\r\nmay I ask if this is the choice for re-entrant=True by default now? BTW, the input for embedding is INT tensors, so why input.requires_grad_(True) worked here?", "> hi @jamesharrisivi For that you need to make sure that the input being passed to the peft model has `requires_grad` set to `True` This is a duplicate of https://discuss.huggingface.co/t/peft-lora-gpt-neox-backward-pass-failing/35641 Can you try to add:\r\n> \r\n> ```python\r\n> if hasattr(model, \"enable_input_require_grads\"):\r\n> model.enable_input_require_grads()\r\n> else:\r\n> def make_inputs_require_grad(module, input, output):\r\n> output.requires_grad_(True)\r\n> \r\n> model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)\r\n> ```\r\n> \r\n> somewhere in your training script, before the call to `get_peft_model` ?\r\n\r\nHow can we back-prop the gradients when we use `model.generate()` efficiently?\r\n\r\n", "Hey @SuperBruceJia please refrain from asking the same question on each and every single issue that is unrelated 😅 The forum is the best place to discuss this 🤗 ", "> How can we back-prop the gradients when we use model.generate() efficiently?\r\n\r\nCurrently you can't back-prop when using generate as that method uses `torch.no_grad()` context manager. You can overwrite the method and make sure it is not using that context manager", "> > How can we back-prop the gradients when we use model.generate() efficiently?\r\n> \r\n> Currently you can't back-prop when using generate as that method uses `torch.no_grad()` context manager. 
You can overwrite the method and make sure it is not using that context manager\r\n\r\n@younesbelkada @ArthurZucker After commenting the `torch.no_grad()`, a huge amount of GPU memory is needed to do the `model.generate()` and a `CUDA out of memory` error would be easily triggered. Do you have any suggestions on this? \r\n\r\nThank you very much, and have a nice day!\r\n\r\nBest regards,\r\n\r\nShuyue\r\nDec. 4th, 2023" ]
1,683
1,701
1,685
NONE
null
### System Info The PEFT methods freeze the bulk of the transformer, apart from an external module. When I enable gradient checkpointing and train with these models or even if I simply freeze an embedding layer of a normal model, training breaks. So this problem is not specific to PEFT: gradient_checkpointing + frozen first parameter = Error But if I do ``` for n, param in model.named_parameters(): param.requires_grad = True break ``` it trains successfully. So it seems like there is a check if the first parameter has a gradient. Ideally, I would not have to set the first parameter (embedding) to True, as I want the whole model including embeddings frozen. ``` warnings.warn("None of the inputs have requires_grad=True. Gradients will be None") /admin/miniconda3/envs/peft/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None warnings.warn("None of the inputs have requires_grad=True. Gradients will be None") ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /admin/peft/model/model_training/trainer.py:48 │ │ 0 in <module> │ │ │ │ │ │ │ │ /admin/peft/model/model_training/trainer.py:47 │ │ 4 in main │ │ │ │ 471 │ │ compute_metrics=partial(compute_metrics, metrics=metrics, preprocess_fns=preproc │ │ 472 │ │ preprocess_logits_for_metrics=preprocess_logits_for_metrics, │ │ 473 │ ) │ │ ❱ 474 │ trainer.train(resume_from_checkpoint=training_conf.resume_from_checkpoint) │ │ 475 │ trainer.save_model() │ │ 476 │ tokenizer.save_pretrained(output_dir) │ │ 477 │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/ │ │ trainer.py:1639 in train │ │ │ │ 1636 │ │ inner_training_loop = find_executable_batch_size( │ │ 1637 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1638 │ │ ) │ │ ❱ 1639 │ │ return inner_training_loop( │ │ 1640 │ │ │ args=args, │ │ 1641 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1642 │ │ │ trial=trial, │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/ │ │ trainer.py:1906 in _inner_training_loop │ │ │ │ 1903 │ │ │ │ │ with model.no_sync(): │ │ 1904 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1905 │ │ │ │ else: │ │ ❱ 1906 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1907 │ │ │ │ │ │ 1908 │ │ │ │ if ( │ │ 1909 │ │ │ │ │ args.logging_nan_inf_filter │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/transformers/ │ │ trainer.py:2668 in training_step │ │ │ │ 2665 │ │ │ │ scaled_loss.backward() │ │ 2666 │ │ elif self.deepspeed: │ │ 2667 │ │ │ # loss gets scaled under gradient_accumulation_steps in deepspeed │ │ ❱ 2668 │ │ │ loss = self.deepspeed.backward(loss) │ │ 2669 │ │ else: │ │ 2670 │ │ │ loss.backward() │ │ 2671 │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/uti │ │ ls/nvtx.py:11 in wrapped_fn │ │ │ │ 8 │ function call.""" │ │ 9 │ def wrapped_fn(*args, **kwargs): │ │ 10 │ │ get_accelerator().range_push(func.__qualname__) │ │ ❱ 11 │ │ ret_val = func(*args, **kwargs) │ │ 12 │ │ get_accelerator().range_pop() │ │ 13 │ │ return ret_val │ │ 14 │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/run │ │ time/engine.py:1974 in backward │ │ │ │ 1971 │ │ if self.zero_optimization(): │ │ 1972 │ │ │ self.optimizer.is_gradient_accumulation_boundary = self.is_gradient_accumula │ │ 1973 │ │ │ ) │ │ ❱ 1974 │ │ │ self.optimizer.backward(loss, 
retain_graph=retain_graph) │ │ 1975 │ │ elif self.amp_enabled(): │ │ 1976 │ │ │ # AMP requires delaying unscale when inside gradient accumulation boundaries │ │ 1977 │ │ │ # https://nvidia.github.io/apex/advanced.html#gradient-accumulation-across-i │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/run │ │ time/zero/stage_1_and_2.py:2028 in backward │ │ │ │ 2025 │ │ │ scaled_loss = self.external_loss_scale * loss │ │ 2026 │ │ │ scaled_loss.backward() │ │ 2027 │ │ else: │ │ ❱ 2028 │ │ │ self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) │ │ 2029 │ │ │ 2030 │ def check_overflow(self, partition_gradients=True): │ │ 2031 │ │ self._check_overflow(partition_gradients) │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/deepspeed/run │ │ time/fp16/loss_scaler.py:54 in backward │ │ │ │ 51 │ │ │ 52 │ def backward(self, loss, retain_graph=False): │ │ 53 │ │ scaled_loss = loss * self.loss_scale │ │ ❱ 54 │ │ scaled_loss.backward(retain_graph=retain_graph) │ │ 55 │ │ 56 │ │ 57 class LossScaler(LossScalerBase): │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/torch/_tensor │ │ .py:488 in backward │ │ │ │ 485 │ │ │ │ create_graph=create_graph, │ │ 486 │ │ │ │ inputs=inputs, │ │ 487 │ │ │ ) │ │ ❱ 488 │ │ torch.autograd.backward( │ │ 489 │ │ │ self, gradient, retain_graph, create_graph, inputs=inputs │ │ 490 │ │ ) │ │ 491 │ │ │ │ /admin//miniconda3/envs/peft/lib/python3.10/site-packages/torch/autogra │ │ d/__init__.py:197 in backward │ │ │ │ 194 │ # The reason we repeat same the comment below is that │ │ 195 │ # some Python versions print out the first line of a multi-line function │ │ 196 │ # calls in the traceback and some print out the last line │ │ ❱ 197 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │ │ 198 │ │ tensors, grad_tensors_, retain_graph, create_graph, inputs, │ │ 199 │ │ allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to ru │ │ 200 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModel from peft import LoraConfig, get_peft_model model = AutoModel.from_pretrained('decapoda-research/llama-7b-hf') config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) and then proceed to train with gradient_checkpointing enabled. ### Expected behavior Gradient checkpointing shouldn't affect whether a subset of parameters are frozen. As PEFT models are increasingly popular as well as gradient_checkpointing it makes sense to get to the bottom of this bug.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23169/comments
https://api.github.com/repos/huggingface/transformers/issues/23169/events
https://github.com/huggingface/transformers/issues/23169
1,697,602,423
I_kwDOCUB6oc5lL1t3
23,169
Can't get deepspeed param
{ "login": "www516717402", "id": 30460509, "node_id": "MDQ6VXNlcjMwNDYwNTA5", "avatar_url": "https://avatars.githubusercontent.com/u/30460509?v=4", "gravatar_id": "", "url": "https://api.github.com/users/www516717402", "html_url": "https://github.com/www516717402", "followers_url": "https://api.github.com/users/www516717402/followers", "following_url": "https://api.github.com/users/www516717402/following{/other_user}", "gists_url": "https://api.github.com/users/www516717402/gists{/gist_id}", "starred_url": "https://api.github.com/users/www516717402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/www516717402/subscriptions", "organizations_url": "https://api.github.com/users/www516717402/orgs", "repos_url": "https://api.github.com/users/www516717402/repos", "events_url": "https://api.github.com/users/www516717402/events{/privacy}", "received_events_url": "https://api.github.com/users/www516717402/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a wrong repo. For Accelerate please file Issues under https://github.com/huggingface/accelerate/issues" ]
1,683
1,683
1,683
NONE
null
### System Info latest version 2023.5.5 ### Who can help? @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use the Accelerate and deepspeed plugin with the transformers Trainer class. I get config.yaml from `accelerate config` as follows: ``` yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_config_file: /xxx/dp_zero2.json zero3_init_flag: false distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` `/xxx/dp_zero2.json` is the config, don't care. However, I get deepspeed=None from TrainingArguments when running `accelerate launch xxx.py`, which means the deepspeed params are invalid. ### Expected behavior Expect the `Trainer` class to read the deepspeed param from the yaml file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23168/comments
https://api.github.com/repos/huggingface/transformers/issues/23168/events
https://github.com/huggingface/transformers/pull/23168
1,697,515,311
PR_kwDOCUB6oc5P26NX
23,168
shift torch dynamo handling to accelerate
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,685
1,685
CONTRIBUTOR
null
### What does this PR do? 1. Shifts the torch dynamo handling to accelerate 2. Should be merged after #23158 3. No user-facing change. Now, users can use `accelerate launch` for torch dynamo, e.g., ``` accelerate launch --dynamo_backend=inductor ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir --pad_to_max_length --dataloader_drop_last ``` Current usage like below is unimpacted: ``` python ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir --torch_compile --pad_to_max_length --dataloader_drop_last ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23168/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23168", "html_url": "https://github.com/huggingface/transformers/pull/23168", "diff_url": "https://github.com/huggingface/transformers/pull/23168.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23168.patch", "merged_at": 1685524328000 }
https://api.github.com/repos/huggingface/transformers/issues/23167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23167/comments
https://api.github.com/repos/huggingface/transformers/issues/23167/events
https://github.com/huggingface/transformers/pull/23167
1,697,489,120
PR_kwDOCUB6oc5P20dz
23,167
fixed whisper positional encoding
{ "login": "anvilarth", "id": 43551010, "node_id": "MDQ6VXNlcjQzNTUxMDEw", "avatar_url": "https://avatars.githubusercontent.com/u/43551010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anvilarth", "html_url": "https://github.com/anvilarth", "followers_url": "https://api.github.com/users/anvilarth/followers", "following_url": "https://api.github.com/users/anvilarth/following{/other_user}", "gists_url": "https://api.github.com/users/anvilarth/gists{/gist_id}", "starred_url": "https://api.github.com/users/anvilarth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anvilarth/subscriptions", "organizations_url": "https://api.github.com/users/anvilarth/orgs", "repos_url": "https://api.github.com/users/anvilarth/repos", "events_url": "https://api.github.com/users/anvilarth/events{/privacy}", "received_events_url": "https://api.github.com/users/anvilarth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@clefourrier @gante", "Looks like, there are no problems with tests.\r\n![image](https://user-images.githubusercontent.com/43551010/236498753-3a084a00-873a-4c91-b9af-caf5a12481e2.png)\r\n" ]
1,683
1,683
1,683
CONTRIBUTOR
null
Whisper positional encoding has incorrect behavior when passing inputs_embeds: - When we pass `input_ids` (batch_size x seq_len), it takes dimension -1, which is correct. - When we pass `inputs_embeds` (batch_size x seq_len x embedding_dim), it doesn't work: taking dimension -1 gives the embedding dimension instead of the sequence length. My fix is to take dimension 1, which is always correct.
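A minimal sketch of the shape issue being described (illustrative only, not the actual modeling code):

```python
import torch

batch_size, seq_len, embed_dim = 2, 7, 384

input_ids = torch.zeros(batch_size, seq_len, dtype=torch.long)
inputs_embeds = torch.zeros(batch_size, seq_len, embed_dim)

# Taking the last dimension only works for 2-D input_ids:
print(input_ids.shape[-1])      # 7   -> sequence length, correct
print(inputs_embeds.shape[-1])  # 384 -> embedding dim, wrong for positions

# Taking dimension 1 gives the sequence length in both cases:
print(input_ids.shape[1])       # 7
print(inputs_embeds.shape[1])   # 7
```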
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23167", "html_url": "https://github.com/huggingface/transformers/pull/23167", "diff_url": "https://github.com/huggingface/transformers/pull/23167.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23167.patch", "merged_at": 1683300975000 }
https://api.github.com/repos/huggingface/transformers/issues/23166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23166/comments
https://api.github.com/repos/huggingface/transformers/issues/23166/events
https://github.com/huggingface/transformers/pull/23166
1,697,465,826
PR_kwDOCUB6oc5P2vXl
23,166
🌐 [i18n-KO] Translated `troubleshooting.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "May you please review this PR? 😃 \n@sgugger, @ArthurZucker, @eunseojo", "@0525hhgus you need to put it out of draft mode if it's ready for review.", "> @0525hhgus you need to put it out of draft mode if it's ready for review.\r\n\r\nI changed to ready for review status! Thank you for your review." ]
1,683
1,685
1,685
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `troubleshooting.mdx` file of the documentation to Korean. Thank you in advance for your review 😄 Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23166/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23166", "html_url": "https://github.com/huggingface/transformers/pull/23166", "diff_url": "https://github.com/huggingface/transformers/pull/23166.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23166.patch", "merged_at": 1685454588000 }
https://api.github.com/repos/huggingface/transformers/issues/23165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23165/comments
https://api.github.com/repos/huggingface/transformers/issues/23165/events
https://github.com/huggingface/transformers/issues/23165
1,697,425,126
I_kwDOCUB6oc5lLKbm
23,165
I got a Trainer error: Attempting to unscale FP16 gradients
{ "login": "han508", "id": 69674181, "node_id": "MDQ6VXNlcjY5Njc0MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/69674181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/han508", "html_url": "https://github.com/han508", "followers_url": "https://api.github.com/users/han508/followers", "following_url": "https://api.github.com/users/han508/following{/other_user}", "gists_url": "https://api.github.com/users/han508/gists{/gist_id}", "starred_url": "https://api.github.com/users/han508/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/han508/subscriptions", "organizations_url": "https://api.github.com/users/han508/orgs", "repos_url": "https://api.github.com/users/han508/repos", "events_url": "https://api.github.com/users/han508/events{/privacy}", "received_events_url": "https://api.github.com/users/han508/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can't train a model loaded in FP16:\r\n```\r\nmodel = LlamaForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16)\r\n```\r\nis the culprit here. I don't know how PEFT initializes the layer to train afterwards, but some of them must be in the same dtype cc @younesbelkada ", "I second what @sgugger said, \r\nhowever I see that you're importing peft but doing nothing with it, also make sure to use the latest `peft` release as it contains some bug fixes.\r\n```bash\r\npip install --upgrade peft\r\n```\r\n\r\nIn my opinion, to use PEFT at its best, you should load your model in 8bit as follows:\r\n\r\n```python\r\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_int8_training\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM\r\n\r\npath_to_llama = xxx\r\nmodel = LlamaForCausalLM.from_pretrained(\r\n path_to_llama,\r\n device_map=\"auto\",\r\n load_in_8bit=True\r\n)\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(path_to_llama)\r\n\r\nconfig = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n target_modules=[\"q_proj\", \"v_proj\"],\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\n\r\nmodel = prepare_model_for_int8_training(model)\r\nmodel = get_peft_model(model, config)\r\n\r\n... # get your dataset etc here\r\ntrainer = Trainer(\r\n model=model,\r\n ...\r\n)\r\n```\r\nAlso make sure to use `transformers` latest release as well:\r\n```bash\r\npip install --upgrade transformers\r\n```", "For reference, I would have a look at how the PEFT slow tests are designed, check here: https://github.com/huggingface/peft/blob/b1059b73aab9043b118ff19b0cf96263ea86248a/tests/test_gpu_examples.py#L114 ", "Thank you for your reply, when I update the latest PEFT and transformers, All problems are resolved.\r\n", "> You can't train a model loaded in FP16:\r\n> \r\n> ```\r\n> model = LlamaForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16)\r\n> ```\r\n> \r\n> is the culprit here. I don't know how PEFT initializes the layer to train afterwards, but some of them must be in the same dtype cc @younesbelkada\r\n\r\nThanks for the answer, it saved me some time to test if it is possible to fine tune a model loaded in FP16.\r\nBut what about models loaded in 8bit? Can I just fine tune the model with an 8-bit optimiser without using any PEFT techniques such as LoRA?\r\nIf I can't tune a model loaded in 8bit, I wonder why we are allowed to use LoRA to fine tune the model?", "> I second what @sgugger said, however I see that you're importing peft but doing nothing with it, also make sure to use the latest `peft` release as it contains some bug fixes.\r\n> \r\n> ```shell\r\n> pip install --upgrade peft\r\n> ```\r\n> \r\n> In my opinion, to use PEFT at its best, you should load your model in 8bit as follows:\r\n> \r\n> ```python\r\n> from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training\r\n> from transformers import LlamaTokenizer, LlamaForCausalLM\r\n> \r\n> path_to_llama = xxx\r\n> model = LlamaForCausalLM.from_pretrained(\r\n> path_to_llama,\r\n> device_map=\"auto\",\r\n> load_in_8bit=True\r\n> )\r\n> \r\n> tokenizer = LlamaTokenizer.from_pretrained(path_to_llama)\r\n> \r\n> config = LoraConfig(\r\n> r=16,\r\n> lora_alpha=32,\r\n> target_modules=[\"q_proj\", \"v_proj\"],\r\n> lora_dropout=0.05,\r\n> bias=\"none\",\r\n> task_type=\"CAUSAL_LM\",\r\n> )\r\n> \r\n> model = prepare_model_for_int8_training(model)\r\n> model = get_peft_model(model, config)\r\n> \r\n> ... 
# get your dataset etc here\r\n> trainer = Trainer(\r\n> model=model,\r\n> ...\r\n> )\r\n> ```\r\n> \r\n> Also make sure to use `transformers` latest release as well:\r\n> \r\n> ```shell\r\n> pip install --upgrade transformers\r\n> ```\r\nHi Younes, thank you for your work on PEFT.\r\nRecently I read some papers measuring the performance difference between full fine-tuning and lora-based fine-tuning. There's actually a huge difference between the tuned models in terms of their benchmarks/metrics.\r\nHere're the links to the publications: \r\n\r\nhttps://arxiv.org/abs/2304.14454\r\n![image](https://user-images.githubusercontent.com/50556116/236809378-88376565-f41b-4a29-bb84-bd380dc44dfd.png)\r\n\r\nhttps://arxiv.org/abs/2304.08109\r\n![image](https://user-images.githubusercontent.com/50556116/236809531-29315632-33bd-43cc-8c01-c95cd5382d83.png)\r\n\r\n\r\nI am very grateful that we have these open-source fine-tuning techniques. But I am curious about your opinions on the performance trade-off between lora and full-tuing?\r\n\r\nThanks for your concerns.", "Hi @IwanVan \r\nThanks for your reply and your interests, I will answer to your questions to the best of my knowlegde\r\n\r\n1- Sadly it is not possible to do pure int8 training, (i.e. pass the full 8bit model to the optimizer state) as I believe this will result in a very unstable training as your weight matrix can be only represented in 8bit precision (256 possible values), so the model won't probably learn anything. Although it's not possible to train in pure fp16 (from my understanding), you can train your model in a precision called `bfloat16` (simply pass `torch_dtype=torch.bfloat16`), that has the same training dynamics as `float32`, and that is commonly used to train large scale models.\r\nWe have made a detailed blogpost about that [here](https://huggingface.co/blog/hf-bitsandbytes-integration) that I invite your to have a look.\r\n\r\n2- This seems to be a new paper so it's the first time I go through, and from my understanding it tries to fine-tune Llama to the medical paper domain. I agree the differences here sound quite large. Thinking about it loud, maybe the domain gap was too high for that model but I am not sure. Empirically it has been showed (from the original paper and from what I have seen so far) that you can get very comparable results (sometimes better results) than full finetuning when going for PEFT methods (and on all modalities, vision, text, RLHF, etc.), so I would say it would really depend on your usecase, dataset, etc..\r\nNote that with PEFT you can fit into your devices the model + optimizer states of very large models! In [this blogpost](https://huggingface.co/blog/trl-peft) we show how to fit a 20B model into a 24GB GPU and train that model. This is totally not possible when going for full-finetuning. I would say this is the main (and big) advantage of PEFT methods. cc also @pacman100 that would probably have more insights here!\r\n\r\nThanks!", "> If I can't tune a model loaded in 8bit, I wonder why we are allowed to use LoRA to fine tune the model?\r\n\r\nBecause in the case of tuning the LoRA layers, the base model will stay untouched, in 8bit, but the LoRA layers that we're going to train will be kept in full precision (float32)", "> Hi @IwanVan Thanks for your reply and your interests, I will answer to your questions to the best of my knowlegde\r\n> \r\n> 1- Sadly it is not possible to do pure int8 training, (i.e. 
pass the full 8bit model to the optimizer state) as I believe this will result in a very unstable training as your weight matrix can be only represented in 8bit precision (256 possible values), so the model won't probably learn anything. Although it's not possible to train in pure fp16 (from my understanding), you can train your model in a precision called `bfloat16` (simply pass `torch_dtype=torch.bfloat16`), that has the same training dynamics as `float32`, and that is commonly used to train large scale models. We have made a detailed blogpost about that [here](https://huggingface.co/blog/hf-bitsandbytes-integration) that I invite your to have a look.\r\n> \r\n> 2- This seems to be a new paper so it's the first time I go through, and from my understanding it tries to fine-tune Llama to the medical paper domain. I agree the differences here sound quite large. Thinking about it loud, maybe the domain gap was too high for that model but I am not sure. Empirically it has been showed (from the original paper and from what I have seen so far) that you can get very comparable results (sometimes better results) than full finetuning when going for PEFT methods (and on all modalities, vision, text, RLHF, etc.), so I would say it would really depend on your usecase, dataset, etc.. Note that with PEFT you can fit into your devices the model + optimizer states of very large models! In [this blogpost](https://huggingface.co/blog/trl-peft) we show how to fit a 20B model into a 24GB GPU and train that model. This is totally not possible when going for full-finetuning. I would say this is the main (and big) advantage of PEFT methods. cc also @pacman100 that would probably have more insights here!\r\n> \r\n> Thanks!\r\n\r\nHi @younesbelkada , thanks again for your quick response.\r\n\r\n1. I actually have implemented a lot of your example codes from the [Peft lib](https://github.com/huggingface/peft/tree/main/examples) already. \r\nAlso the `load_in_8bit` support backed by bnb is really impressive, and I've used it for zero-/ few-shot inference with LLM on a single 4090.\r\nFor training, I have implemented almost every factors that were mention in [Efficient Training on a Single GPU](https://huggingface.co/docs/transformers/perf_train_gpu_one) by using the HF trainer. However, the largest model that I can tune in full precision is flan-t5-3B with very efficient setup and a new GPU-friendly optimizer called [Lion](https://github.com/lucidrains/lion-pytorch), but in [8bit version](https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/optim/lion.py#L36).\r\n\r\n2. Personally I am very excited about efficient fine-tuning techniques such as Lora, and I have carefully examined the code for AdaLoRA and a newer technique called [Ladder Side-Tuning (LST)](https://github.com/ylsung/Ladder-Side-Tuning), and I have [asked the authors](https://github.com/ylsung/Ladder-Side-Tuning/issues/6) if they intend to integrate this technique into the peft library.\r\nHowever, the reason I have been on the fence for the last two weeks with regard to peft techniques such as lora is that there is a growing number of papers appearing which fine-tune models using peft techniques based on some very new auto-regressive models. An increasing number of studies show that lora seems to have significant robustness problems for training of domain-specific ([medical](https://arxiv.org/abs/2304.14454)) and other language ([Chinese](https://arxiv.org/abs/2304.08109)) instructions. 
In these papers, lora lags behind full fine-tuning almost across the board in all metrics. Certainly I agree with your analysis of the causes above, and I am not in a hurry to draw conclusions about the results from these papers, as new technologies need to be viewed rationally.\r\n\r\nBut I wonder if I could open a new issue in the peft repository to follow up on the current new research on peft/lora and see if I could find a reasonable explanation for the difference in performance across different fine-tuning techniques by documenting and analysing similar papers over time and get more developers involved in the discussion?\r\n\r\nRegards,\r\nWang", "@younesbelkada Hello, I load 7B llama for peft Lora finetune on a single v100 but got OOM, is that normal?\r\n\r\nam using default float(32).\r\n\r\nDoes it have to be load in in8 for lora finetuning?", "@younesbelkada after load in in8, I got error like this:\r\n\r\n```\r\nRuntimeError: expected scalar type Half but found Float\r\n```\r\n\r\nI swear i have no where set float16 in my code..... ", "hi @lucasjinreal \r\nDo you used `prepare_model_for_int8_training` on your script? You need to call that method before calling `get_peft_model`", "@younesbelkada I noticed that LION merge into master, when will it update to pip btw?\r\n\r\n> Do you used prepare_model_for_int8_training on your script?\r\n\r\nYes, I have used. after I set `fp16=False` it now works.\r\n\r\nBut, do u know why 32GB unable to train with float32? Am have to using deepspeed to offload now, and int8 training seems slower than offload", "hi @lucasjinreal \r\nIt should be already in pip there should be an announcement soon about that :) \r\n\r\n> Yes, I have used. after I set fp16=False it now works.\r\n\r\nAwesome!\r\n\r\n> But, do u know why 32GB unable to train with float32? Am have to using deepspeed to offload now, and int8 training seems slower than offload\r\n\r\nYes int8 can be slower in some cases, you might be interested in using FP4 quantization that should be much faster, it will be part of the announcement today as well. I will keep you posted\r\n\r\nRelevant links: https://github.com/artidoro/qlora & https://github.com/huggingface/transformers/pull/23479\r\n", "@younesbelkada Looking forward to it, do u mean fp4 training? Looks like only decent GPU like H100 support it. Will transformers new release incuding this as well?", "> You can't train a model loaded in FP16:\r\n> \r\n> ```\r\n> model = LlamaForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16)\r\n> ```\r\n> \r\n> is the culprit here. I don't know how PEFT initializes the layer to train afterwards, but some of them must be in the same dtype cc @younesbelkada\r\n\r\nCould you explain what you mean by cannot train a fp16 model? Is it because you would need a fp32 copy of weights for fp16 mixed precision training? " ]
1,683
1,696
1,683
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - device : Tesla T4*4 - CUDA-11.6 ### Who can help? @sgugger Now, when I add fp16=True, i get the error: ValueError: Attempting to unscale FP16 gradients. when running trainer.train() ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from transformers import LlamaTokenizer, LlamaForCausalLM,AutoTokenizer,AutoModelForSeq2SeqLM, LlamaConfig from peft import prepare_model_for_int8_training, LoraConfig, get_peft_model, get_peft_model_state_dict merge_tokenizer = LlamaTokenizer.from_pretrained('/home/han/new_store/Llama/merged_tokenizer_hf',padding=True, truncation=True) print(len(merge_tokenizer)) n = merge_tokenizer.add_special_tokens({'pad_token': '[PAD]'}) len(merge_tokenizer) from datasets import load_dataset dataset = load_dataset("json", data_files="./data/alpaca_data_zh_51k.json") dataset = dataset.filter(lambda x: x["output"]!=None) dataset = dataset.filter(lambda x: x["input"] !=None) def preprocess_function(sample): l = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.</s>Human:" for i in range(len(sample['instruction'])): if sample['input'][i]!='': sample['instruction'][i]=l+sample['instruction'][i]+'[PAD]'+sample['input'][i] # print(sample['input'][i]) output = ['Assistant:'+i for i in sample['output']] model_inputs = merge_tokenizer(sample['instruction'], truncation=True,padding=True,max_length=200) labels = merge_tokenizer(output, truncation=True, padding=True,max_length=200) model_inputs["labels"] = labels["input_ids"] # print(model_inputs) return model_inputs input_data = dataset['train'].map(preprocess_function,batched=True,remove_columns=['instruction','input','output']) import torch model = LlamaForCausalLM.from_pretrained('decapoda-research/llama-7b-hf',device_map='auto',cache_dir='./cache/',torch_dtype=torch.float16) model.resize_token_embeddings(len(merge_tokenizer)) from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling trainArgs = TrainingArguments( output_dir= '../ckps_emb', do_train=True, # per_device_train_batch_size=4, auto_find_batch_size=True, fp16=True, gradient_accumulation_steps=4, evaluation_strategy="steps", save_strategy="steps", save_steps=1000, eval_steps=1000, logging_steps=20, warmup_steps=100, num_train_epochs=2, learning_rate=5e-4, load_best_model_at_end=True, report_to="wandb" ) for name, param in model.named_parameters(): param.requires_grad_(False) if name =='model.embed_tokens.weight': param.requires_grad_(True) print(name, "requires_grad:", param.requires_grad) trainer = Trainer( model=model, args=trainArgs, train_dataset=input_data, eval_dataset=input_data, data_collator=DataCollatorForLanguageModeling(merge_tokenizer, mlm=False), ) model.config.use_cache = True trainer.train() model.save_pretrained('../ckps/demo_llama71_full') ### 
Expected behavior I expect it not to give the error ValueError: Attempting to unscale FP16 gradients.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23165/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23164/comments
https://api.github.com/repos/huggingface/transformers/issues/23164/events
https://github.com/huggingface/transformers/pull/23164
1,697,412,255
PR_kwDOCUB6oc5P2jg2
23,164
🌐 [i18n-KO] Translated object_detection.mdx to Korean
{ "login": "kihoon71", "id": 75935546, "node_id": "MDQ6VXNlcjc1OTM1NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kihoon71", "html_url": "https://github.com/kihoon71", "followers_url": "https://api.github.com/users/kihoon71/followers", "following_url": "https://api.github.com/users/kihoon71/following{/other_user}", "gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}", "starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions", "organizations_url": "https://api.github.com/users/kihoon71/orgs", "repos_url": "https://api.github.com/users/kihoon71/repos", "events_url": "https://api.github.com/users/kihoon71/events{/privacy}", "received_events_url": "https://api.github.com/users/kihoon71/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? Translated the object_detection.mdx file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. [[lowercased-header]]) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23164/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23164", "html_url": "https://github.com/huggingface/transformers/pull/23164", "diff_url": "https://github.com/huggingface/transformers/pull/23164.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23164.patch", "merged_at": 1685706235000 }
https://api.github.com/repos/huggingface/transformers/issues/23163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23163/comments
https://api.github.com/repos/huggingface/transformers/issues/23163/events
https://github.com/huggingface/transformers/pull/23163
1,697,407,035
PR_kwDOCUB6oc5P2iX-
23,163
Better check for package availability
{ "login": "apbard", "id": 12557177, "node_id": "MDQ6VXNlcjEyNTU3MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12557177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apbard", "html_url": "https://github.com/apbard", "followers_url": "https://api.github.com/users/apbard/followers", "following_url": "https://api.github.com/users/apbard/following{/other_user}", "gists_url": "https://api.github.com/users/apbard/gists{/gist_id}", "starred_url": "https://api.github.com/users/apbard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apbard/subscriptions", "organizations_url": "https://api.github.com/users/apbard/orgs", "repos_url": "https://api.github.com/users/apbard/repos", "events_url": "https://api.github.com/users/apbard/events{/privacy}", "received_events_url": "https://api.github.com/users/apbard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks a lot for the refactor, this is super nice!\r\n> \r\n> There are a couple of packages that do not properly implement metadata (I know for sure `opencv` does not since we are adding it in another PR), could you quickly check that all the packages for which you did this PR do implement the metadata? if they don't we need to rely on the old way, which is fine as it should be an exceptional case.\r\n\r\nI have checked the packages and found issues only on sklearn (the import is sklearn and package scikit-learn) and decord.\r\n\r\nCould you please double check: smdistributed, tensorflow_text, torchdistx?", "I have validated that `tensorflow_text` works. Installing `torchdistx` seems too painful for the time I have right now, but looking at the GitHub, everything should be fine.\r\n\r\nChecking internally for `smdistributed` since it only exists in SageMaker environments.", "Not hearing anything back on `smdistributed`, so let's just merge this PR and see if anyone complains :sweat_smile: \r\n\r\nCan you just fix the conflict?", "don't think failures are due to this PR. Also main is failing" ]
1,683
1,683
1,683
CONTRIBUTOR
null
Following up huggingface/accelerate#1356: refactor the availability checks to avoid boilerplate and to ensure we are not picking up a folder that happens to share the package's name. I assume all the _*_available flags are there for caching purposes, but is that really needed?
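A minimal sketch of the metadata-based check this PR describes (the helper name and fallback behaviour are assumptions, not the exact code in the PR): requiring installed package metadata in addition to an import spec prevents a stray local folder with the package's name from being mistaken for the package.

```python
import importlib.metadata
import importlib.util


def is_package_available(pkg_name: str) -> bool:
    # A directory named like the package can satisfy find_spec alone,
    # so also require that installed distribution metadata exists.
    if importlib.util.find_spec(pkg_name) is None:
        return False
    try:
        importlib.metadata.version(pkg_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False
```

As the review comments note, packages whose distribution name differs from their import name (e.g. scikit-learn vs. sklearn) or that do not ship proper metadata would still need special-casing on top of a check like this.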
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23163/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23163", "html_url": "https://github.com/huggingface/transformers/pull/23163", "diff_url": "https://github.com/huggingface/transformers/pull/23163.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23163.patch", "merged_at": 1683827543000 }
https://api.github.com/repos/huggingface/transformers/issues/23162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23162/comments
https://api.github.com/repos/huggingface/transformers/issues/23162/events
https://github.com/huggingface/transformers/pull/23162
1,697,299,031
PR_kwDOCUB6oc5P2K1f
23,162
Unpin numba
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failing test is unrelated ([tf compile test](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/457033993/0fb5a370-4bdb-43e1-a1a2-c242ca17c8d3/0/~/transformers/reports/tests_tf/failures_short.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEONDE666BB%2F20230522%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230522T165404Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJn%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJIMEYCIQCSbOdyE07pYZr8Irw7BBJgEnLQCNFnrj3zK2%2FcTGV9jwIhAM3cUGClvwCTfkqdiYJKR5orc0gzQPRB5KHXpXqX00XzKrQCCML%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQAxoMMDQ1NDY2ODA2NTU2Igw2mZInyce2C0JoQbsqiALBk31dgOmIJwAV%2FwHQoioyIo8GaTDckrk0%2BW1lfxXFVCAB8YxcUGOvSBFIIijvkMpBa6jlob4IQ9dAZf%2FvMKH1aXfOEXU284URRx6VEYoapQELm8CUc35O3YEeF%2BOIyQojAI9e0oBTYlhqzn%2Fg2bzV3mvyzvKtHWN17wvOWHBAKoba%2Bt2FdxkJBKq%2BnmQHHyQYq7JxDJ1L0Jd6xKa6uT85hjmkPNFiZr8iAvicJtcSM4UVdY5o9yI2Wty8lc8IDykk9%2BS9HeJ2mONGAowSBJz33LKjZDnRe4oOWkKt%2F3kaRm%2BTVzchn4Hy6poGKG6wr%2FqAHv0Kyd%2FJ09FviDzMvppV4tFzoOXkgicwm7auowY6nAHbz06ploAc04Toucr8X%2BlicPoUNiKWQBpolxtbSGpfvxKsTlSge8HaMvGqx6EZSjDG0JlC3MCgKfLfg5WhwkX0MMBQnv4UhP65R%2B7aEUNv%2B9yZOC6NCvvu8Bv9Hj0ml4fRCcggNYtqwNQKZRmshd59IK0ZqBCdWIfbQ5x8uvmJubBnBR7kmfmexwxiUOZ%2BbVMZBqDpSuzXGhXlH%2B0%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=5c8e53587d9a3023543892b06988a69ef02ebd3d2ccff9bfdac9b8dbf7587786))" ]
1,683
1,687
1,685
CONTRIBUTOR
null
# What does this PR do? Numba was pinned to <0.57.0 in #23118 - this is because it forced an update of the numpy package to >= 1.24. From numpy >= 1.24, converting a ragged list to a numpy array requires the user to **explicitly** set `dtype=object` (before this happened automatically, but threw a deprecation warning). This PR updates the feature extraction and tokenisation utils to explicitly specify `dtype=object` when converting ragged lists to numpy arrays.
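A small illustration of the NumPy behaviour change this PR works around (the array contents are made up): from NumPy >= 1.24, building an array from a ragged list fails unless the object dtype is requested explicitly.

```python
import numpy as np

ragged_batch = [[1, 2, 3], [4, 5]]  # e.g. token id sequences of different lengths

# NumPy < 1.24 emitted a deprecation warning and silently produced an object array;
# NumPy >= 1.24 raises an error unless dtype=object is passed explicitly.
batch_array = np.asarray(ragged_batch, dtype=object)
```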
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23162", "html_url": "https://github.com/huggingface/transformers/pull/23162", "diff_url": "https://github.com/huggingface/transformers/pull/23162.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23162.patch", "merged_at": 1685541571000 }
https://api.github.com/repos/huggingface/transformers/issues/23161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23161/comments
https://api.github.com/repos/huggingface/transformers/issues/23161/events
https://github.com/huggingface/transformers/issues/23161
1,697,261,597
I_kwDOCUB6oc5lKigd
23,161
mac m2 max data collator issue
{ "login": "sd3ntato", "id": 11448010, "node_id": "MDQ6VXNlcjExNDQ4MDEw", "avatar_url": "https://avatars.githubusercontent.com/u/11448010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sd3ntato", "html_url": "https://github.com/sd3ntato", "followers_url": "https://api.github.com/users/sd3ntato/followers", "following_url": "https://api.github.com/users/sd3ntato/following{/other_user}", "gists_url": "https://api.github.com/users/sd3ntato/gists{/gist_id}", "starred_url": "https://api.github.com/users/sd3ntato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sd3ntato/subscriptions", "organizations_url": "https://api.github.com/users/sd3ntato/orgs", "repos_url": "https://api.github.com/users/sd3ntato/repos", "events_url": "https://api.github.com/users/sd3ntato/events{/privacy}", "received_events_url": "https://api.github.com/users/sd3ntato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No one can help without a clear reproducer of the issue.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction data_collator = DataCollatorForSeq2Seq( tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8, return_tensors='pt' ) ### Expected behavior I'd expect normal behaviour but get TypeError: can't convert mps:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
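A guess at what a minimal, self-contained reproducer for this report might look like (the checkpoint, texts, and label pad id are assumptions; the original script was not shared, so this is only a sketch of the described setup):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

# Assumed checkpoint and inputs, purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
if torch.backends.mps.is_available():
    model = model.to("mps")

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=-100,
    pad_to_multiple_of=8,
    return_tensors="pt",
)

features = [tokenizer("a short sentence", text_target="une phrase courte") for _ in range(2)]
batch = data_collator(features)  # the report says a collation step like this fails on mps
```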
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23161/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23161/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23160/comments
https://api.github.com/repos/huggingface/transformers/issues/23160/events
https://github.com/huggingface/transformers/issues/23160
1,697,261,459
I_kwDOCUB6oc5lKieT
23,160
implement unlimiformer into transformers
{ "login": "chris-aeviator", "id": 11522213, "node_id": "MDQ6VXNlcjExNTIyMjEz", "avatar_url": "https://avatars.githubusercontent.com/u/11522213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chris-aeviator", "html_url": "https://github.com/chris-aeviator", "followers_url": "https://api.github.com/users/chris-aeviator/followers", "following_url": "https://api.github.com/users/chris-aeviator/following{/other_user}", "gists_url": "https://api.github.com/users/chris-aeviator/gists{/gist_id}", "starred_url": "https://api.github.com/users/chris-aeviator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chris-aeviator/subscriptions", "organizations_url": "https://api.github.com/users/chris-aeviator/orgs", "repos_url": "https://api.github.com/users/chris-aeviator/repos", "events_url": "https://api.github.com/users/chris-aeviator/events{/privacy}", "received_events_url": "https://api.github.com/users/chris-aeviator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Bump\n\n> Am 04.06.2023 um 17:01 schrieb github-actions[bot] ***@***.***>:\n> \n> \n> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n> \n> Please note that issues that do not follow the contributing guidelines are likely to be ignored.\n> \n> —\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n> You are receiving this because you authored the thread.\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,688
1,688
NONE
null
### Feature request https://github.com/abertsch72/unlimiformer promises to support unlimited input length on any transformer based encoder/decoder model with sub-linear cost in time. ### Motivation Context lengths are fairly limited ### Your contribution Testing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23160/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23159/comments
https://api.github.com/repos/huggingface/transformers/issues/23159/events
https://github.com/huggingface/transformers/pull/23159
1,697,188,221
PR_kwDOCUB6oc5P1y72
23,159
search buffers for dtype
{ "login": "cyyever", "id": 17618148, "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyyever", "html_url": "https://github.com/cyyever", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "organizations_url": "https://api.github.com/users/cyyever/orgs", "repos_url": "https://api.github.com/users/cyyever/repos", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "received_events_url": "https://api.github.com/users/cyyever/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,686
1,683
CONTRIBUTOR
null
# What does this PR do? This PR extends the logic of get_parameter_dtype to search buffers after parameters are searched. If a model is frozen such that all its parameters are turned into buffers, the current logic may not be able to find a dtype even if it tries to search module.\_\_dict\_\_. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
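A minimal sketch of the fallback order described above, using a simplified helper rather than the actual `get_parameter_dtype`:

```python
from typing import Optional

import torch


def first_floating_dtype(module: torch.nn.Module) -> Optional[torch.dtype]:
    # Look at parameters first, then buffers, so fully frozen models
    # (whose parameters were converted to buffers) still report a dtype.
    for tensor in module.parameters():
        if tensor.is_floating_point():
            return tensor.dtype
    for tensor in module.buffers():
        if tensor.is_floating_point():
            return tensor.dtype
    return None
```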
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23159", "html_url": "https://github.com/huggingface/transformers/pull/23159", "diff_url": "https://github.com/huggingface/transformers/pull/23159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23159.patch", "merged_at": 1683387668000 }
https://api.github.com/repos/huggingface/transformers/issues/23158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23158/comments
https://api.github.com/repos/huggingface/transformers/issues/23158/events
https://github.com/huggingface/transformers/pull/23158
1,697,160,899
PR_kwDOCUB6oc5P1tBP
23,158
move fsdp handling to accelerate
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello Sylvain, we can't do that as FSDP XLA integration uses it and that isn't supported yet in accelerate" ]
1,683
1,685
1,685
CONTRIBUTOR
null
### What does this PR do? 1. Moves PyTorch FSDP handling to Accelerate 2. Should be merged after #23151 3. No user-facing change. Now, users can use `accelerate launch` for fsdp in Trainer, e.g.: ``` accelerate launch --num_processes=2 --use_fsdp --mixed_precision=bf16 --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP --fsdp_transformer_layer_cls_to_wrap="BertLayer" --fsdp_sharding_strategy=1 --fsdp_state_dict_type=FULL_STATE_DICT ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir ``` Continue to use torchrun with trainer args as usual. ``` torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap BertLayer --bf16 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23158", "html_url": "https://github.com/huggingface/transformers/pull/23158", "diff_url": "https://github.com/huggingface/transformers/pull/23158.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23158.patch", "merged_at": 1685522447000 }
https://api.github.com/repos/huggingface/transformers/issues/23207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23207/comments
https://api.github.com/repos/huggingface/transformers/issues/23207/events
https://github.com/huggingface/transformers/issues/23207
1,700,239,569
I_kwDOCUB6oc5lV5jR
23,207
Better prompt error messages
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lucasjinreal , I'm transferring this issue to `transformers` as this is not really related to `huggingface_hub` itself (hfh is the underlying library making calls to the HF Hub but is not responsible if a path is provided as repo_id when downloading a file).\r\n\r\ncc @sgugger I'm not an expert on how files are loaded in `transformers` but I think a \"catch `HfValidationError`\" statement in [this try/except](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/hub.py#L423) (`from huggingface_hub.utils import HFValidationError`) would allow a better error message.", "Mmm that's actually a bit tricky since this error can come from multiple causes.", "Yeah, but I think they might can be detect in priority order, some obviously pattern can handled more properly", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
I got an error when calling AutoTokenizer.from_pretrained: ``` huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './dist/models/vicuna-v1-7b'. Use `repo_type` argument if needed. ``` This was actually caused by the local path not existing, but the error message is confusing. Could a hint be added when the path does not exist, so users know to fix the path rather than being told to use a valid repo id?
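A hedged sketch of the kind of handling being asked for (the wrapper function and the path heuristic are made up for illustration and are not how `transformers` resolves model paths): catch the `HFValidationError` mentioned in the discussion and, when the string looks like a filesystem path, report the missing path instead of the repo-id rule.

```python
import os

from huggingface_hub import hf_hub_download
from huggingface_hub.utils import HFValidationError


def download_or_explain(path_or_repo_id: str, filename: str) -> str:
    # Hypothetical wrapper: translate the repo-id validation error into a
    # path-oriented message when the string looks like a missing local path.
    try:
        return hf_hub_download(path_or_repo_id, filename)
    except HFValidationError as err:
        if os.sep in path_or_repo_id or path_or_repo_id.startswith("."):
            raise OSError(
                f"{path_or_repo_id!r} looks like a local path but does not exist. "
                "Fix the path, or pass a Hub repo id such as 'namespace/repo_name'."
            ) from err
        raise
```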
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23207/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23157/comments
https://api.github.com/repos/huggingface/transformers/issues/23157/events
https://github.com/huggingface/transformers/pull/23157
1,696,848,241
PR_kwDOCUB6oc5P0pZ5
23,157
Fixing class embedding selection in owl-vit
{ "login": "orrzohar", "id": 108689663, "node_id": "U_kgDOBnp4_w", "avatar_url": "https://avatars.githubusercontent.com/u/108689663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orrzohar", "html_url": "https://github.com/orrzohar", "followers_url": "https://api.github.com/users/orrzohar/followers", "following_url": "https://api.github.com/users/orrzohar/following{/other_user}", "gists_url": "https://api.github.com/users/orrzohar/gists{/gist_id}", "starred_url": "https://api.github.com/users/orrzohar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orrzohar/subscriptions", "organizations_url": "https://api.github.com/users/orrzohar/orgs", "repos_url": "https://api.github.com/users/orrzohar/repos", "events_url": "https://api.github.com/users/orrzohar/events{/privacy}", "received_events_url": "https://api.github.com/users/orrzohar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@orrzohar thank you for opening the PR! I'll take double check the original code and the forward pass shortly.", "After this fix I have problem with predictions.\r\n\r\nI Use [Colab demo](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) for owl-vit\r\nAnd on example photo with cats I have only one prediction. Looks suspicious and not the same as original repo\r\n\r\n<img width=\"945\" alt=\"Screenshot 2023-05-09 at 16 13 13\" src=\"https://github.com/huggingface/transformers/assets/55710648/02503c77-acb1-4d97-95a2-3d2f0ff6599f\">\r\n\r\n@alaradirik \r\n", "> Hi @MaslikovEgor, Hub demos are deployed once and not updated unless triggered. I'm rebooting the demo to reflect the changes.\r\n\r\nNice to meet you!\r\n\r\nYeah, I understand this. But in google colab demo we install fresh transformers from source:\r\n\r\n`!pip install git+https://github.com/huggingface/transformers.git`\r\n\r\nSo this is the problem with this changes\r\n\r\n@alaradirik ", "I found when evaluating COCO that [email protected] increases from 6 to 37. This is still below the expected 44+, but closer to the reported/expected performance.\r\n\r\nI am still trying to figure out why." ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # For OWL-ViT image-guided object detection, there is a mistake in selecting the best embedding (the most distinct one with high IoU). Specifically; selected_inds is a [num_inds, 1] dimensional tensor, where the indexes indicate which queries had a high IoU with target object bbox. But, as selected_inds[0] was selected, only the first of all the possible queries is selected. Specifically, instead of selected_embeddings being a [num_inds, D] dimensional tensor, it is a [1, D] dimensional tensor. This led ultimately to the first query always being selected, not the most unique one as required. An error is not raised. To see this is the case, just add a print statement of 'torch.argmin(mean_sim)' here: https://github.com/huggingface/transformers/blob/01734dba842c29408c96caa5c345c9e415c7569b/src/transformers/models/owlvit/modeling_owlvit.py#L1505 & you will see it is always 0. - `transformers` version: 4.28.1 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger @NielsRogge @alaradirik @amyeroberts
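An illustrative sketch of the selection logic described in this PR (tensor shapes and variable names are assumptions, not the exact code in `modeling_owlvit.py`): keeping every high-IoU query, instead of only `selected_inds[0]`, is what lets `argmin` actually pick the most distinct embedding.

```python
import torch

query_embeds = torch.randn(576, 512)               # all query embeddings for one image
selected_inds = torch.tensor([[12], [47], [301]])   # queries with high IoU against the target box

# Use all selected indices -> [num_inds, D], instead of selected_inds[0] -> [1, D].
selected_embeddings = query_embeds[selected_inds.squeeze(-1)]

# Pick the embedding least similar to the mean of the selected ones (the most distinct query).
mean_embed = selected_embeddings.mean(dim=0)
mean_sim = torch.einsum("d,id->i", mean_embed, selected_embeddings)
best_embedding = selected_embeddings[torch.argmin(mean_sim)]
```

With only one embedding in `selected_embeddings`, `argmin(mean_sim)` is trivially 0, which matches the print-statement check suggested above.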
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23157/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23157", "html_url": "https://github.com/huggingface/transformers/pull/23157", "diff_url": "https://github.com/huggingface/transformers/pull/23157.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23157.patch", "merged_at": 1683545704000 }
https://api.github.com/repos/huggingface/transformers/issues/23156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23156/comments
https://api.github.com/repos/huggingface/transformers/issues/23156/events
https://github.com/huggingface/transformers/pull/23156
1,696,491,195
PR_kwDOCUB6oc5Pzbbb
23,156
Add `no_trainer` scripts to pre-train Vision Transformers
{ "login": "awinml", "id": 97467100, "node_id": "U_kgDOBc863A", "avatar_url": "https://avatars.githubusercontent.com/u/97467100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awinml", "html_url": "https://github.com/awinml", "followers_url": "https://api.github.com/users/awinml/followers", "following_url": "https://api.github.com/users/awinml/following{/other_user}", "gists_url": "https://api.github.com/users/awinml/gists{/gist_id}", "starred_url": "https://api.github.com/users/awinml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awinml/subscriptions", "organizations_url": "https://api.github.com/users/awinml/orgs", "repos_url": "https://api.github.com/users/awinml/repos", "events_url": "https://api.github.com/users/awinml/events{/privacy}", "received_events_url": "https://api.github.com/users/awinml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge As per https://github.com/huggingface/transformers/pull/20412#issuecomment-1370792519, I have made a comparison notebook running both the trainer.py and no_trainer.py scripts on a small dataset. It can be viewed [here](https://colab.research.google.com/drive/1er4n0_AoQZU-OtXZPla3i-rJ38apA_zh?usp=sharing).\r\n\r\nBoth the scripts progress similarly.", "Thanks a lot @awinml! I'll assign core maintainers for a final review." ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Add scripts to pre-train Transformer-based Vision models without using the Trainer class. Fixes #20053 Fixes #20412 This PR completes the stalled PR #20412. ## Who can review? @amyeroberts @NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23156/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23156", "html_url": "https://github.com/huggingface/transformers/pull/23156", "diff_url": "https://github.com/huggingface/transformers/pull/23156.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23156.patch", "merged_at": 1683307369000 }
https://api.github.com/repos/huggingface/transformers/issues/23155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23155/comments
https://api.github.com/repos/huggingface/transformers/issues/23155/events
https://github.com/huggingface/transformers/pull/23155
1,696,485,227
PR_kwDOCUB6oc5PzaIb
23,155
TF port of Convnextv2
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "While converting pt weights to TensorFlow I am getting this error:\r\nhow to solve this?\r\n```\r\nAll PyTorch model weights were used when initializing TFConvNextV2ForImageClassification.\r\n\r\nAll the weights of TFConvNextV2ForImageClassification were initialized from the PyTorch model.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFConvNextV2ForImageClassification for predictions without further training.\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/transformers-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/commands/transformers_cli.py\", line 55, in main\r\n service.run()\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/commands/pt_to_tf.py\", line 344, in run\r\n raise ValueError(\r\nValueError: The cross-loaded TensorFlow model has different outputs, something went wrong!\r\n\r\nList of maximum output differences above the threshold (5e-05):\r\nlogits: 3.871e+00\r\n\r\nList of maximum hidden layer differences above the threshold (5e-05):\r\nhidden_states[1]: 3.463e-01\r\nhidden_states[2]: 1.682e+00\r\nhidden_states[3]: 2.259e+01\r\nhidden_states[4]: 6.839e-01\r\n```\r\n\r\nCode used:\r\n\r\n```\r\n!transformers-cli pt-to-tf --model-name facebook/convnextv2-nano-1k-224 --no-pr --local-dir /content/convnextv2-nano-1k-224\r\n```", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,684
1,684
CONTRIBUTOR
null
# What does this PR do? TF port of convnextv2 @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23155", "html_url": "https://github.com/huggingface/transformers/pull/23155", "diff_url": "https://github.com/huggingface/transformers/pull/23155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23155.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23154/comments
https://api.github.com/repos/huggingface/transformers/issues/23154/events
https://github.com/huggingface/transformers/pull/23154
1,696,438,573
PR_kwDOCUB6oc5PzPpy
23,154
Revert "Add FlaxWhisperForAudioClassification model"
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23154). All of your documentation changes will be reflected on that endpoint." ]
1,683
1,683
1,683
COLLABORATOR
null
Reverts huggingface/transformers#22883
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23154/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23154", "html_url": "https://github.com/huggingface/transformers/pull/23154", "diff_url": "https://github.com/huggingface/transformers/pull/23154.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23154.patch", "merged_at": 1683222427000 }
https://api.github.com/repos/huggingface/transformers/issues/23153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23153/comments
https://api.github.com/repos/huggingface/transformers/issues/23153/events
https://github.com/huggingface/transformers/pull/23153
1,696,352,391
PR_kwDOCUB6oc5Py8pZ
23,153
[`Blip`] Remove redundant shift right
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23153). All of your documentation changes will be reflected on that endpoint." ]
1,683
1,684
1,684
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23000 In fact `_shift_right` does not need to be called inside `BlipForQuestionAnswering`, since the right shifting of the tokens is already done in the text decoder, as the user pointed out. With the extra shift, that class would be trained to perform next-next-token prediction instead of next-token prediction. The fix is simply to remove that shift method and the call to it. cc @sgugger
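For context, a generic right-shift helper of the kind being removed (a sketch of the standard pattern, not the exact BLIP implementation): if the text decoder already applies this shift internally, calling it again before the decoder offsets the labels by two positions.

```python
import torch


def shift_right(input_ids: torch.Tensor, decoder_start_token_id: int, pad_token_id: int) -> torch.Tensor:
    # Prepend the start token and drop the last token, replacing ignore-index
    # positions (-100) with the pad token.
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```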
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23153/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23153/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23153", "html_url": "https://github.com/huggingface/transformers/pull/23153", "diff_url": "https://github.com/huggingface/transformers/pull/23153.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23153.patch", "merged_at": 1684516457000 }
https://api.github.com/repos/huggingface/transformers/issues/23152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23152/comments
https://api.github.com/repos/huggingface/transformers/issues/23152/events
https://github.com/huggingface/transformers/issues/23152
1,696,019,577
I_kwDOCUB6oc5lFzR5
23,152
resume checkpoint and continue training using deepspeed integration while changing the number of gpus
{ "login": "HalcyonLiang", "id": 20160375, "node_id": "MDQ6VXNlcjIwMTYwMzc1", "avatar_url": "https://avatars.githubusercontent.com/u/20160375?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HalcyonLiang", "html_url": "https://github.com/HalcyonLiang", "followers_url": "https://api.github.com/users/HalcyonLiang/followers", "following_url": "https://api.github.com/users/HalcyonLiang/following{/other_user}", "gists_url": "https://api.github.com/users/HalcyonLiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/HalcyonLiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HalcyonLiang/subscriptions", "organizations_url": "https://api.github.com/users/HalcyonLiang/orgs", "repos_url": "https://api.github.com/users/HalcyonLiang/repos", "events_url": "https://api.github.com/users/HalcyonLiang/events{/privacy}", "received_events_url": "https://api.github.com/users/HalcyonLiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,683
1,683
1,683
NONE
null
### System Info transformers version: 4.28.1 torch version: 1.13+cu116 Can the Trainer support resuming from a checkpoint while using a different number of GPUs? I saw that optimizer states are saved per rank when checkpoints are written, so how can I resume successfully after changing the number of GPUs? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction trainer.train(resume_from_checkpoint=checkpoint) ### Expected behavior Resuming successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23152/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23151/comments
https://api.github.com/repos/huggingface/transformers/issues/23151/events
https://github.com/huggingface/transformers/pull/23151
1,695,952,595
PR_kwDOCUB6oc5PxkmM
23,151
accelerate DDP integrate
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,685
1,685
CONTRIBUTOR
null
### What does this PR do? 1. Move DDP preparation to Accelerate. 2. This PR should be merged after #23148 3. No user-facing change. Now, users can use `accelerate launch` for DDP and MP, e.g., ``` accelerate launch --num_processes 2 --multi_gpu --mixed_precision "bf16" run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir ``` The previous way of using torchrun works as usual: ``` torchrun --nnodes 1 --nproc-per-node 2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --bf16 ``` Empirical nuances that I noticed: 1. As DDP uses Accelerate, the LR scheduler is run `num_processes` times per step, whereas previously it was only run once per step. Because of this, the LR decreases more rapidly when using Accelerate's integration. In the above example, I had to increase the LR from 2e-5 to 5e-5 to account for this behaviour and maintain the performance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23151", "html_url": "https://github.com/huggingface/transformers/pull/23151", "diff_url": "https://github.com/huggingface/transformers/pull/23151.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23151.patch", "merged_at": 1685520769000 }
https://api.github.com/repos/huggingface/transformers/issues/23150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23150/comments
https://api.github.com/repos/huggingface/transformers/issues/23150/events
https://github.com/huggingface/transformers/issues/23150
1,695,896,140
I_kwDOCUB6oc5lFVJM
23,150
After completion of Trainer.hyperparameter_search() attribute trainer.state.best_model_checkpoint references the last trained model instead of the best one
{ "login": "fantauzzi", "id": 2722433, "node_id": "MDQ6VXNlcjI3MjI0MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2722433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fantauzzi", "html_url": "https://github.com/fantauzzi", "followers_url": "https://api.github.com/users/fantauzzi/followers", "following_url": "https://api.github.com/users/fantauzzi/following{/other_user}", "gists_url": "https://api.github.com/users/fantauzzi/gists{/gist_id}", "starred_url": "https://api.github.com/users/fantauzzi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fantauzzi/subscriptions", "organizations_url": "https://api.github.com/users/fantauzzi/orgs", "repos_url": "https://api.github.com/users/fantauzzi/repos", "events_url": "https://api.github.com/users/fantauzzi/events{/privacy}", "received_events_url": "https://api.github.com/users/fantauzzi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hyperparameter search does not play well with the best model indeed. That's not something in our roadmap for fixing, but we are happy to look at any PR!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1Ht14ntTQy96-_zO-iVlwvAgkyY8t6vKc?usp=sharing Call `Trainer.hyperparameter_search()`; when it completes, attribute `Trainer.state.best_model_checkpoint` and other `Trainer.state` attributes reference the last trained model, in the sequence of models trained by `Trainer.hyperparameter_search()`. Note: to speed-up reproduction of the issue, I have limited the training dataset size in the provided code, line #49; that's why the evaluation metrics at the end of the hyperparameters search are poor. ### Expected behavior After `Trainer.hyperparameter_search()` completes, attribute `Trainer.state.best_model_checkpoint` should contain the filename of the checkpoint with the **best** model among all the models trained during hyperparameters search, not the **last** model; that is, the model trained during the run indicated in the `BestRun` instance returned by `hyperparameter_search()` Likewise, other `Trainer.state` attributes should relate to the same model, e.g: `Trainer.state.best_metric` `Trainer.state.epoch` `Trainer.state.global_step`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23150/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23149/comments
https://api.github.com/repos/huggingface/transformers/issues/23149/events
https://github.com/huggingface/transformers/pull/23149
1,695,874,868
PR_kwDOCUB6oc5PxTsk
23,149
gpt2 multi-gpu fix
{ "login": "peter-sk", "id": 6168908, "node_id": "MDQ6VXNlcjYxNjg5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peter-sk", "html_url": "https://github.com/peter-sk", "followers_url": "https://api.github.com/users/peter-sk/followers", "following_url": "https://api.github.com/users/peter-sk/following{/other_user}", "gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions", "organizations_url": "https://api.github.com/users/peter-sk/orgs", "repos_url": "https://api.github.com/users/peter-sk/repos", "events_url": "https://api.github.com/users/peter-sk/events{/privacy}", "received_events_url": "https://api.github.com/users/peter-sk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Move tensors to same device. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @younesbelkada @amyeroberts
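A minimal illustration of the pattern behind this fix (tensor names are assumptions): when model parallelism places modules on different devices, move one operand onto the other's device before combining them.

```python
import torch

logits = torch.randn(8, 50257)           # e.g. produced on one device
labels = torch.randint(0, 50257, (8,))   # e.g. still on another device (or CPU)

loss_fct = torch.nn.CrossEntropyLoss()
# Moving labels to the logits' device avoids "expected all tensors to be on the same device" errors.
loss = loss_fct(logits, labels.to(logits.device))
```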
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23149/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23149", "html_url": "https://github.com/huggingface/transformers/pull/23149", "diff_url": "https://github.com/huggingface/transformers/pull/23149.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23149.patch", "merged_at": 1683208718000 }
https://api.github.com/repos/huggingface/transformers/issues/23148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23148/comments
https://api.github.com/repos/huggingface/transformers/issues/23148/events
https://github.com/huggingface/transformers/pull/23148
1,695,779,875
PR_kwDOCUB6oc5Pw-22
23,148
accelerate mixed precision integrate
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,685
1,685
CONTRIBUTOR
null
### What does this PR do? 1. Shift Trainer's mixed precision handling to accelerate. Keeping this as a smaller PR instead of mixing it with the DDP, FSDP and DeepSpeed changes. 2. Sharded DDP and Apex are not supported in Accelerate, so those code paths in Trainer cannot be simplified or removed yet. 3. No user-facing changes. Users can now use `accelerate launch` to launch mixed precision training with the Trainer, for example: ``` accelerate launch --mixed_precision="bf16" run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --overwrite_output_dir ``` The previous usage via `python` or `torchrun` stays the same: ``` python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ~/temp/$TASK_NAME/ --fp16 --overwrite_output_dir ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23148/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23148", "html_url": "https://github.com/huggingface/transformers/pull/23148", "diff_url": "https://github.com/huggingface/transformers/pull/23148.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23148.patch", "merged_at": 1685516272000 }
https://api.github.com/repos/huggingface/transformers/issues/23147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23147/comments
https://api.github.com/repos/huggingface/transformers/issues/23147/events
https://github.com/huggingface/transformers/pull/23147
1,695,745,539
PR_kwDOCUB6oc5Pw3e5
23,147
[`GPT-J`] Fix causal mask dtype
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23136 When using `low_cpu_mem_usage`, each parameter is force-cast to the expected dtype, which is set to `torch.float16` for 8-bit models. Therefore, for 8-bit models (and also half-precision models) the causal mask is always force-cast to float16, since it is part of the model's state dict and hence expected to be loaded from the Hub when the mask is available in the state dict. The fix is to register the buffer with `persistent=False` and add a `_keys_to_ignore_on_unexpected` field (to silence the warnings) so that the causal mask is never loaded from the state dict and assigned to the buffer; all causal masks that are saved as buffers should do the same to avoid unexpected behavior. cc @sgugger
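A minimal sketch of the buffer pattern described above (illustrative only, not the PR diff; the module name and sizes are made up): registering the mask with `persistent=False` keeps it out of the state dict, so checkpoints never carry a wrongly-typed copy of it.

```python
import torch
import torch.nn as nn

class CausalMaskHolder(nn.Module):
    def __init__(self, max_positions: int = 2048):
        super().__init__()
        # persistent=False excludes the mask from state_dict(), so it is always
        # rebuilt at init time instead of being loaded (and dtype-cast) from a checkpoint.
        self.register_buffer(
            "bias",
            torch.tril(torch.ones(max_positions, max_positions, dtype=torch.bool)),
            persistent=False,
        )

holder = CausalMaskHolder()
print("bias" in holder.state_dict())  # False: the mask never travels with the weights
```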
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23147/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23147", "html_url": "https://github.com/huggingface/transformers/pull/23147", "diff_url": "https://github.com/huggingface/transformers/pull/23147.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23147.patch", "merged_at": 1683210679000 }
https://api.github.com/repos/huggingface/transformers/issues/23146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23146/comments
https://api.github.com/repos/huggingface/transformers/issues/23146/events
https://github.com/huggingface/transformers/issues/23146
1,695,428,645
I_kwDOCUB6oc5lDjAl
23,146
Loading quantization model video memory occupancy problem
{ "login": "Doraemon20190612", "id": 47512652, "node_id": "MDQ6VXNlcjQ3NTEyNjUy", "avatar_url": "https://avatars.githubusercontent.com/u/47512652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Doraemon20190612", "html_url": "https://github.com/Doraemon20190612", "followers_url": "https://api.github.com/users/Doraemon20190612/followers", "following_url": "https://api.github.com/users/Doraemon20190612/following{/other_user}", "gists_url": "https://api.github.com/users/Doraemon20190612/gists{/gist_id}", "starred_url": "https://api.github.com/users/Doraemon20190612/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Doraemon20190612/subscriptions", "organizations_url": "https://api.github.com/users/Doraemon20190612/orgs", "repos_url": "https://api.github.com/users/Doraemon20190612/repos", "events_url": "https://api.github.com/users/Doraemon20190612/events{/privacy}", "received_events_url": "https://api.github.com/users/Doraemon20190612/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Doraemon20190612 \r\nThanks for your interest in using this feature!\r\nAs stated on the warning:\r\n```bash\r\nDetected the presence of a `quantization_config` attribute in the model's configuration but you don't have the correct `bitsandbytes` version to support int8 serialization. Please install the latest version of `bitsandbytes` with `pip install --upgrade bitsandbytes`.\r\n```\r\nTherefore you need to upgrade `bitsandbytes` as stated on the warning. Can you try:\r\n```bash\r\npip install --upgrade bitsandbytes\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### System Info The original model is loaded as an 8bit model and saved. When the saved quantization model is loaded again, the video memory occupancy is the same as that of the original model. - `transformers` version: 4.29.0.dev0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.12 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @younesbelkada @Arthur ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction code: ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('D:/DL/pretrain_model/bloom-560m') model_q = AutoModelForCausalLM.from_pretrained('D:/DL/pretrain_model/bloom-560m', load_in_8bit=True, device_map='auto') print(model.get_memory_footprint()) print(model_q.get_memory_footprint()) model_q.save_pretrained('D:/DL/model_result/bloom-560m-8bit') model_q_again = AutoModelForCausalLM.from_pretrained('D:/DL/model_result/bloom-560m-8bit') print(model_q_again.get_memory_footprint()) ``` output: ``` C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ CUDA SETUP: CUDA runtime path found: C:\ProgramData\Anaconda3\envs\torch\bin\cudart64_110.dll CUDA SETUP: Highest compute capability among GPUs detected: 8.6 CUDA SETUP: Detected CUDA version 116 CUDA SETUP: Loading binary C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll... C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: C:\ProgramData\Anaconda3\envs\torch did not contain cudart64_110.dll as expected! Searching further paths... warn(msg) C:\ProgramData\Anaconda3\envs\torch\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('C:/ProgramData/Anaconda3/envs/torch/Library/mingw-w64/bin'), WindowsPath('C:/Program Files/MongoDB/Server/6.0/bin'), WindowsPath('C:/ProgramData/Anaconda3/envs/torch/Library/usr/bin')} warn(msg) 2236858368 816439296 Detected the presence of a `quantization_config` attribute in the model's configuration but you don't have the correct `bitsandbytes` version to support int8 serialization. Please install the latest version of `bitsandbytes` with `pip install --upgrade bitsandbytes`. 
2236858368 ``` ### Expected behavior I hope that when I reload the saved quantized model, the GPU memory usage corresponds to the actual size of the quantized model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23146/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23145/comments
https://api.github.com/repos/huggingface/transformers/issues/23145/events
https://github.com/huggingface/transformers/issues/23145
1,695,214,569
I_kwDOCUB6oc5lCuvp
23,145
Detr Models cannot be loaded with `device_map="auto"`
{ "login": "chiragjn", "id": 10295418, "node_id": "MDQ6VXNlcjEwMjk1NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chiragjn", "html_url": "https://github.com/chiragjn", "followers_url": "https://api.github.com/users/chiragjn/followers", "following_url": "https://api.github.com/users/chiragjn/following{/other_user}", "gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions", "organizations_url": "https://api.github.com/users/chiragjn/orgs", "repos_url": "https://api.github.com/users/chiragjn/repos", "events_url": "https://api.github.com/users/chiragjn/events{/privacy}", "received_events_url": "https://api.github.com/users/chiragjn/received_events", "type": "User", "site_admin": false }
[ { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @alaradirik and @amyeroberts ", "Hi @chiragjn, I was able to replicate the error on my local (also macOS-13.1-x86_64-i386-64bit) and I'm looking into the issue.", "A quick update - I tracked down the issue to the accelerate library, setting `device_map=True` sets `low_cpu_mem_usage` to True. This causes the model parameters to be initialized as meta tensors, which can not be copied to CPU or GPU without tensor conversion.\r\n\r\nThis issue also affects DETA, Conditional DETR, Deformable DETR and Table Transformers as they have identical frozen modules that are initialized by copying the parameters of their respective backbone models. We will be opening a fix PR shortly!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey, is there any progress with this issue?", "Hi @AlonZolfi, @alaradirik has now left Hugging Face, so I'm picking this up. \r\n\r\nAs @alaradirik mentions, this arises as a consequence the replacement of the batch norm in the backbone of these models. I'll be digging into it properly next week when I have a bit more time. \r\n\r\nRe-opening the issue as it's not yet solved and will keep you posted! ", "It closed again, there was some progress with the issue?", "@amyeroberts the problem is indeed annoying, I have similar problem fine-tuning some models like llama. anyone working to solve it?", "Hey @amyeroberts, was this issue solved already?", "@AlonZolfi @ranchlai No, I unfortunately haven't had bandwidth to address this yet. I'm marking it as a difficult issue that anyone in the community can try and tackle if they wish. ", "I got the same problem when using accelerate, doing `model.cuda()` worked as expected. \r\nThe related PR is #26150\r\nwhere:\r\n```python\r\nfrom transformers import AutoModelForSeq2SeqLM\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map=\"auto\")\r\n```\r\n\r\nso pinging @muellerzr as this is probably related to our hf hooks. Now I might be creating the buffer and tensors in a wrong way but can´t get it to load so help is appreciated! (See the `UMTRelativePositionalBias` class) \r\n\r\n(using accelerate 0.22.0)", "cc @SunMarc ", "Hi @ArthurZucker , I left a few comments on the PR to explain the issue. Hope that you have enough context to fix the problem ;) " ]
1,683
1,694
null
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: macOS-13.1-x86_64-i386-64bit - Python version: 3.9.2 - Huggingface_hub version: 0.12.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline p = pipeline( "object-detection", model="facebook/detr-resnet-50", image_processor="facebook/detr-resnet-50", device_map="auto" ) ``` ### Expected behavior This does not work because the `transformers.models.detr.modeling_detr.DetrConvEncoder` model init involves copy weights from `nn.BatchNorm2d` to `DetrFrozenBatchNorm2d` which is not allowed when on a meta device. ``` File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline framework, model = infer_framework_load_model( File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/pipelines/base.py", line 262, in infer_framework_load_model model = model_class.from_pretrained(model, **kwargs) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 471, in from_pretrained return model_class.from_pretrained( File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 1373, in __init__ self.model = DetrModel(config) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 1205, in __init__ backbone = DetrConvEncoder(config) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 354, in __init__ replace_batch_norm(backbone) File "/Users/chiragjn/venv39/lib/python3.9/site-packages/transformers/models/detr/modeling_detr.py", line 314, in replace_batch_norm frozen.weight.data.copy_(bn.weight) NotImplementedError: Cannot copy out of meta tensor; no data! ``` The model loads fine with a specific device with `device` argument.
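A small, self-contained illustration of the failure mode in the traceback above (plain PyTorch only; the tensors here are made up): parameters created on the `meta` device have no storage, so copying out of them raises the same error.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(4, device="meta")   # weights exist only as shape/dtype metadata
frozen_weight = torch.ones(4)

try:
    frozen_weight.data.copy_(bn.weight)  # same call shape as replace_batch_norm
except NotImplementedError as err:
    print(err)  # Cannot copy out of meta tensor; no data!
```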
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23145/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/23144
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23144/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23144/comments
https://api.github.com/repos/huggingface/transformers/issues/23144/events
https://github.com/huggingface/transformers/pull/23144
1,695,210,392
PR_kwDOCUB6oc5PvB6C
23,144
Remove typo in perf_train_gpu_many.mdx
{ "login": "MrGeislinger", "id": 9027783, "node_id": "MDQ6VXNlcjkwMjc3ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/9027783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrGeislinger", "html_url": "https://github.com/MrGeislinger", "followers_url": "https://api.github.com/users/MrGeislinger/followers", "following_url": "https://api.github.com/users/MrGeislinger/following{/other_user}", "gists_url": "https://api.github.com/users/MrGeislinger/gists{/gist_id}", "starred_url": "https://api.github.com/users/MrGeislinger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MrGeislinger/subscriptions", "organizations_url": "https://api.github.com/users/MrGeislinger/orgs", "repos_url": "https://api.github.com/users/MrGeislinger/repos", "events_url": "https://api.github.com/users/MrGeislinger/events{/privacy}", "received_events_url": "https://api.github.com/users/MrGeislinger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
Simple typo in the documentation (excess `w` in the word `bottom`) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? (N/A) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23144/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23144", "html_url": "https://github.com/huggingface/transformers/pull/23144", "diff_url": "https://github.com/huggingface/transformers/pull/23144.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23144.patch", "merged_at": 1683208605000 }
https://api.github.com/repos/huggingface/transformers/issues/23143
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23143/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23143/comments
https://api.github.com/repos/huggingface/transformers/issues/23143/events
https://github.com/huggingface/transformers/pull/23143
1,695,206,645
PR_kwDOCUB6oc5PvBGD
23,143
fix spelling error
{ "login": "digger-yu", "id": 55081697, "node_id": "MDQ6VXNlcjU1MDgxNjk3", "avatar_url": "https://avatars.githubusercontent.com/u/55081697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/digger-yu", "html_url": "https://github.com/digger-yu", "followers_url": "https://api.github.com/users/digger-yu/followers", "following_url": "https://api.github.com/users/digger-yu/following{/other_user}", "gists_url": "https://api.github.com/users/digger-yu/gists{/gist_id}", "starred_url": "https://api.github.com/users/digger-yu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/digger-yu/subscriptions", "organizations_url": "https://api.github.com/users/digger-yu/orgs", "repos_url": "https://api.github.com/users/digger-yu/repos", "events_url": "https://api.github.com/users/digger-yu/events{/privacy}", "received_events_url": "https://api.github.com/users/digger-yu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? fix spelling error change referrred to referred ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23143/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23143", "html_url": "https://github.com/huggingface/transformers/pull/23143", "diff_url": "https://github.com/huggingface/transformers/pull/23143.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23143.patch", "merged_at": 1683208588000 }
https://api.github.com/repos/huggingface/transformers/issues/23142
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23142/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23142/comments
https://api.github.com/repos/huggingface/transformers/issues/23142/events
https://github.com/huggingface/transformers/pull/23142
1,695,120,160
PR_kwDOCUB6oc5Put4G
23,142
Add TrOCR resources
{ "login": "huangperry", "id": 60767670, "node_id": "MDQ6VXNlcjYwNzY3Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/60767670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huangperry", "html_url": "https://github.com/huangperry", "followers_url": "https://api.github.com/users/huangperry/followers", "following_url": "https://api.github.com/users/huangperry/following{/other_user}", "gists_url": "https://api.github.com/users/huangperry/gists{/gist_id}", "starred_url": "https://api.github.com/users/huangperry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huangperry/subscriptions", "organizations_url": "https://api.github.com/users/huangperry/orgs", "repos_url": "https://api.github.com/users/huangperry/repos", "events_url": "https://api.github.com/users/huangperry/events{/privacy}", "received_events_url": "https://api.github.com/users/huangperry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Adds resources for TrOCR according to https://github.com/huggingface/transformers/issues/20055 Fixes #20055 (partially) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23142/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23142/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23142", "html_url": "https://github.com/huggingface/transformers/pull/23142", "diff_url": "https://github.com/huggingface/transformers/pull/23142.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23142.patch", "merged_at": 1683300560000 }
https://api.github.com/repos/huggingface/transformers/issues/23141
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23141/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23141/comments
https://api.github.com/repos/huggingface/transformers/issues/23141/events
https://github.com/huggingface/transformers/pull/23141
1,694,949,237
PR_kwDOCUB6oc5PuH9q
23,141
fix: Passing language as acronym to Whisper generate
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sanchit-gandhi ", "Thank you @sanchit-gandhi I made the requested changes and just have two callouts regarding the changes\r\n\r\nOne is I also realized we could use `TO_LANGUAGE_CODE.values()` instead of `LANGUAGES.keys()` for checking the acronym so I made that edit, and two is to keep that test fast I did a hacky setattr for the `generation_config` so it had some properties that `model.generate` is expecting it to have", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23141). All of your documentation changes will be reflected on that endpoint." ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Fixes #23140 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @hollance @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23141/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23141", "html_url": "https://github.com/huggingface/transformers/pull/23141", "diff_url": "https://github.com/huggingface/transformers/pull/23141.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23141.patch", "merged_at": 1683301940000 }
https://api.github.com/repos/huggingface/transformers/issues/23140
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23140/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23140/comments
https://api.github.com/repos/huggingface/transformers/issues/23140/events
https://github.com/huggingface/transformers/issues/23140
1,694,947,442
I_kwDOCUB6oc5lBthy
23,140
Whisper generation support for passing acronym to language arg
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Yeah I'm not sure why it was decided the language token had to be passed in there, and at the very least the current error message is misleading. Arthur is probably the best person to look at this." ]
1,683
1,683
1,683
CONTRIBUTOR
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.0 - Safetensors version: 0.2.8 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @hollance @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```py processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"]["array"] input_features = processor.feature_extractor(sample, return_tensors="pt").input_features pred_ids = model.generate(input_features, language="de") ``` Throws this error: <img width="778" alt="Screenshot 2023-05-03 at 6 29 38 PM" src="https://user-images.githubusercontent.com/78612354/236067028-ee7ab371-e9a2-44eb-9895-b5c8f3a2fcdd.png"> Then this error when that's fixed: <img width="1198" alt="Screenshot 2023-05-03 at 6 30 34 PM" src="https://user-images.githubusercontent.com/78612354/236067052-8f1ae574-db51-44e4-800c-aa4f38b0200e.png"> ### Expected behavior Should recognize and use language passed in acronym format as per the docstring
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23140/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23139
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23139/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23139/comments
https://api.github.com/repos/huggingface/transformers/issues/23139/events
https://github.com/huggingface/transformers/pull/23139
1,694,807,082
PR_kwDOCUB6oc5PtonY
23,139
Generate: text generation pipeline no longer emits `max_length` warning when it is not set
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Narsil added test 👍 (precisely as suggested, using a small model checks that the warning is raised only when it should)", "Sorry am I wrong or this is still an issue on `text2text_generation`?\r\nI'm asking this because i keep seeing these warnings on stdout during a summarization task:\r\n\r\n> Your max_length is set to 1024, but your input_length is only 197. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=98)\r\n> Both `max_new_tokens` (=100) and `max_length`(=1024) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)\r\n\r\nAnd I can see these lines in `src/transformers/pipelines/text2text_generation.py:184-186`:\r\n```python\r\n generate_kwargs[\"min_length\"] = generate_kwargs.get(\"min_length\", self.model.config.min_length)\r\n generate_kwargs[\"max_length\"] = generate_kwargs.get(\"max_length\", self.model.config.max_length)\r\n self.check_inputs(input_length, generate_kwargs[\"min_length\"], generate_kwargs[\"max_length\"])\r\n```", "@ndricca it's possible, they are different pipelines. I will check whether we can sort this one out :) ", "@ndricca the PR linked above sorts the (incorrect) warning you were seeing :)" ]
1,683
1,694
1,683
MEMBER
null
# What does this PR do? Fixes #22636 In the `text-generation` pipeline, `max_length` is updated to take into account the prefix (which defaults to the BOS token). When `max_new_tokens` was set, it meant that `.generate` received both parameters, triggering the warning. This PR defers the `max_length` update to right before generation, in case `max_new_tokens` is set at call time, and only updates it if `max_new_tokens` is not set -- avoiding triggering the warning if the user has not set `max_length` while keeping the same behavior. ____________________________________ Test script, which was triggering the warning before this change: ```py from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, GenerationConfig device = "cuda:0" model_name = "facebook/opt-1.3b" # tokenizer, model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True, pad_token_id=tokenizer.eos_token_id ).to(device) # pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=device) # generate text text = "Hello " result = pipe( text, generation_config=GenerationConfig( max_new_tokens=70, num_beams=1, do_sample=False ) ) # print result print(result) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23139/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23139", "html_url": "https://github.com/huggingface/transformers/pull/23139", "diff_url": "https://github.com/huggingface/transformers/pull/23139.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23139.patch", "merged_at": 1683221783000 }
https://api.github.com/repos/huggingface/transformers/issues/23138
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23138/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23138/comments
https://api.github.com/repos/huggingface/transformers/issues/23138/events
https://github.com/huggingface/transformers/issues/23138
1,694,768,661
I_kwDOCUB6oc5lBB4V
23,138
BatchEncoding breaks duck-typing, either document or auto-cast to dict
{ "login": "atyshka", "id": 19317207, "node_id": "MDQ6VXNlcjE5MzE3MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19317207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atyshka", "html_url": "https://github.com/atyshka", "followers_url": "https://api.github.com/users/atyshka/followers", "following_url": "https://api.github.com/users/atyshka/following{/other_user}", "gists_url": "https://api.github.com/users/atyshka/gists{/gist_id}", "starred_url": "https://api.github.com/users/atyshka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atyshka/subscriptions", "organizations_url": "https://api.github.com/users/atyshka/orgs", "repos_url": "https://api.github.com/users/atyshka/repos", "events_url": "https://api.github.com/users/atyshka/events{/privacy}", "received_events_url": "https://api.github.com/users/atyshka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This was additionally made harder to debug because the typing for prediction_step indicates a Dict when it is actually a BatchEncoding. Since they print out the same, the only way I identified the cause of the bug was by stepping through and inspecting the types.\r\n\r\n```\r\n def prediction_step(\r\n self,\r\n model: nn.Module,\r\n inputs: Dict[str, Union[torch.Tensor, Any]],\r\n prediction_loss_only: bool,\r\n ignore_keys: Optional[List[str]] = None,\r\n ) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:\r\n```", "1 and 2 are not viable options on our side. We do rely on properties of `BatchEncoding` in our inputs. You can try 3 for PyTorch, the check that is needed is\r\n```\r\nfrom collections.abc import Mapping\r\n\r\nif isinstance(obj, Mapping) and len(obj) > 0:\r\n return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]\r\n```\r\n(which is the one we use in all our tooling), but I don't guarantee they will accept it.\r\n\r\nOr you can do the conversion from `BatchEncoding` to `dict` in your own subclass of the Trainer.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### Feature request I would like to propose that somewhere in the typical model pipeline, such as the data collators or the Trainer loop, instances of BatchEncoding should be converted to dicts to avoid breaking DataParallel models. Alternatively, I could add to the documentation to state that BatchEncodings should not be passed to PyTorch models. Maintainers, please let me know which solution you would prefer. ### Motivation I am currently trying to integrate the sentence-transformers library with the nice Trainer API. One of the peculiarities of sentence-transformers is that rather than unpacking the model parameters like so `outputs = model(**inputs)`, the model expects a dict as input, which is internally unpacked. No big deal, I just override the `prediction_step` and remove the unpacking. However, this strangely failed when using a DataParallel setup. I realized this is because the following code in DataParallel could not handle BatchEncodings: ``` def scatter(inputs, target_gpus, dim=0): r""" Slices tensors into approximately equal chunks and distributes them across given GPUs. Duplicates references to objects that are not tensors. """ def scatter_map(obj): if isinstance(obj, torch.Tensor): return Scatter.apply(target_gpus, None, dim, obj) if _is_namedtuple(obj): return [type(obj)(*args) for args in zip(*map(scatter_map, obj))] if isinstance(obj, tuple) and len(obj) > 0: return list(zip(*map(scatter_map, obj))) if isinstance(obj, list) and len(obj) > 0: return [list(i) for i in zip(*map(scatter_map, obj))] if isinstance(obj, dict) and len(obj) > 0: return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))] return [obj for targets in target_gpus] ``` DataParallel walks like a dict, talks like a dict, prints out exactly like a dict, but is not a subclass of dict, and therefore breaks this code. The tensors are not distributed across GPUs, and errors result. In my opinion, this violates the idea of duck typing, but I realize that technically it's PyTorch who's breaking duck typing here. Normally, this isn't a problem since Trainer unpacks the model args (BatchEncoding) in its predict_step, but I can't imagine sentence-transformers is the only library that does not adopt this convention. ### Your contribution I would like to do one of three things: 1. Change DataCollators to output dicts or have Trainer check for BatchEncodings and convert them to dicts 2. Document that BatchEncodings should not be passed to models 3. Submit a PR to PyTorch that modifies this function to check for UserDicts like BatchEncodings
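A hedged sketch of the workaround in option 1 above (the helper below is hypothetical, not part of transformers): unwrapping the `BatchEncoding` into a plain `dict` before it reaches `DataParallel` satisfies the `isinstance(obj, dict)` check in `scatter_map`.

```python
from transformers import BatchEncoding

def unwrap_batch_encoding(inputs):
    # BatchEncoding is a UserDict, not a dict subclass, so DataParallel's scatter
    # duplicates it on every GPU instead of splitting it; a plain dict is split correctly.
    if isinstance(inputs, BatchEncoding):
        return dict(inputs)
    return inputs
```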
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23138/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23137
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23137/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23137/comments
https://api.github.com/repos/huggingface/transformers/issues/23137/events
https://github.com/huggingface/transformers/issues/23137
1,694,749,483
I_kwDOCUB6oc5lA9Mr
23,137
TypeError: is_accelerate_available() got an unexpected keyword argument 'check_partial_state'.
{ "login": "nipunikajain", "id": 42820022, "node_id": "MDQ6VXNlcjQyODIwMDIy", "avatar_url": "https://avatars.githubusercontent.com/u/42820022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nipunikajain", "html_url": "https://github.com/nipunikajain", "followers_url": "https://api.github.com/users/nipunikajain/followers", "following_url": "https://api.github.com/users/nipunikajain/following{/other_user}", "gists_url": "https://api.github.com/users/nipunikajain/gists{/gist_id}", "starred_url": "https://api.github.com/users/nipunikajain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nipunikajain/subscriptions", "organizations_url": "https://api.github.com/users/nipunikajain/orgs", "repos_url": "https://api.github.com/users/nipunikajain/repos", "events_url": "https://api.github.com/users/nipunikajain/events{/privacy}", "received_events_url": "https://api.github.com/users/nipunikajain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like you may have a borked install of Transformers? If installing from source, that function does accept `check_partial_state` as can be seen [here](https://github.com/huggingface/transformers/blob/78b7debf56efb907c6af767882162050d4fbb294/src/transformers/utils/import_utils.py#L582).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction trainer = CustomTrainer( model=model, train_dataset=train_data_loader, eval_dataset=validation_data_loader, args=transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=30, max_steps=50, learning_rate=2e-4, fp16=True, logging_steps=1, evaluation_strategy="steps", output_dir="outputs", weight_decay=0.01, # L2 regularization ), data_collator=transformers.DataCollatorForLanguageModeling( tokenizer, mlm=False ), ) # # Add dropout to the model (if not already present) # model.config.dropout = 0.1 model.config.use_cache = False # silence the warning, Please re-enable for inference! trainer.train() ### Expected behavior @ArthurZucker and @younesbelkada This should ideally start a traning loop but instead I am getting unexpected keyword argument check_partial_state TypeError Traceback (most recent call last) ────────────────────────────────╮ │ in <cell line: 1>:5 │ │ in __init__:111 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1279 in __post_init__ │ │ │ │ 1276 │ │ if ( │ │ 1277 │ │ │ self.framework == "pt" │ │ 1278 │ │ │ and is_torch_available() │ │ ❱ 1279 │ │ │ and (self.device.type != "cuda") │ │ 1280 │ │ │ and (get_xla_device_type(self.device) != "GPU") │ │ 1281 │ │ │ and (self.fp16 or self.fp16_full_eval) │ │ 1282 │ │ ): │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1643 in device │ │ │ │ 1640 │ │ The device used by this process. │ │ 1641 │ │ """ │ │ 1642 │ │ requires_backends(self, ["torch"]) │ │ ❱ 1643 │ │ return self._setup_devices │ │ 1644 │ │ │ 1645 │ @property │ │ 1646 │ def n_gpu(self): │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:54 in __get__ │ │ │ │ 51 │ │ attr = "__cached_" + self.fget.__name__ │ │ 52 │ │ cached = getattr(obj, attr, None) │ │ 53 │ │ if cached is None: │ │ ❱ 54 │ │ │ cached = self.fget(obj) │ │ 55 │ │ │ setattr(obj, attr, cached) │ │ 56 │ │ return cached │ │ 57 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1558 in _setup_devices │ │ │ │ 1555 │ def _setup_devices(self) -> "torch.device": │ │ 1556 │ │ requires_backends(self, ["torch"]) │ │ 1557 │ │ logger.info("PyTorch: setting up devices") │ │ ❱ 1558 │ │ if not is_sagemaker_mp_enabled() and not is_accelerate_available(check_partial_s │ │ 1559 │ │ │ raise ImportError( │ │ 1560 │ │ │ │ "Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip inst │ │ 1561 │ │ │ ) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: is_accelerate_available() got an unexpected keyword argument 'check_partial_state'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23137/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23136
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23136/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23136/comments
https://api.github.com/repos/huggingface/transformers/issues/23136/events
https://github.com/huggingface/transformers/issues/23136
1,694,691,467
I_kwDOCUB6oc5lAvCL
23,136
[GPT-J] where expected condition to be a boolean tensor, but got a tensor with dtype Half
{ "login": "Praful932", "id": 45713796, "node_id": "MDQ6VXNlcjQ1NzEzNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Praful932", "html_url": "https://github.com/Praful932", "followers_url": "https://api.github.com/users/Praful932/followers", "following_url": "https://api.github.com/users/Praful932/following{/other_user}", "gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}", "starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praful932/subscriptions", "organizations_url": "https://api.github.com/users/Praful932/orgs", "repos_url": "https://api.github.com/users/Praful932/repos", "events_url": "https://api.github.com/users/Praful932/events{/privacy}", "received_events_url": "https://api.github.com/users/Praful932/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Praful932 \r\nThanks for the issue, I have managed to reproduce it and fix it with https://github.com/huggingface/transformers/pull/23147\r\nCan you try to uninstall `transformers` and install it again from source?\r\n```bash\r\npip install git+https://github.com/huggingface/transformers\r\n```", "This is working, Thanks for the quick fix!" ]
1,683
1,684
1,683
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Colab example - [Link](https://colab.research.google.com/drive/1D689vHxZk5Bov5piIXOQ62t3M_cim4dr?usp=sharing) ### Expected behavior I expected some output generated from the model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23136/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23135
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23135/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23135/comments
https://api.github.com/repos/huggingface/transformers/issues/23135/events
https://github.com/huggingface/transformers/issues/23135
1,694,667,688
I_kwDOCUB6oc5lApOo
23,135
loss 0.0 or NaN when training T5 or Flan-T5 models with bf16 on multiple GPUs
{ "login": "cchen-dialpad", "id": 47165889, "node_id": "MDQ6VXNlcjQ3MTY1ODg5", "avatar_url": "https://avatars.githubusercontent.com/u/47165889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cchen-dialpad", "html_url": "https://github.com/cchen-dialpad", "followers_url": "https://api.github.com/users/cchen-dialpad/followers", "following_url": "https://api.github.com/users/cchen-dialpad/following{/other_user}", "gists_url": "https://api.github.com/users/cchen-dialpad/gists{/gist_id}", "starred_url": "https://api.github.com/users/cchen-dialpad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cchen-dialpad/subscriptions", "organizations_url": "https://api.github.com/users/cchen-dialpad/orgs", "repos_url": "https://api.github.com/users/cchen-dialpad/repos", "events_url": "https://api.github.com/users/cchen-dialpad/events{/privacy}", "received_events_url": "https://api.github.com/users/cchen-dialpad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "BTW, if I use deepspeed to launch the script then `loss` and `eval_loss` are normal.\r\nIn addition, the 0.0 or NaN only happens when it is trained on multiple GPUs in the DataParallel mode, which is aligned with this issue: https://github.com/huggingface/transformers/issues/18899", "Thank you for the good report and the link to the other similar issue, @cchen-dialpad \r\n\r\nI think hardly anybody uses DP since DDP was introduced. Is there a reason to use DP when DDP is by far more superior?\r\n\r\nSwitching to DDP was the resolution of the ticket you linked to: https://github.com/huggingface/transformers/issues/18899#issuecomment-1249262873\r\n\r\n", "@stas00 Oh, I guess I just got curious why that is the case, DDP works but DP doesn't :)", "You're more than welcome to try to figure it out, @cchen-dialpad - I haven't used DP in many years, perhaps it's not being well maintained because it's rarely used? It'd be an optimizer issue most likely if you want a place to start.", "lol I see, thanks for the pointers!", "this happened to me! ddp, bf16, t5-large, also FSDP" ]
1,683
1,695
1,683
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.19.0-1022-gcp-x86_64-with-Ubuntu-22.04-jammy - Python version: 3.7.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Launch the example summarization training pipeline with `--bf16` and one of the Flan-T5 or T5 models such as `google/flan-t5-base` and `t5-large`. ``` python3.7 examples/pytorch/summarization/run_summarization.py \ --report_to none \ --bf16 \ --model_name_or_path google/flan-t5-base \ --evaluation_strategy steps \ --logging_strategy steps \ --logging_steps 10 \ --save_strategy steps \ --save_steps 30000 \ --num_train_epochs 3 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --max_train_samples=10000 \ --max_eval_samples=100 \ --overwrite_output_dir \ --overwrite_cache ``` Then you will see the following logs: ``` {'loss': 0.0, 'learning_rate': 4.893503727369542e-05, 'epoch': 0.06} {'eval_loss': nan, 'eval_runtime': 1.6413, 'eval_samples_per_second': 60.927, 'eval_steps_per_second': 2.437, 'epoch': 0.06} ``` ### Expected behavior Loss and eval_loss are wrong during training. Only T5 model works properly is `t5-small`, same as what was mentioned here: https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139, but other `T5` or `Flan-T5` models (including `flan-t5-small`) still suffer from this issue. I can train with FP32 without this problem, but would like to know if the fix (maybe not relevant here? since bf16 is different from fp16) mentioned has been incorporated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23135/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23134
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23134/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23134/comments
https://api.github.com/repos/huggingface/transformers/issues/23134/events
https://github.com/huggingface/transformers/pull/23134
1,694,558,904
PR_kwDOCUB6oc5PsxwU
23,134
Tidy Pytorch GLUE benchmark example
{ "login": "tlby", "id": 3189927, "node_id": "MDQ6VXNlcjMxODk5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/3189927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tlby", "html_url": "https://github.com/tlby", "followers_url": "https://api.github.com/users/tlby/followers", "following_url": "https://api.github.com/users/tlby/following{/other_user}", "gists_url": "https://api.github.com/users/tlby/gists{/gist_id}", "starred_url": "https://api.github.com/users/tlby/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tlby/subscriptions", "organizations_url": "https://api.github.com/users/tlby/orgs", "repos_url": "https://api.github.com/users/tlby/repos", "events_url": "https://api.github.com/users/tlby/events{/privacy}", "received_events_url": "https://api.github.com/users/tlby/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
Migration to Evaluate for metric is not quite complete # What does this PR do? #18369 left the Pytorch GLUE Benchmark example a bit rough, still hand implementing some metrics, and leaving metric objects constructed but unused in some cases. This work completes the migration to Evaluate. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @atturaioe
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23134/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23134", "html_url": "https://github.com/huggingface/transformers/pull/23134", "diff_url": "https://github.com/huggingface/transformers/pull/23134.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23134.patch", "merged_at": 1683143442000 }
https://api.github.com/repos/huggingface/transformers/issues/23133
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23133/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23133/comments
https://api.github.com/repos/huggingface/transformers/issues/23133/events
https://github.com/huggingface/transformers/pull/23133
1,694,432,193
PR_kwDOCUB6oc5PsWJe
23,133
Remove redundant print statements
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Removes leftover comments / print lines from `test_backbone_common.py`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23133/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23133", "html_url": "https://github.com/huggingface/transformers/pull/23133", "diff_url": "https://github.com/huggingface/transformers/pull/23133.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23133.patch", "merged_at": 1683133489000 }
https://api.github.com/repos/huggingface/transformers/issues/23132
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23132/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23132/comments
https://api.github.com/repos/huggingface/transformers/issues/23132/events
https://github.com/huggingface/transformers/issues/23132
1,694,322,124
I_kwDOCUB6oc5k_U3M
23,132
Add Unlimiformer to 🤗 transformers
{ "login": "tanaymeh", "id": 26519539, "node_id": "MDQ6VXNlcjI2NTE5NTM5", "avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanaymeh", "html_url": "https://github.com/tanaymeh", "followers_url": "https://api.github.com/users/tanaymeh/followers", "following_url": "https://api.github.com/users/tanaymeh/following{/other_user}", "gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions", "organizations_url": "https://api.github.com/users/tanaymeh/orgs", "repos_url": "https://api.github.com/users/tanaymeh/repos", "events_url": "https://api.github.com/users/tanaymeh/events{/privacy}", "received_events_url": "https://api.github.com/users/tanaymeh/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,683
1,684
1,684
CONTRIBUTOR
null
### Model description I want to add the recently released Unlimiformer (Long-Range Transformers with Unlimited Length Input) model to 🤗 transformers. Will this be a good addition? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2305.01625 Code: https://github.com/abertsch72/unlimiformer Model weights: https://github.com/abertsch72/unlimiformer#trained-models Authors: @abertsch72 cc @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23132/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23132/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/23131
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23131/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23131/comments
https://api.github.com/repos/huggingface/transformers/issues/23131/events
https://github.com/huggingface/transformers/pull/23131
1,694,217,588
PR_kwDOCUB6oc5PrniY
23,131
Handle padding warning in generation when using `inputs_embeds`
{ "login": "zrthxn", "id": 35369637, "node_id": "MDQ6VXNlcjM1MzY5NjM3", "avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrthxn", "html_url": "https://github.com/zrthxn", "followers_url": "https://api.github.com/users/zrthxn/followers", "following_url": "https://api.github.com/users/zrthxn/following{/other_user}", "gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions", "organizations_url": "https://api.github.com/users/zrthxn/orgs", "repos_url": "https://api.github.com/users/zrthxn/repos", "events_url": "https://api.github.com/users/zrthxn/events{/privacy}", "received_events_url": "https://api.github.com/users/zrthxn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante", "_The documentation is not available anymore as the PR was closed or merged._", "> Why not simply add `and len(inputs_tensor.shape) == 2` to the `if`? Short, clear code is easier to maintain 🙌\r\n\r\nJust adding `and len(inputs_tensor.shape) == 2` wouldn't print the warning if using `inputs_embeds`. Would you prefer that behaviour? Also, this logic would fail if the pad token for some embedding is **not** a tensor full of the `pad_token_id`.", "> Just adding and len(inputs_tensor.shape) == 2 wouldn't print the warning if using inputs_embeds. Would you prefer that behaviour? Also, this logic would fail if the pad token for some embedding is not a tensor full of the pad_token_id.\r\n\r\nYeah, I'd prefer the shorter version with the `len(inputs_tensor.shape) == 2` check only. The reason being that although it is less precise, using the embeddings as an input is an advanced use case, for which we tend to be more hands-off. It also makes the code shorter and, therefore, more readable :)\r\n\r\n(if we were to make complete checks at all points, the code would quickly become unmaintainable)", "Alright, I've changed the logic to be very simple now. It just doesn't check this condition if `inputs_embeds` was passed.", "@zrthxn this PR needs to be rebased with `main` -- we fixed a dependency issue that's showing up on the CI down here 😬 apologies for the extra work!", "@gante no problem! Just did that. " ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Fixes: #23042 if `input_ids` was given, check if the last id in any sequence is `pad_token_id` if `inputs_embeds` was given, check if the last embed in any sequence is *all* zeros - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [Please add a link to it if that's the case.](https://github.com/huggingface/transformers/issues/23042) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23131/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23131", "html_url": "https://github.com/huggingface/transformers/pull/23131", "diff_url": "https://github.com/huggingface/transformers/pull/23131.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23131.patch", "merged_at": 1683907576000 }
https://api.github.com/repos/huggingface/transformers/issues/23130
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23130/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23130/comments
https://api.github.com/repos/huggingface/transformers/issues/23130/events
https://github.com/huggingface/transformers/pull/23130
1,694,212,479
PR_kwDOCUB6oc5PrmcX
23,130
<wip> Early draft of crossformer model
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23130). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @raghavanone, thanks for opening this PR! \r\n\r\nThe easiest, fastest and preferred way to add a new model is directly onto the hub: https://huggingface.co/docs/transformers/model_sharing\r\n\r\nThe bar for adding models into the transformers repo through a PR is a lot higher and will require all passing tests and approval from a maintainer. As such, adding this way will take a lot longer. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge Request you to open this PR." ]
1,683
1,695
1,690
CONTRIBUTOR
null
Fixes #22852
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23130/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23130", "html_url": "https://github.com/huggingface/transformers/pull/23130", "diff_url": "https://github.com/huggingface/transformers/pull/23130.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23130.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23129
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23129/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23129/comments
https://api.github.com/repos/huggingface/transformers/issues/23129/events
https://github.com/huggingface/transformers/issues/23129
1,694,178,455
I_kwDOCUB6oc5k-xyX
23,129
KeyError: 'llama' on using any variant of OpenAssistant LLaMa models
{ "login": "MoaazZaki", "id": 44510702, "node_id": "MDQ6VXNlcjQ0NTEwNzAy", "avatar_url": "https://avatars.githubusercontent.com/u/44510702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoaazZaki", "html_url": "https://github.com/MoaazZaki", "followers_url": "https://api.github.com/users/MoaazZaki/followers", "following_url": "https://api.github.com/users/MoaazZaki/following{/other_user}", "gists_url": "https://api.github.com/users/MoaazZaki/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoaazZaki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoaazZaki/subscriptions", "organizations_url": "https://api.github.com/users/MoaazZaki/orgs", "repos_url": "https://api.github.com/users/MoaazZaki/repos", "events_url": "https://api.github.com/users/MoaazZaki/events{/privacy}", "received_events_url": "https://api.github.com/users/MoaazZaki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The error message shows an older version of Transformers (4.27.0) are you sure you are executing this in the right Python environment?", "That's right, upgrading to transformers >= 4.28.1 solved the issue.\r\n\r\nThank you 🙌", "I have the same problem, the environment where I am installing the new version of transformers is here:\r\n\r\n`RUN /opt/miniconda/envs/worker/bin/pip install -r requirements.txt`\r\n\r\nbut the line where the download_model.py is being executed uses another enviroment:\r\n\r\n/opt/miniconda/envs/text-generation/bin/python /worker/download_model.py\r\n\r\nI want to use distributed inferencing and GPUs, should I install the requirements from the worker env in the text-generation env? \r\n\r\nthank you" ]
1,683
1,686
1,683
NONE
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. Download any OpenAsssitant LLaMa model with `transformers.AutoModelForCausalLM.` & `transformers.AutoTokenizer` (e.g. `TheBloke/OpenAssistant-SFT-7-Llama-30B-HF`) 2. Try to generate anything with it. logs: ``` open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/server.py", line 99, in serve_inner open-assistant-inference-worker-1 | model = get_model(model_id, revision, sharded, quantize) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 52, in get_model open-assistant-inference-worker-1 | config = AutoConfig.from_pretrained(model_id, revision=revision) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/models/auto/configuration_auto.py", line 882, in from_pretrained open-assistant-inference-worker-1 | config_class = CONFIG_MAPPING[config_dict["model_type"]] open-assistant-inference-worker-1 | open-assistant-inference-worker-1 | File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/models/auto/configuration_auto.py", line 588, in __getitem__ open-assistant-inference-worker-1 | raise KeyError(key) open-assistant-inference-worker-1 | open-assistant-inference-worker-1 |. KeyError: 'llama' ``` ### Expected behavior The model inference is working correctly without any issues
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23129/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23128
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23128/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23128/comments
https://api.github.com/repos/huggingface/transformers/issues/23128/events
https://github.com/huggingface/transformers/pull/23128
1,693,962,280
PR_kwDOCUB6oc5Pqv2B
23,128
Generate: better warnings with pipelines
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
MEMBER
null
# What does this PR do? Addresses comments in https://github.com/huggingface/transformers/issues/23054 This PR adds the following enhancements to generate-related pipeline warnings: 1. Clarifies the `max_length` reduction suggestion in the summarization pipeline 2. Also pipes task-specific configuration to `generation_config` (when applicable), which fixes the warning about relying on `model.config` to parameterize `generate`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23128/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23128", "html_url": "https://github.com/huggingface/transformers/pull/23128", "diff_url": "https://github.com/huggingface/transformers/pull/23128.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23128.patch", "merged_at": 1683121398000 }
https://api.github.com/repos/huggingface/transformers/issues/23127
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23127/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23127/comments
https://api.github.com/repos/huggingface/transformers/issues/23127/events
https://github.com/huggingface/transformers/pull/23127
1,693,895,396
PR_kwDOCUB6oc5Pqhhn
23,127
Generate: correct beam search length on score calculation for multi batch generation
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
MEMBER
null
# What does this PR do? Fixes #23084 When computing the score with length penalty, the length was (incorrectly) incremented once per batch member. It should only be incremented once -- the length here is `cur_len` (the length of the generated tokens) + `1` (the token being added at the iteration). Slow tests were run for BART, T5, GPT2 -- no regressions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23127/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23127", "html_url": "https://github.com/huggingface/transformers/pull/23127", "diff_url": "https://github.com/huggingface/transformers/pull/23127.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23127.patch", "merged_at": 1683120596000 }
https://api.github.com/repos/huggingface/transformers/issues/23126
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23126/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23126/comments
https://api.github.com/repos/huggingface/transformers/issues/23126/events
https://github.com/huggingface/transformers/pull/23126
1,693,836,346
PR_kwDOCUB6oc5PqUiF
23,126
Support union types `X | Y` syntax for `HfArgumentParser` for Python 3.10+
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Support union types `X | Y` syntax for `HfArgumentParser` for Python 3.10+. Allow users using Python 3.10+ to opt in new typing futures, such as [union types `X | Y` (PEP 604)](https://peps.python.org/pep-0604). Note that `typing.get_type_hints` does not work for union types on Python 3.7-3.9. <!-- Remove if not applicable --> Fixes #20249 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? Testing union types `X | Y` for Python 3.7-3.9 needs to add `from __future__ import annotations` at the top of the test script. I'm not sure should we need to create a separate test script or add new test cases directly in `tests/utils/test_hf_argparser.py`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23126/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23126", "html_url": "https://github.com/huggingface/transformers/pull/23126", "diff_url": "https://github.com/huggingface/transformers/pull/23126.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23126.patch", "merged_at": 1683125394000 }
https://api.github.com/repos/huggingface/transformers/issues/23125
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23125/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23125/comments
https://api.github.com/repos/huggingface/transformers/issues/23125/events
https://github.com/huggingface/transformers/pull/23125
1,693,802,897
PR_kwDOCUB6oc5PqNPx
23,125
Generate: slow assisted generation test
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Maybe the test will be less flaky if done on a pretrained checkpoint\r\n\r\nDefinitely! However, I think there is a deeper problem here, the logits diverge way more than I'd expect on some models, and it's odd that those models rely on the same base code (roberta). After I finish preparing the release for assisted generation, I'll get back to sorting related bugs" ]
1,683
1,683
1,683
MEMBER
null
# What does this PR do? `test_assisted_decoding_matches_greedy_search` fails once in a while, which blocks development. This PR removes the blocker by moving it to a slow test. Why a slow test (and not redesign the test or add the flaky decorator)? 1. It is impossible to remove at 100% the non-determinism in this test. Some form of masking has to be used by design, which means that there is always a chance for the generations to diverge. When the generated sequences do diverge, the scores should be very similar at the step they diverge, as they are caused by the very small values within the numerical attention masks. 2. I've tried to add the check above (when the sequences diverge, the scores should be similar)... but some models still failed that check quite hard when the sequences didn't match. Some well-established models can run it without observed failures (e.g. 10k runs on GPT2 = 0 sequence mismatches). Others, especially roberta-based models, fail at a high rate. This means I should explore this further. 3. Since there is something to be explored, I believe the slow decorator is more appropriate: we can track failures without risking red CI on PRs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23125/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23125", "html_url": "https://github.com/huggingface/transformers/pull/23125", "diff_url": "https://github.com/huggingface/transformers/pull/23125.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23125.patch", "merged_at": 1683120290000 }
https://api.github.com/repos/huggingface/transformers/issues/23124
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23124/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23124/comments
https://api.github.com/repos/huggingface/transformers/issues/23124/events
https://github.com/huggingface/transformers/issues/23124
1,693,784,594
I_kwDOCUB6oc5k9RoS
23,124
MarianMT architecture and onnx format
{ "login": "goga334", "id": 53278040, "node_id": "MDQ6VXNlcjUzMjc4MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/53278040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goga334", "html_url": "https://github.com/goga334", "followers_url": "https://api.github.com/users/goga334/followers", "following_url": "https://api.github.com/users/goga334/following{/other_user}", "gists_url": "https://api.github.com/users/goga334/gists{/gist_id}", "starred_url": "https://api.github.com/users/goga334/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goga334/subscriptions", "organizations_url": "https://api.github.com/users/goga334/orgs", "repos_url": "https://api.github.com/users/goga334/repos", "events_url": "https://api.github.com/users/goga334/events{/privacy}", "received_events_url": "https://api.github.com/users/goga334/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No model in Transformers implements the softmax at the end, they return logits, or if labels are provided, the loss directly.", "Assuming your output from the ONNX model really is the logits and is in numpy format you could probably use a code snippet like this to decode it into text:\r\n```\r\nimport numpy as np\r\nfrom transformers import MarianTokenizer\r\nfrom scipy.special import softmax\r\n\r\ntokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es', cache_dir='./estokenizer')\r\n\r\nbsz = 1\r\nseq_len = 42\r\nvocab_size = 20000\r\n\r\nlogits = np.random.rand(bsz, seq_len, vocab_size) # Output from the model.\r\n\r\ntoken_probs = softmax(logits, axis=-1)\r\n\r\ntoken_ids = np.argsort(token_probs, axis=2)[:, :, -1] # Get top token ID.\r\n\r\ntokenizer.batch_decode(token_ids)\r\n```\r\n\r\nBut are you sure about manually converting it to ONNX? Huggingface's Optimum package should support converting Marian to ONNX off the bat, with beam search support and all the whistles.", "@sgugger, thanks a lot, now I see how it works)", "@SmartWashingMachine, yes output was logits indeed. Thanks for the code snipped, it works well, but it seems that the model should be converted in some other way. I used Huggingface's Optimum for MarianMT and it works perfect with conversion and inference. \r\n\r\nHowever, i couldn't use it for m2m100 and mbart, google collab crashes because of lack of memory during conversion. I'll look for some other way of speed up for these two.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> @SmartWashingMachine, yes output was logits indeed. Thanks for the code snipped, it works well, but it seems that the model should be converted in some other way. I used Huggingface's Optimum for MarianMT and it works perfect with conversion and inference.\r\n> \r\n> However, i couldn't use it for m2m100 and mbart, google collab crashes because of lack of memory during conversion. I'll look for some other way of speed up for these two.\r\n\r\nHello, may I ask a question? After using optim cli to export MarianMT, the model is divided into two parts: encoder. onnx and decoder. onnx. I suspect that I am using the wrong part. 
How did you call the decoder?\r\n```ptyhon\r\n···\r\ninput_ids = [[40604, 24, 90, 34, 2588, 2, 187, 56, 21258, 9,\r\n 11080, 11, 3, 5896, 526, 4947, 9, 11, 605, 3605, 11768, 6, 0]]\r\nattention_mask = [[1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]\r\n\r\nencoder_out = encoder_session.run(None, {'input_ids': input_ids,\r\n 'attention_mask': attention_mask})\r\nprint(encoder_out[0].shape)\r\n# beam search\r\nbatch_size = 1\r\nnum_beams = 4\r\nmax_length = 512\r\npad_token_id = 65000\r\neos_token_id = 0\r\n\r\ninput_ids = np.ones((num_beams, 1), dtype=np.int64) * pad_token_id\r\nbeam_scorer = BeamSearchScorer(batch_size=batch_size, num_beams=num_beams, max_length=max_length)\r\nif isinstance(eos_token_id, int):\r\n eos_token_id = [eos_token_id]\r\n\r\nbeam_scores = np.zeros((batch_size, num_beams), dtype=np.float32)\r\nbeam_scores[:, 1:] = -1e9\r\nbeam_scores = beam_scores.reshape((batch_size * num_beams,))\r\n\r\nencoder_out = encoder_out[0].repeat(4, 0)\r\nencoder_attention_mask = np.array(attention_mask, dtype=np.int64).repeat(4, 0)\r\nwhile True:\r\n decoder_out = decoder_session.run(None, {'input_ids': input_ids, 'attention_mask': None,\r\n 'encoder_hidden_states': encoder_out, 'encoder_attention_mask': encoder_attention_mask})\r\n print(decoder_out[0].shape)\r\n···\r\n```\r\ndecoder_output[0] shape is (4, 2, 65001)", "Hey 🤗 We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,683
1,698
1,686
NONE
null
**MarianMT architecture** I found an interesting detail about MarianMT implementation in huggingface. There is no "Softmax" layer after "Linear" at the end of the model, despite the default architecture of transformer. ``` MarianMTModel( (model): MarianModel( (shared): Embedding(58930, 512, padding_idx=58929) (encoder): MarianEncoder( (embed_tokens): Embedding(58930, 512, padding_idx=58929) (embed_positions): MarianSinusoidalPositionalEmbedding(512, 512) (layers): ModuleList( (0-5): 6 x MarianEncoderLayer( (self_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (self_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (activation_fn): SiLUActivation() (fc1): Linear(in_features=512, out_features=2048, bias=True) (fc2): Linear(in_features=2048, out_features=512, bias=True) (final_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) ) ) ) (decoder): MarianDecoder( (embed_tokens): Embedding(58930, 512, padding_idx=58929) (embed_positions): MarianSinusoidalPositionalEmbedding(512, 512) (layers): ModuleList( (0-5): 6 x MarianDecoderLayer( (self_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (activation_fn): SiLUActivation() (self_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (encoder_attn): MarianAttention( (k_proj): Linear(in_features=512, out_features=512, bias=True) (v_proj): Linear(in_features=512, out_features=512, bias=True) (q_proj): Linear(in_features=512, out_features=512, bias=True) (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (encoder_attn_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=512, out_features=2048, bias=True) (fc2): Linear(in_features=2048, out_features=512, bias=True) (final_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) ) ) ) ) (lm_head): Linear(in_features=512, out_features=58930, bias=False) ) ``` There is no problem when loading this model via "MarianMTModel.from_pretrained" and calling ".generate()" method, everything works fine, returning output shaped (batch_size, max_seq_len). **MarianMT onnx format** However, when I tried to convert MarianMT huggingface model into onnx format via "torch.onnx.export" and use it with "onnxruntime.InferenceSession" calling "run()" method, I got raw embedding batches as outputs shaped (batch_size, max_seq_len, 58930), which I can't decode into text using MarianTokenizer. I suppose, it is caused by the absence of that Softmax layer at the end. **Regarding this, I have two questions:** - Is it normal that MarianMT in huggingface transformers has no Softmax layer at the end? - Is there a way to decode output embeddings shaped (batch_size, max_seq_len, 58930) into text? @gante @amyeroberts @sgugger, I'm not sure whether to consider this a bug or just a slight misunderstanding, so I'd be really grateful for some advice.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23124/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23123
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23123/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23123/comments
https://api.github.com/repos/huggingface/transformers/issues/23123/events
https://github.com/huggingface/transformers/pull/23123
1,693,660,620
PR_kwDOCUB6oc5Ppt4Y
23,123
improve unclear documentation
{ "login": "ManuelFay", "id": 43467008, "node_id": "MDQ6VXNlcjQzNDY3MDA4", "avatar_url": "https://avatars.githubusercontent.com/u/43467008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ManuelFay", "html_url": "https://github.com/ManuelFay", "followers_url": "https://api.github.com/users/ManuelFay/followers", "following_url": "https://api.github.com/users/ManuelFay/following{/other_user}", "gists_url": "https://api.github.com/users/ManuelFay/gists{/gist_id}", "starred_url": "https://api.github.com/users/ManuelFay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ManuelFay/subscriptions", "organizations_url": "https://api.github.com/users/ManuelFay/orgs", "repos_url": "https://api.github.com/users/ManuelFay/repos", "events_url": "https://api.github.com/users/ManuelFay/events{/privacy}", "received_events_url": "https://api.github.com/users/ManuelFay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) Unclear documentation in EarlyStoppingCallback meachanism. ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23123/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23123", "html_url": "https://github.com/huggingface/transformers/pull/23123", "diff_url": "https://github.com/huggingface/transformers/pull/23123.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23123.patch", "merged_at": 1683120990000 }
https://api.github.com/repos/huggingface/transformers/issues/23122
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23122/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23122/comments
https://api.github.com/repos/huggingface/transformers/issues/23122/events
https://github.com/huggingface/transformers/pull/23122
1,693,641,233
PR_kwDOCUB6oc5Ppprf
23,122
Fix ConvNext V2 parameter naming issue
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Renames gamma and beta parameters of the `ConvNextV2GRN` module, which caused the `save_pretrained` method to rename these parameters to weight and bias. Existing checkpoints on the hub can be loaded without any warnings once the PR is merged. Fixes #23090 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23122/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23122", "html_url": "https://github.com/huggingface/transformers/pull/23122", "diff_url": "https://github.com/huggingface/transformers/pull/23122.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23122.patch", "merged_at": 1683123687000 }
https://api.github.com/repos/huggingface/transformers/issues/23121
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23121/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23121/comments
https://api.github.com/repos/huggingface/transformers/issues/23121/events
https://github.com/huggingface/transformers/pull/23121
1,693,603,401
PR_kwDOCUB6oc5Pphk7
23,121
[`Doctest`] Fix pix2struct doctest
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Link to failing job: https://github.com/huggingface/transformers/actions/runs/4867713745/jobs/8680544136 This PR fixes the current failing doctest for pix2struct. https://github.com/huggingface/transformers/pull/23051 fixed the issues related with pix2struct and training by changing the way attention masks are computed. Naturally this has changed the value of the expected loss function in the docstring cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23121/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23121", "html_url": "https://github.com/huggingface/transformers/pull/23121", "diff_url": "https://github.com/huggingface/transformers/pull/23121.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23121.patch", "merged_at": 1683105719000 }
https://api.github.com/repos/huggingface/transformers/issues/23120
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23120/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23120/comments
https://api.github.com/repos/huggingface/transformers/issues/23120/events
https://github.com/huggingface/transformers/issues/23120
1,693,521,107
I_kwDOCUB6oc5k8RTT
23,120
Trainer.hyperparameter_search() should give the option to reload the model from the best run before completing
{ "login": "fantauzzi", "id": 2722433, "node_id": "MDQ6VXNlcjI3MjI0MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2722433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fantauzzi", "html_url": "https://github.com/fantauzzi", "followers_url": "https://api.github.com/users/fantauzzi/followers", "following_url": "https://api.github.com/users/fantauzzi/following{/other_user}", "gists_url": "https://api.github.com/users/fantauzzi/gists{/gist_id}", "starred_url": "https://api.github.com/users/fantauzzi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fantauzzi/subscriptions", "organizations_url": "https://api.github.com/users/fantauzzi/orgs", "repos_url": "https://api.github.com/users/fantauzzi/repos", "events_url": "https://api.github.com/users/fantauzzi/events{/privacy}", "received_events_url": "https://api.github.com/users/fantauzzi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have found that the path to the saved checkpoint with the best model, after hyperparameter optimization has completed, is in `Trainer.state.best_model_checkpoint`; the model can then be easily loaded from that checkpoint.\r\nI leave this here in case it may help someone else find where that information is tucked away.", "Hi, I have found that the proposed solution does not work. It seems that `Trainer.state.best_model_checkpoint` will always hold the best checkpoint of the latest trial run.\r\n\r\nI worked around this by creating my own `TrainerCallback` that tracks the best and last checkpoints for all trials in a sweep. In the end I just retrieve the checkpoint paths from that callback." ]
1,683
1,689
1,683
NONE
null
### Feature request After calling `Trainer.hyperparameter_search()`, the instance of `Trainer` contains the last trained model, among the multiple models trained for optimization of the hyperparameters. There should be an option, perhaps among those of `TrainingArguments`, to have the instance of `Trainer` reload the model from the best run (the best model) before `Trainer.hyperparameter_search()` returns. ### Motivation After optimizing hyperparameters we are typically interested in the trained model that optimizes them, not the model that, accidentally, was trained in the last run of optimization. We are interested in the *best* model, not the *last* model. Currently, accessing the best model is tricky, as one has to reload it from checkpoints on disk, after figuring out what the path to the wanted checkpoint is. The latter is further complicated by the fact that there may be multiple checkpoints on disk for the run that produced the best model, and we specifically want the checkpoint at the end of the epoch where the objective function was maximized (or minimized), which is not necessarily the last epoch of the run. ### Your contribution N/A
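The comments above point to `Trainer.state.best_model_checkpoint` as the current workaround. A rough sketch of that approach follows; the model class, search direction and trial budget are illustrative assumptions, and, as noted in the thread, the tracked path only reflects the latest trial, not necessarily the best run.

```python
from transformers import AutoModelForSequenceClassification

# Run the search; afterwards the Trainer instance still holds the *last* trained model.
best_run = trainer.hyperparameter_search(direction="minimize", n_trials=10)

# Workaround discussed above: reload weights from the checkpoint path tracked in the state.
# Caveat from the thread: this path corresponds to the latest trial, not necessarily the best one.
ckpt_path = trainer.state.best_model_checkpoint
if ckpt_path is not None:
    trainer.model = AutoModelForSequenceClassification.from_pretrained(ckpt_path)
```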
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23120/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23119
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23119/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23119/comments
https://api.github.com/repos/huggingface/transformers/issues/23119/events
https://github.com/huggingface/transformers/pull/23119
1,693,400,759
PR_kwDOCUB6oc5Po2S1
23,119
added farsi lang
{ "login": "mzamini92", "id": 32536264, "node_id": "MDQ6VXNlcjMyNTM2MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/32536264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mzamini92", "html_url": "https://github.com/mzamini92", "followers_url": "https://api.github.com/users/mzamini92/followers", "following_url": "https://api.github.com/users/mzamini92/following{/other_user}", "gists_url": "https://api.github.com/users/mzamini92/gists{/gist_id}", "starred_url": "https://api.github.com/users/mzamini92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mzamini92/subscriptions", "organizations_url": "https://api.github.com/users/mzamini92/orgs", "repos_url": "https://api.github.com/users/mzamini92/repos", "events_url": "https://api.github.com/users/mzamini92/events{/privacy}", "received_events_url": "https://api.github.com/users/mzamini92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Please only add the translated file in the new folder.\r\n\r\n@sgugger only the translated files are now in fa folder. the rest have been removed.", "> Thanks a lot! One last comment on the `add_new_model` file.\r\n\r\n@sgugger I'm sorry I didn't quite get what you mean? can you please tell me more to do it right away? appreciate it. ", "You have translated half the file only. Maybe leave it out of this PR and add it in a new PR when you're fully done?" ]
1,683
1,683
1,683
NONE
null
# What does this PR do? Added Farsi (fa) to the docs. I have translated `_toctree.yml` and `accelerate.mdx` and `add_new_pipeline.mdx`. the rest also will be translated soon. I will make this pull request so others also can contribute and make this process faster. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23119/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23119", "html_url": "https://github.com/huggingface/transformers/pull/23119", "diff_url": "https://github.com/huggingface/transformers/pull/23119.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23119.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23118
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23118/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23118/comments
https://api.github.com/repos/huggingface/transformers/issues/23118/events
https://github.com/huggingface/transformers/pull/23118
1,693,286,034
PR_kwDOCUB6oc5PodSJ
23,118
Pin numba for now
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
COLLABORATOR
null
# What does this PR do? Today's release of `numba` broke the audio feature extractors. Not sure if it's because of numba by itself or because it forces an update to Numpy 1.24. Will be investigated later by the audio team but in the meantime pinning `numba` to make `main` green. cc @ydshieh @sanchit-gandhi for information, will merge this as soon as the CI is green.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23118/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23118/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23118", "html_url": "https://github.com/huggingface/transformers/pull/23118", "diff_url": "https://github.com/huggingface/transformers/pull/23118.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23118.patch", "merged_at": 1683079359000 }
https://api.github.com/repos/huggingface/transformers/issues/23117
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23117/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23117/comments
https://api.github.com/repos/huggingface/transformers/issues/23117/events
https://github.com/huggingface/transformers/issues/23117
1,693,233,502
I_kwDOCUB6oc5k7LFe
23,117
Provide a different API solution instead of offline mode
{ "login": "rbavery", "id": 22258697, "node_id": "MDQ6VXNlcjIyMjU4Njk3", "avatar_url": "https://avatars.githubusercontent.com/u/22258697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbavery", "html_url": "https://github.com/rbavery", "followers_url": "https://api.github.com/users/rbavery/followers", "following_url": "https://api.github.com/users/rbavery/following{/other_user}", "gists_url": "https://api.github.com/users/rbavery/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbavery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbavery/subscriptions", "organizations_url": "https://api.github.com/users/rbavery/orgs", "repos_url": "https://api.github.com/users/rbavery/repos", "events_url": "https://api.github.com/users/rbavery/events{/privacy}", "received_events_url": "https://api.github.com/users/rbavery/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there. `from_pretrained` already accepts a path to a folder, I'm not sure what it is you are requesting.", "The relevant docs to load from local data can be found here: https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained.\r\n\r\nThe `from_pretrained` method accepts either a `repo_id` to a repo on the 🤗 hub or a local path to a folder.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,686
1,686
NONE
null
### Feature request First off, thanks for the stellar lib and for all the work to get state-of-the-art models into a consumable and documented state! I'm somewhat new to transformers, so this feedback comes after a week of heavy use of the library. I find offline mode unintuitive. I don't see it referenced in the various tutorials, so I spent a day trying to figure out how to load local files with transformers, see https://github.com/huggingface/transformers/issues/23116 The docs are somewhat buried in a section I wouldn't expect: Installation https://huggingface.co/docs/transformers/installation#offline-mode ### Motivation Offline mode makes it more difficult to work with local files. Instead, methods like `from_pretrained` could detect whether a path is a local file path (starts with / ), a URL (https), or a Hugging Face repo (no / or URL prefix, follows a username/repo pattern). ### Your contribution I'm happy to provide feedback.
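For reference, the maintainer replies above note that `from_pretrained` already accepts either a hub repo id or a local folder. A small sketch of both paths, where the model name and local directory are placeholders:

```python
from transformers import AutoModel, AutoTokenizer

local_dir = "/data/models/bert-base-uncased"  # placeholder local path

# Hub repo id: downloaded from huggingface.co on first use, then cached.
model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Save a self-contained local copy once, e.g. for machines without network access.
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)

# Local folder: loaded directly from disk, no offline-mode flag required.
offline_model = AutoModel.from_pretrained(local_dir)
```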
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23117/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/23117/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23116
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23116/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23116/comments
https://api.github.com/repos/huggingface/transformers/issues/23116/events
https://github.com/huggingface/transformers/issues/23116
1,693,124,621
I_kwDOCUB6oc5k6wgN
23,116
OneFormerImageProcessor does not support passing local config file, always tries to download from repo
{ "login": "rbavery", "id": 22258697, "node_id": "MDQ6VXNlcjIyMjU4Njk3", "avatar_url": "https://avatars.githubusercontent.com/u/22258697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbavery", "html_url": "https://github.com/rbavery", "followers_url": "https://api.github.com/users/rbavery/followers", "following_url": "https://api.github.com/users/rbavery/following{/other_user}", "gists_url": "https://api.github.com/users/rbavery/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbavery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbavery/subscriptions", "organizations_url": "https://api.github.com/users/rbavery/orgs", "repos_url": "https://api.github.com/users/rbavery/repos", "events_url": "https://api.github.com/users/rbavery/events{/privacy}", "received_events_url": "https://api.github.com/users/rbavery/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@rbavery Thanks for raising this issue. \r\n\r\nI'm able to load a processor locally on the development branch without issue: \r\n```python\r\nfrom transformers import OneFormerProcessor\r\n\r\nprocessor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')\r\nprocessor.save_pretrained('foo')\r\n\r\nnew_processor = OneFormerProcessor.from_pretrained('foo')\r\n```\r\n\r\nNote, the processor combines two processing objects - the image processor and a tokenizer - and so configurations + additional files are necessary to successfully load both to create the processor. Could you share the files in the folder you're trying to load from? In the `foo` folder created, I see the following files: \r\n```\r\nmerges.txt\t\t\t\r\nspecial_tokens_map.json\t\t\r\ntokenizer_config.json\r\npreprocessor_config.json\t\r\ntokenizer.json\t\t\t\r\nvocab.json\r\n```\r\n\r\nAs a small side note, in the example snippet, I believe there's a small typo in the code, and should be: \r\n\r\n```python\r\nfrom transformers import OneFormerProcessor\r\nconfig_path = \"/local/config/path\"\r\nOneFormerProcessor.from_pretrained(config_path, ignore_mismatched_sizes=True)\r\n```\r\n\r\n\r\n", "Hi\r\nI have a similar problem, even when cloning the files locally it still needs to download ade20k_panoptic.json and it will not work without it", "Hi @ammarali32, \r\n\r\nAh OK, I understand now. This download is happening because of the [prepare_metadata method](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/oneformer/image_processing_oneformer.py#L323), which looks to download the file from the hub, and by default points to the `\"shi-labs/oneformer_demo\"` path. After being downloaded once, it should be possible to work in offline mode as it will be stored in the cache. However, I appreciate this isn't a complete solution. \r\n\r\nIf there's another repo on the hub you wish to download the class info file from, replacing `repo_path` when instantiating the image processor class should be enough. To make the class look at either local files or the hub, the image processing code would need to be reworked a bit. This is something that should happen in the future, however it's not a piece of work I have capacity to work on at the moment. If anyone from the community would like to take this I'm happy to review any PRs.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> ### System Info\r\n> * `transformers` version: 4.29.0.dev0\r\n> * Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35\r\n> * Python version: 3.10.10\r\n> * Huggingface_hub version: 0.14.1\r\n> * Safetensors version: 0.3.1\r\n> * PyTorch version (GPU?): 2.0.0+cu117 (True)\r\n> * Tensorflow version (GPU?): 2.11.1 (False)\r\n> * Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)\r\n> * Jax version: 0.3.6\r\n> * JaxLib version: 0.3.5\r\n> * Using GPU in script?:\r\n> * Using distributed or parallel set-up in script?:\r\n> \r\n> ### Who can help?\r\n> @amyeroberts\r\n> \r\n> this forum post I put up seems like a bug: https://discuss.huggingface.co/t/how-to-load-local-config-json-for-oneformerimageprocessor-without-invoking-huggingfacehub-downloader/38372\r\n> \r\n> The OneFormerImageProcessor should accept local config files without trying to download them from a repo_path\r\n> \r\n> https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/oneformer/image_processing_oneformer.py#L323\r\n> \r\n> ### Information\r\n> * [x] The official example scripts\r\n> * [x] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [x] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> ```\r\n> from transformers import OneFormerProcessor\r\n> config_path = \"/local/config/path\"\r\n> OneFormerProcessor.from_pretrained(config_path, ignore_mismatched_sizes=True)ignore_mismatched_sizes=True)\r\n> ```\r\n> \r\n> ### Expected behavior\r\n> the processor gets initialized and doesn't error with\r\n> \r\n> ```\r\n> + f\"Repository Not Found for url: {response.url}.\"\r\n> + \"\\nPlease make sure you specified the correct `repo_id` and\"\r\n> \" `repo_type`.\\nIf you are trying to access a private or gated repo,\"\r\n> \" make sure you are authenticated.\"\r\n> ```\r\n\r\nHey, you can try to modify the prepare_metadata function in image_processing_oneformer.py like this:\r\n ```python\r\ndef prepare_metadata(repo_path, class_info_file):\r\n metadata = {}\r\n with open('xxx/preprocessor_config.json', \"r\") as f:\r\n class_info = json.load(f)\r\n metadata = class_info['metadata']\r\n return metadata\r\n ```\r\n", "thanks @TreastBean " ]
1,683
1,704
1,688
NONE
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts this forum post I put up seems like a bug: https://discuss.huggingface.co/t/how-to-load-local-config-json-for-oneformerimageprocessor-without-invoking-huggingfacehub-downloader/38372 The OneFormerImageProcessor should accept local config files without trying to download them from a repo_path https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/oneformer/image_processing_oneformer.py#L323 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import OneFormerProcessor config_path = "/local/config/path" OneFormerProcessor.from_pretrained(config_path, ignore_mismatched_sizes=True)ignore_mismatched_sizes=True) ``` ### Expected behavior the processor gets initialized and doesn't error with ``` + f"Repository Not Found for url: {response.url}." + "\nPlease make sure you specified the correct `repo_id` and" " `repo_type`.\nIf you are trying to access a private or gated repo," " make sure you are authenticated." ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23116/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23115
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23115/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23115/comments
https://api.github.com/repos/huggingface/transformers/issues/23115/events
https://github.com/huggingface/transformers/pull/23115
1,692,930,362
PR_kwDOCUB6oc5PnQk7
23,115
Add resources for LayoutLmV2 and reformat documentation resources
{ "login": "y3sar", "id": 16244698, "node_id": "MDQ6VXNlcjE2MjQ0Njk4", "avatar_url": "https://avatars.githubusercontent.com/u/16244698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/y3sar", "html_url": "https://github.com/y3sar", "followers_url": "https://api.github.com/users/y3sar/followers", "following_url": "https://api.github.com/users/y3sar/following{/other_user}", "gists_url": "https://api.github.com/users/y3sar/gists{/gist_id}", "starred_url": "https://api.github.com/users/y3sar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y3sar/subscriptions", "organizations_url": "https://api.github.com/users/y3sar/orgs", "repos_url": "https://api.github.com/users/y3sar/repos", "events_url": "https://api.github.com/users/y3sar/events{/privacy}", "received_events_url": "https://api.github.com/users/y3sar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stevhliu ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? From #19848 This PR adds resources to the LayoutLMV2 documentation page. The documentation resources heading on that page was also inconsistent with the other doc pages, so I removed the heading and put the task-specific guides under the corresponding task headings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23115/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23115/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23115", "html_url": "https://github.com/huggingface/transformers/pull/23115", "diff_url": "https://github.com/huggingface/transformers/pull/23115.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23115.patch", "merged_at": 1683121980000 }
https://api.github.com/repos/huggingface/transformers/issues/23114
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23114/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23114/comments
https://api.github.com/repos/huggingface/transformers/issues/23114/events
https://github.com/huggingface/transformers/pull/23114
1,692,806,670
PR_kwDOCUB6oc5Pm107
23,114
Add accelerate support - vision MAE models
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23114). All of your documentation changes will be reflected on that endpoint.", "@sgugger I checked with 2 GPUs, I'll run with just one to make sure it still works 👍 " ]
1,683
1,689
1,688
COLLABORATOR
null
# What does this PR do? Adds accelerate support to VideoMAE and ViTMAE following the changes made in the [equivalent ViT PR](https://github.com/huggingface/transformers/pull/20174) Fixes #23086 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
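In practice, the accelerate support added here is what enables big-model loading idioms such as the following sketch. The checkpoint name is an assumed hub id and the snippet is not part of the PR diff itself.

```python
from transformers import VideoMAEForVideoClassification

# With accelerate installed, weights can be loaded with a low CPU-memory footprint
# and dispatched automatically across the available devices.
model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-kinetics",  # assumed checkpoint id
    device_map="auto",
    low_cpu_mem_usage=True,
)
```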
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23114/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23114", "html_url": "https://github.com/huggingface/transformers/pull/23114", "diff_url": "https://github.com/huggingface/transformers/pull/23114.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23114.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23113
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23113/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23113/comments
https://api.github.com/repos/huggingface/transformers/issues/23113/events
https://github.com/huggingface/transformers/pull/23113
1,692,585,120
PR_kwDOCUB6oc5PmGTX
23,113
docs: ko: fix: update `_toctree.yml`
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing in favor of #23112 ", "Checking whether the same conflicts emerge with these changes as my colleague's branch is experiencing.", "_The documentation is not available anymore as the PR was closed or merged._", "Closing again in favor of #23112 (Checks are passing)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23113). All of your documentation changes will be reflected on that endpoint." ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Resolves conflicts raised by a recent `_toctree.yml` change (#23049) * Edited some titles to match the new coherent style. * Moved sections to match the English table of contents. * As both removed files were yet to be translated, no work was needed for them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23113/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23113", "html_url": "https://github.com/huggingface/transformers/pull/23113", "diff_url": "https://github.com/huggingface/transformers/pull/23113.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23113.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23112
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23112/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23112/comments
https://api.github.com/repos/huggingface/transformers/issues/23112/events
https://github.com/huggingface/transformers/pull/23112
1,692,584,050
PR_kwDOCUB6oc5PmGEq
23,112
docs: ko: update `_toctree.yml`
{ "login": "HanNayeoniee", "id": 33839093, "node_id": "MDQ6VXNlcjMzODM5MDkz", "avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanNayeoniee", "html_url": "https://github.com/HanNayeoniee", "followers_url": "https://api.github.com/users/HanNayeoniee/followers", "following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}", "gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions", "organizations_url": "https://api.github.com/users/HanNayeoniee/orgs", "repos_url": "https://api.github.com/users/HanNayeoniee/repos", "events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}", "received_events_url": "https://api.github.com/users/HanNayeoniee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The error occurs because `ko/notebook.mdx` was added in https://github.com/huggingface/transformers/pull/22670. If you delete it, the error should go away.", "> The error occurs because `ko/notebook.mdx` was added in #22670. If you delete it, the error should go away. I'll reopen my closed branch, test it, and then send the PR to Nayeon again. There is a way to cherry-pick without merging, which makes squashing easier; let's try that later.\r\n\r\nI had figured out that `notebook` was the problem but was still stuck on it.. Thank you!!\r\nUpdating the table of contents is not easy 😂\r\nI'll check tomorrow that my branch builds without errors~", "_The documentation is not available anymore as the PR was closed or merged._", "> @MKhalusova Could you double-check this matches your latest changes?\r\n> Thanks.\r\n\r\nFrom what I can see it does match the changes; the structure is the same as in my changes. I can't verify the translation of the titles that were renamed, but it seems to be matching too. ", "> > @MKhalusova Could you double-check this matches your latest changes?\r\n> > Thanks.\r\n> \r\n> From what I can see it does match the changes; the structure is the same as in my changes. I can't verify the translation of the titles that were renamed, but it seems to be matching too.\r\n\r\nSorry that my reply is a little bit late. Not all renamed titles are translated here.\r\nSome titles have been renamed, but some are not (e.g. “General usage” has been renamed to “Developer Guides”)\r\nSince changing titles affects every document, the [Pseudo Lab team](https://github.com/Pseudo-Lab) and I are going to translate all the renamed titles as we keep translating docs to Korean!" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Part of https://github.com/huggingface/transformers/issues/20179 Initial version is in https://github.com/huggingface/transformers/pull/22581 Updated `_toctree.yml` according to https://github.com/huggingface/transformers/pull/23049 This PR restructures the TOC for the documentation. Here's the scope of the restructure: a) TOC is sorted from “beginner” topics to more advanced b) Some topics have been renamed c) Task Guides are collapsed by default and are now on the same level (currently NLP task guides are hidden, and not aligned with other modalities) d) “General usage” has been renamed to “Developer Guides” e) Benchmarks, notebooks, and community resources have been moved under Developer Guides f) “Converting from TensorFlow checkpoints” and "Migrating from previous packages" pages removed To dos: Some topics have been renamed to be concise and more descriptive. These still need to be translated into Korean. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Initial) <!-- 1. Please reveal the comment below, requesting a review from the PseudoLab team members, only after all of the checks above are complete! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Who can review? (Final) <!-- 2. Please reveal the comment below, requesting a review from the Hugging Face staff, only after the review with the PseudoLab team members is finished! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23112/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23112/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23112", "html_url": "https://github.com/huggingface/transformers/pull/23112", "diff_url": "https://github.com/huggingface/transformers/pull/23112.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23112.patch", "merged_at": 1683126299000 }
https://api.github.com/repos/huggingface/transformers/issues/23111
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23111/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23111/comments
https://api.github.com/repos/huggingface/transformers/issues/23111/events
https://github.com/huggingface/transformers/pull/23111
1,692,511,850
PR_kwDOCUB6oc5Pl2tu
23,111
fix resume fsdp
{ "login": "qywu", "id": 18195478, "node_id": "MDQ6VXNlcjE4MTk1NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/18195478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qywu", "html_url": "https://github.com/qywu", "followers_url": "https://api.github.com/users/qywu/followers", "following_url": "https://api.github.com/users/qywu/following{/other_user}", "gists_url": "https://api.github.com/users/qywu/gists{/gist_id}", "starred_url": "https://api.github.com/users/qywu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qywu/subscriptions", "organizations_url": "https://api.github.com/users/qywu/orgs", "repos_url": "https://api.github.com/users/qywu/repos", "events_url": "https://api.github.com/users/qywu/events{/privacy}", "received_events_url": "https://api.github.com/users/qywu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Please run `make style` and `make quality` to fix the quality issues", "I have fixed the issues. The optimizer saving had no problems. For using [scatter_full_optim_state_dict](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.scatter_full_optim_state_dict), indeed loading on rank 0 is enough, which can save CPU memory usage.", "cc @sgugger for a second look", "thanks for the fix!" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [# 23034](https://github.com/huggingface/transformers/issues/23034) When training a model with FSDP, the checkpoint is not saved and loaded correctly. Only rank 0's optimizer state dict is saved. This PR fixes this issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @pacman100 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
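For context, here is a rough sketch of the FSDP optimizer-state round trip this fix is concerned with, using the `scatter_full_optim_state_dict` API mentioned in the review comments above. This is not the patch itself; the checkpoint path and variable names are assumptions. `full_optim_state_dict` consolidates the sharded optimizer state, and `scatter_full_optim_state_dict` re-shards it on resume so every rank gets its part.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

ckpt_path = "checkpoint/optimizer.pt"  # placeholder path

# Saving: every rank participates in gathering the full optimizer state,
# then only rank 0 writes it to disk.
full_osd = FSDP.full_optim_state_dict(model, optimizer)
if dist.get_rank() == 0:
    torch.save(full_osd, ckpt_path)

# Resuming: rank 0 loads the full state dict, which is then scattered back
# into shards matching the FSDP-wrapped model on every rank.
full_osd = torch.load(ckpt_path) if dist.get_rank() == 0 else None
sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, model)
optimizer.load_state_dict(sharded_osd)
```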
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23111/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23111", "html_url": "https://github.com/huggingface/transformers/pull/23111", "diff_url": "https://github.com/huggingface/transformers/pull/23111.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23111.patch", "merged_at": 1683208652000 }
https://api.github.com/repos/huggingface/transformers/issues/23110
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23110/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23110/comments
https://api.github.com/repos/huggingface/transformers/issues/23110/events
https://github.com/huggingface/transformers/pull/23110
1,692,386,095
PR_kwDOCUB6oc5PlbnN
23,110
[ONNX] Sam fix
{ "login": "michaelbenayoun", "id": 25418079, "node_id": "MDQ6VXNlcjI1NDE4MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelbenayoun", "html_url": "https://github.com/michaelbenayoun", "followers_url": "https://api.github.com/users/michaelbenayoun/followers", "following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}", "gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions", "organizations_url": "https://api.github.com/users/michaelbenayoun/orgs", "repos_url": "https://api.github.com/users/michaelbenayoun/repos", "events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelbenayoun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
MEMBER
null
# What does this PR do? This PR provides a few changes to make the ONNX export work for `SamModel`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23110/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23110", "html_url": "https://github.com/huggingface/transformers/pull/23110", "diff_url": "https://github.com/huggingface/transformers/pull/23110.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23110.patch", "merged_at": 1683040803000 }
https://api.github.com/repos/huggingface/transformers/issues/23109
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23109/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23109/comments
https://api.github.com/repos/huggingface/transformers/issues/23109/events
https://github.com/huggingface/transformers/pull/23109
1,692,369,971
PR_kwDOCUB6oc5PlYIz
23,109
Add head_mask for llama
{ "login": "fxmeng", "id": 60565778, "node_id": "MDQ6VXNlcjYwNTY1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/60565778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmeng", "html_url": "https://github.com/fxmeng", "followers_url": "https://api.github.com/users/fxmeng/followers", "following_url": "https://api.github.com/users/fxmeng/following{/other_user}", "gists_url": "https://api.github.com/users/fxmeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmeng/subscriptions", "organizations_url": "https://api.github.com/users/fxmeng/orgs", "repos_url": "https://api.github.com/users/fxmeng/repos", "events_url": "https://api.github.com/users/fxmeng/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23109). All of your documentation changes will be reflected on that endpoint.", "cc @ArthurZucker and @younesbelkada " ]
1,683
1,685
1,685
NONE
null
# What does this PR do? Support inputting a head_mask to LLaMA's forward like other models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23109/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23109", "html_url": "https://github.com/huggingface/transformers/pull/23109", "diff_url": "https://github.com/huggingface/transformers/pull/23109.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23109.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23108
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23108/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23108/comments
https://api.github.com/repos/huggingface/transformers/issues/23108/events
https://github.com/huggingface/transformers/pull/23108
1,692,352,508
PR_kwDOCUB6oc5PlUZ5
23,108
[`Flava`] Fix flava `torch.distributed.nn.functional import all_gather` issue
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23047 Flava had some code that was copy-pasted from the original repository: https://github.com/facebookresearch/multimodal/blob/c6f6e44ec6e0addfdf01695db860a6febeb2d88b/torchmultimodal/utils/distributed.py#L12 From my understanding, it seems that there are two versions of `all_gather`: - `torch.distributed.nn.functional.all_gather`, which backpropagates the gradients to all workers - `torch.distributed.all_gather`, which only backpropagates the gradients to the current worker cc @sgugger
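As a side note, here is a minimal sketch (not the Flava code itself) contrasting the two collectives described above; it assumes `torch.distributed` is already initialised and is meant to run under `torchrun` with more than one process.

```python
# Contrast of the gradient behaviour of the two all_gather variants.
import torch
import torch.distributed as dist
from torch.distributed.nn.functional import all_gather as all_gather_with_grad

def gather_with_grad(local_feats: torch.Tensor) -> torch.Tensor:
    # Differentiable all-gather: gradients flow back to every worker.
    gathered = all_gather_with_grad(local_feats)
    return torch.cat(list(gathered), dim=0)

def gather_without_grad(local_feats: torch.Tensor) -> torch.Tensor:
    # Plain all-gather: the gathered chunks are detached from the autograd graph,
    # so only the current worker's slice keeps gradients.
    gathered = [torch.zeros_like(local_feats) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, local_feats)
    gathered[dist.get_rank()] = local_feats  # re-insert the differentiable local slice
    return torch.cat(gathered, dim=0)
```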
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23108/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23108", "html_url": "https://github.com/huggingface/transformers/pull/23108", "diff_url": "https://github.com/huggingface/transformers/pull/23108.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23108.patch", "merged_at": 1683034557000 }
https://api.github.com/repos/huggingface/transformers/issues/23107
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23107/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23107/comments
https://api.github.com/repos/huggingface/transformers/issues/23107/events
https://github.com/huggingface/transformers/pull/23107
1,692,346,174
PR_kwDOCUB6oc5PlTC4
23,107
[docs] Text to speech task guide
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "PR with images: https://huggingface.co/datasets/huggingface/documentation-images/discussions/86" ]
1,683
1,683
1,683
CONTRIBUTOR
null
This PR adds a multimodal task guide on fine-tuning SpeechT5 for text-to-speech. It's based on a wonderfully [detailed notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ#scrollTo=uELTb9CcOaCp) by @hollance.
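For readers who have not seen the model, a short hedged inference sketch of the SpeechT5 pieces the guide fine-tunes; the checkpoint names are the public Microsoft ones and the random speaker embedding is only a stand-in for a real x-vector.

```python
# Minimal SpeechT5 text-to-speech inference sketch (not taken from the guide itself).
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # normally taken from an x-vector dataset

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # 1-D waveform tensor at 16 kHz
```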
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23107/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23107", "html_url": "https://github.com/huggingface/transformers/pull/23107", "diff_url": "https://github.com/huggingface/transformers/pull/23107.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23107.patch", "merged_at": 1683220633000 }
https://api.github.com/repos/huggingface/transformers/issues/23106
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23106/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23106/comments
https://api.github.com/repos/huggingface/transformers/issues/23106/events
https://github.com/huggingface/transformers/pull/23106
1,692,291,949
PR_kwDOCUB6oc5PlHcX
23,106
🌐 [i18n-KO] Translated `asr.mdx` to Korean
{ "login": "sim-so", "id": 96299403, "node_id": "U_kgDOBb1piw", "avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sim-so", "html_url": "https://github.com/sim-so", "followers_url": "https://api.github.com/users/sim-so/followers", "following_url": "https://api.github.com/users/sim-so/following{/other_user}", "gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}", "starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sim-so/subscriptions", "organizations_url": "https://api.github.com/users/sim-so/orgs", "repos_url": "https://api.github.com/users/sim-so/repos", "events_url": "https://api.github.com/users/sim-so/events{/privacy}", "received_events_url": "https://api.github.com/users/sim-so/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hope you have a great week!\r\nCould you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,683
1,685
1,684
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated the `asr.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23106/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23106/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23106", "html_url": "https://github.com/huggingface/transformers/pull/23106", "diff_url": "https://github.com/huggingface/transformers/pull/23106.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23106.patch", "merged_at": 1684243377000 }
https://api.github.com/repos/huggingface/transformers/issues/23105
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23105/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23105/comments
https://api.github.com/repos/huggingface/transformers/issues/23105/events
https://github.com/huggingface/transformers/pull/23105
1,692,169,107
PR_kwDOCUB6oc5Pks3J
23,105
Enable to use custom tracer in FX `symbolic_trace`
{ "login": "regisss", "id": 15324346, "node_id": "MDQ6VXNlcjE1MzI0MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regisss", "html_url": "https://github.com/regisss", "followers_url": "https://api.github.com/users/regisss/followers", "following_url": "https://api.github.com/users/regisss/following{/other_user}", "gists_url": "https://api.github.com/users/regisss/gists{/gist_id}", "starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regisss/subscriptions", "organizations_url": "https://api.github.com/users/regisss/orgs", "repos_url": "https://api.github.com/users/regisss/repos", "events_url": "https://api.github.com/users/regisss/events{/privacy}", "received_events_url": "https://api.github.com/users/regisss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Pinging @sgugger for final approval." ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR enables to specify the tracer to use when using `symbolic_trace` and Torch FX. For instance, this can be useful when the user wants a different tracing granularity to not enter some specific modules (e.g. see https://github.com/huggingface/optimum-habana/pull/223). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
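A hedged sketch of the kind of usage this enables: tracing with an `HFTracer` subclass that keeps whole encoder layers as leaves for coarser granularity. The keyword used to pass the tracer (written as `tracer_cls` below) is an assumption based on this PR's description, not a guaranteed API.

```python
# Sketch of symbolic tracing with a custom tracer that does not enter encoder layers.
import torch.nn as nn
from transformers import BertConfig, BertModel
from transformers.utils.fx import HFTracer, symbolic_trace

class LeafLayerTracer(HFTracer):
    def is_leaf_module(self, module: nn.Module, module_qualified_name: str) -> bool:
        # Treat each encoder layer as an opaque leaf instead of tracing inside it.
        if module_qualified_name.startswith("encoder.layer."):
            return True
        return super().is_leaf_module(module, module_qualified_name)

model = BertModel(BertConfig(num_hidden_layers=2))
traced = symbolic_trace(
    model,
    input_names=["input_ids", "attention_mask"],
    tracer_cls=LeafLayerTracer,  # assumed keyword name introduced by this PR
)
print(traced.graph)
```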
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23105/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23105", "html_url": "https://github.com/huggingface/transformers/pull/23105", "diff_url": "https://github.com/huggingface/transformers/pull/23105.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23105.patch", "merged_at": 1683132457000 }
https://api.github.com/repos/huggingface/transformers/issues/23104
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23104/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23104/comments
https://api.github.com/repos/huggingface/transformers/issues/23104/events
https://github.com/huggingface/transformers/pull/23104
1,692,129,822
PR_kwDOCUB6oc5PkkWx
23,104
Add focalnet backbone
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,683
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Adds `FocalNetBackbone` class to be used by X-Decoder and possibly other frameworks as FocalNet was published fairly recently. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
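A short, hedged sketch of how such a backbone class is typically consumed by a downstream framework; the checkpoint name and stage names below are assumptions about the released FocalNet configs rather than something stated in this PR.

```python
# Sketch of feeding images through a FocalNet backbone to get multi-scale feature maps.
import torch
from transformers import FocalNetBackbone

backbone = FocalNetBackbone.from_pretrained(
    "microsoft/focalnet-tiny",                       # assumed public checkpoint
    out_features=["stage1", "stage2", "stage3", "stage4"],  # assumed stage names
)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
for name, fmap in zip(backbone.out_features, outputs.feature_maps):
    print(name, fmap.shape)  # feature maps consumed by detection/segmentation heads
```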
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23104/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23104", "html_url": "https://github.com/huggingface/transformers/pull/23104", "diff_url": "https://github.com/huggingface/transformers/pull/23104.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23104.patch", "merged_at": 1683131562000 }
https://api.github.com/repos/huggingface/transformers/issues/23103
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23103/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23103/comments
https://api.github.com/repos/huggingface/transformers/issues/23103/events
https://github.com/huggingface/transformers/issues/23103
1,692,037,470
I_kwDOCUB6oc5k2nFe
23,103
Sentences tokenized by LLaMA's tokenizer have `bos` tokens but do not have `eos` tokens.
{ "login": "yqy2001", "id": 55196500, "node_id": "MDQ6VXNlcjU1MTk2NTAw", "avatar_url": "https://avatars.githubusercontent.com/u/55196500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yqy2001", "html_url": "https://github.com/yqy2001", "followers_url": "https://api.github.com/users/yqy2001/followers", "following_url": "https://api.github.com/users/yqy2001/following{/other_user}", "gists_url": "https://api.github.com/users/yqy2001/gists{/gist_id}", "starred_url": "https://api.github.com/users/yqy2001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yqy2001/subscriptions", "organizations_url": "https://api.github.com/users/yqy2001/orgs", "repos_url": "https://api.github.com/users/yqy2001/repos", "events_url": "https://api.github.com/users/yqy2001/events{/privacy}", "received_events_url": "https://api.github.com/users/yqy2001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same issue", "Hey! Sorry for the late reply, and thanks for opening an issue 🤗 \r\nThis is expected, because the official repository's default behaviour is the same. That is because during inference you don't need it to be added. You should initialise the tokenizer with the argument set to `True`. Tell me if this does not adresses your issue ", "Oh, I got it. Thanks for your reply! @ArthurZucker " ]
1,683
1,685
1,685
CONTRIBUTOR
null
### System Info transformers version: main Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 Python version: 3.8.16 Huggingface_hub version: 0.11.1 PyTorch version (GPU?): 1.12.1 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @sgug ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I find that the batches tokenized by LLaMA's tokenizer have `bos` tokens but do not have `eos` tokens, so my finetuned LLaMA does not stop properly during inference. Is it a bug, or are there some reasons for this practice? https://github.com/huggingface/transformers/blob/b8648290d2d97e7c7dbccd2d4a6a4f44e70d3b63/src/transformers/models/llama/tokenization_llama.py#L72 ### Expected behavior An explanation for this behaviour.
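Following the resolution in the comments, a minimal sketch of the workaround: the LLaMA tokenizer only prepends `<s>` by default, so ask it to also append `</s>` when building training examples. The local path is a placeholder.

```python
# Sketch of enabling eos tokens on the LLaMA tokenizer for fine-tuning data.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained(
    "path/to/llama-checkpoint",  # placeholder path to converted weights/tokenizer
    add_eos_token=True,          # append </s> in addition to the default <s>
)

ids = tokenizer("Hello world").input_ids
print(ids[0] == tokenizer.bos_token_id, ids[-1] == tokenizer.eos_token_id)  # True True
```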
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23103/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
https://api.github.com/repos/huggingface/transformers/issues/23103/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23102
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23102/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23102/comments
https://api.github.com/repos/huggingface/transformers/issues/23102/events
https://github.com/huggingface/transformers/issues/23102
1,691,820,277
I_kwDOCUB6oc5k1yD1
23,102
Strictly Generate JSON
{ "login": "Ryul0rd", "id": 18477649, "node_id": "MDQ6VXNlcjE4NDc3NjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/18477649?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ryul0rd", "html_url": "https://github.com/Ryul0rd", "followers_url": "https://api.github.com/users/Ryul0rd/followers", "following_url": "https://api.github.com/users/Ryul0rd/following{/other_user}", "gists_url": "https://api.github.com/users/Ryul0rd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ryul0rd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ryul0rd/subscriptions", "organizations_url": "https://api.github.com/users/Ryul0rd/orgs", "repos_url": "https://api.github.com/users/Ryul0rd/repos", "events_url": "https://api.github.com/users/Ryul0rd/events{/privacy}", "received_events_url": "https://api.github.com/users/Ryul0rd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThis seems very similar to this repo: https://github.com/1rgs/jsonformer. It's a wrapper around HF Transformer models (specifically, `xxxForCausalLM` models) to only fill in the values, not the keys of JSON schemas.", "Thanks for pointing out that repo to me as I was unaware. It does claim to do exactly what I want but does have some issues currently, including both a failure to actually generate correct JSON reliably (The example I used above made it throw an error) and performance issues similar to my own. I took a look at how they were approaching the problem and I don't think they can fix their performance issues without a complete rewrite either.\r\n\r\nI think I'll take a stab at the Rust rewrite and see how it goes. Would having Rust in transformers be an issue? If so I can just make my own library but I do think it would be better if the feature had the visibility boost of actually being in transformers.", "Since the fast core of the tokenizers library is also implemented in Rust it shouldn't be an issue to have your implementation in Rust as well.\n\nBtw: Did you take a look at [Kor](https://github.com/eyurtsev/)? It tries a similar thing within langchain...", "I did look at Kor. My issue with it is that it's just using the \"prompt and hope for the best\" approach rather than actually providing any sort of guarantees about the output like jsonformer and my approach are doing.", "Yes it's more like a prompt template engine. One cool feature is that they support pydantic models.", "@Ryul0rd author of https://github.com/1rgs/jsonformer here, ended up fixing a few bugs and perf issues over the last day. Can you try once again? If it doesn't work can you send me a repro case? Thanks!\r\n\r\nExample notebook here: https://colab.research.google.com/github/1rgs/jsonformer/blob/main/Jsonformer_example.ipynb", "There are now several libraries that support various methods of getting structured output from LMs. In addition to jsonformer, there's also [guidance](https://github.com/microsoft/guidance) and [LMQL](https://lmql.ai/). I'd say all 3 are still in a pretty rough/early stage at the moment but they've all got slightly different ideas about how to do things so. It still might be worth adding something like this to transformers at some point but it might be worth holding off a bit until we see which ideas work out best.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "any update on this", "Another work which might be worth exploring: https://github.com/normal-computing/outlines." ]
1,683
1,692
1,687
NONE
null
### Feature request It would be nice if we could force a generative LM to only produce JSON with a specific schema. I was thinking the end user should be able to do something as simple as `model.generate_json(input_ids, tokenizer, schema=MyDataclass)`. More specifically, `MyDataclass` should be any dataclass made up of the following types: int, float, bool, str, list, option, enum (treated as strings in the JSON), or another dataclass following these rules. I've already gone ahead and done a proof of concept showing this is possible [here](https://github.com/Ryul0rd/llm-json-output). I haven't put it in a PR because the code is a mess and it has very poor performance (10x or more normal inference time) but it otherwise works. The performance aspect in particular is an issue because I did some profiling and there doesn't seem to be anything I'm doing that's obviously inefficient so this might just be a limitation of Python and using a language like C++ or Rust might be necessary. I'm not sure how the maintainers feel about adding either of these languages to the library. Another feature my implementation doesn't currently support is adding additional constraints beyond types. eg. min or max length on strings or arrays, min or max values on ints and floats. The max length on strings and arrays is particularly important because without that you can't guarantee the full JSON string will fit in your output token budget. ### Motivation People are starting to build LLMs into larger apps by creating plugins/chains/agents etc. In many of these cases, we want the LM to produce some structured output of some kind that we can then parse. JSON is a common choice here but simply prompting a model and hoping for the best doesn't always work exactly the way you want it to. The black box nature of ANNs means failures are hard to predict and debug. Forcing the model to output valid JSON with a certain schema would improve the ability of developers to reason about the space of the model output. This also has the rather nice property that you can get reasonable output from models that aren't instruct/assistant finetuned. Check out this example from GPT2. The first 3 lines are the prompt and the fourth is generated: ``` Plain Text: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: {"name":"Max","age":37,"is_male":true,"email_address":null} ``` Without forcing it to adhere to the JSON as output, GPT2 produces the following: ``` Plain Text: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 year old guy. His hobbies include gaming and martial arts. JSON: Max is a 37 ``` ### Your contribution I'd be open to writing the final code and making a PR once I get people's thoughts/advice depending on what people think of the performance/language issue. I've been learning Rust recently but am not an expert.
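As a rough illustration (not the proposal's implementation), `generate()` already exposes a hook that this kind of schema-constrained decoding can build on: a callback that restricts which tokens may be sampled at each step. The sketch below uses a toy constraint (digits only, standing in for an integer field) instead of a real JSON-schema state machine.

```python
# Hedged sketch of constrained decoding via prefix_allowed_tokens_fn.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Precompute the token ids whose text is purely digits (a stand-in "schema").
allowed = [
    tok_id for tok_id in range(len(tokenizer))
    if tokenizer.decode([tok_id]).strip().isdigit()
] + [tokenizer.eos_token_id]

def digits_only(batch_id: int, input_ids: torch.Tensor):
    # A real implementation would inspect the JSON generated so far and return
    # only the tokens that keep it valid for the target schema.
    return allowed

prompt = 'Max is a 37 year old guy. JSON: {"age":'
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=3,
    prefix_allowed_tokens_fn=digits_only,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:]))
```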
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23102/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23101
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23101/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23101/comments
https://api.github.com/repos/huggingface/transformers/issues/23101/events
https://github.com/huggingface/transformers/pull/23101
1,691,816,553
PR_kwDOCUB6oc5PjhZp
23,101
Update perf_train_gpu_one.mdx
{ "login": "aasthavar", "id": 81507417, "node_id": "MDQ6VXNlcjgxNTA3NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/81507417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aasthavar", "html_url": "https://github.com/aasthavar", "followers_url": "https://api.github.com/users/aasthavar/followers", "following_url": "https://api.github.com/users/aasthavar/following{/other_user}", "gists_url": "https://api.github.com/users/aasthavar/gists{/gist_id}", "starred_url": "https://api.github.com/users/aasthavar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aasthavar/subscriptions", "organizations_url": "https://api.github.com/users/aasthavar/orgs", "repos_url": "https://api.github.com/users/aasthavar/repos", "events_url": "https://api.github.com/users/aasthavar/events{/privacy}", "received_events_url": "https://api.github.com/users/aasthavar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23101). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,683
1,688
1,688
NONE
null
# What does this PR do? Minor changes - Corrected a word's spelling. Changed markdown syntax for heading and for URL to be seen as a link. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23101/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23101", "html_url": "https://github.com/huggingface/transformers/pull/23101", "diff_url": "https://github.com/huggingface/transformers/pull/23101.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23101.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23100
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23100/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23100/comments
https://api.github.com/repos/huggingface/transformers/issues/23100/events
https://github.com/huggingface/transformers/issues/23100
1,691,776,798
I_kwDOCUB6oc5k1nce
23,100
gen_kwargs in Seq2SeqTrainer
{ "login": "TJSun009", "id": 51209730, "node_id": "MDQ6VXNlcjUxMjA5NzMw", "avatar_url": "https://avatars.githubusercontent.com/u/51209730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TJSun009", "html_url": "https://github.com/TJSun009", "followers_url": "https://api.github.com/users/TJSun009/followers", "following_url": "https://api.github.com/users/TJSun009/following{/other_user}", "gists_url": "https://api.github.com/users/TJSun009/gists{/gist_id}", "starred_url": "https://api.github.com/users/TJSun009/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TJSun009/subscriptions", "organizations_url": "https://api.github.com/users/TJSun009/orgs", "repos_url": "https://api.github.com/users/TJSun009/repos", "events_url": "https://api.github.com/users/TJSun009/events{/privacy}", "received_events_url": "https://api.github.com/users/TJSun009/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "My bad I realised the preprocess_logits_for_metrics function that I found [here](https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941/13) was truncating the generation output.\r\n\r\n" ]
1,683
1,683
1,683
NONE
null
https://github.com/huggingface/transformers/blob/b8648290d2d97e7c7dbccd2d4a6a4f44e70d3b63/src/transformers/trainer_seq2seq.py#L257 Hi, I'm trying to use the Seq2SeqTrainer with generation in the evaluation_loop and it looks like the config isn't being properly passed to the prediction step. I'm having to manually set self._gen_kwargs as this is not initialised anywhere else in the evaluation_loop. It is initialised in the evaluate call but this uses the Trainer evaluate implementation which lacks generation. Am I missing something?
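For reference, a hedged sketch of how generation settings normally reach the prediction step: enable `predict_with_generate` and pass generation kwargs to `evaluate()`, which stores them on the trainer as `_gen_kwargs`. The model, datasets and tokenizer below are placeholders, so this shows the pattern rather than a runnable end-to-end script.

```python
# Sketch of forwarding generation kwargs through Seq2SeqTrainer.evaluate().
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,   # use model.generate() during evaluation/prediction
    evaluation_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,                  # placeholder: a seq2seq model
    args=args,
    train_dataset=train_dataset,  # placeholder datasets
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,          # placeholder tokenizer
)

# Generation kwargs passed here are stored as self._gen_kwargs and used in prediction_step.
metrics = trainer.evaluate(max_length=128, num_beams=4)
```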
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23100/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23099
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23099/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23099/comments
https://api.github.com/repos/huggingface/transformers/issues/23099/events
https://github.com/huggingface/transformers/issues/23099
1,691,738,991
I_kwDOCUB6oc5k1eNv
23,099
learning rate resets on resumption from checkpoint
{ "login": "agneet42", "id": 22055826, "node_id": "MDQ6VXNlcjIyMDU1ODI2", "avatar_url": "https://avatars.githubusercontent.com/u/22055826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agneet42", "html_url": "https://github.com/agneet42", "followers_url": "https://api.github.com/users/agneet42/followers", "following_url": "https://api.github.com/users/agneet42/following{/other_user}", "gists_url": "https://api.github.com/users/agneet42/gists{/gist_id}", "starred_url": "https://api.github.com/users/agneet42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agneet42/subscriptions", "organizations_url": "https://api.github.com/users/agneet42/orgs", "repos_url": "https://api.github.com/users/agneet42/repos", "events_url": "https://api.github.com/users/agneet42/events{/privacy}", "received_events_url": "https://api.github.com/users/agneet42/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "When doing `model_path = '=/checkpoint-38000'` you are not resuming training from the checkpoint, you are starting a new fresh training with the model of this checkpoint.\r\n\r\nTo resume training from a checkpoint, you need to use the [`resume_from_checkpoint` argument](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.train.resume_from_checkpoint) of `Trainer.train`.", "@sgugger not sure what I am missing : \r\n\r\nCode : \r\n\r\n```\r\nmodel_path = 'output_dir/checkpoint-38000'\r\n \r\n model = AutoModelForCausalLM.from_pretrained(model_path)\r\n tokenizer = AutoTokenizer.from_pretrained(model_path)\r\n tokenizer.pad_token_id = tokenizer.eos_token_id\r\n\r\n train_path = '/train/*'\r\n train_data = glob(train_path)\r\n \r\n val_path = '/val/*'\r\n val_data = glob(val_path)\r\n\r\n dataset = load_dataset(\"json\", data_files = {\"train\": train_data, \"validation\" : val_data})\r\n dataset = dataset.map(transform, batched=True, remove_columns = [\"id\" ,\"tokens\"])\r\n \r\n train_dataset = dataset[\"train\"]\r\n val_dataset = dataset[\"validation\"]\r\n \r\n print('Training data length', len(train_dataset))\r\n print('Validation data length', len(val_dataset))\r\n \r\n parser = HfArgumentParser(TrainingArguments)\r\n parser.add_argument(\"--model_name_or_dir\")\r\n \r\n training_args, args = parser.parse_args_into_dataclasses()\r\n transformers.logging.set_verbosity_debug()\r\n \r\n trainer = Trainer(\r\n args=training_args,\r\n model=model,\r\n tokenizer = tokenizer,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n data_collator=DataCollatorForTokenClassification(tokenizer, padding='longest'),\r\n compute_metrics=None, \r\n callbacks = [TensorBoardCallback()] \r\n )\r\n if trainer.is_world_process_zero():\r\n print(dataset)\r\n \r\n trainer.pop_callback(MLflowCallback)\r\n \r\n if training_args.do_train:\r\n if trainer.is_world_process_zero():\r\n print(\"Training...\")\r\n\r\n start = time.time()\r\n trainer.train(resume_from_checkpoint=True)\r\n mlflow.log_metric(\r\n \"time/epoch\", (time.time() - start) / 60 / training_args.num_train_epochs\r\n )\r\n```\r\n\r\nScript : \r\n`\r\naccelerate launch train_cerebras_checkpoint.py \\\r\n--resume_from_checkpoint True \\\r\n--output_dir /output_dir \\\r\n--num_train_epochs 30 \\\r\n--do_train --per_device_train_batch_size 10 \\\r\n--fsdp \"full_shard auto_wrap\" \\\r\n--fsdp_transformer_layer_cls_to_wrap \"GPT2Block\" \\\r\n--logging_steps 1 \\\r\n--save_strategy \"steps\" \\\r\n--save_steps 2000 \\\r\n--fp16 \\\r\n--gradient_checkpointing true\r\n`\r\nIn the logs, I see the following --\r\n\r\n`Continuing training from checkpoint, will skip to saved global_step`\r\n\r\nHowever the learning rate still resets. 
I expect it to be around `e-06` but it is at `e-05`\r\nI can confirm that `output_dir` contains `checkpoint-38000`", "I also debugged and noticed that the execution goes through here - https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/trainer.py#L2333\r\n\r\nFurthermore, I checked the last_lr from my optimizer and it seems to be as I see in the training_state.json : \r\n```\r\npath = '/output_dir/checkpoint-38000/scheduler.pt'\r\nx = torch.load(path)\r\nx['_last_lr']\r\n[5.258219395091516e-06, 5.258219395091516e-06]\r\n```\r\n\r\nNot able to understand why therefore the LR starts from `e-05` when I resume from checkpoint.\r\n@sgugger", "@agneet42 \r\nIs there a typo here:\r\n```\r\nmodel_path = 'output_dir/checkpoint-38000'\r\n```\r\nShould be below or are you running under root path?\r\n```\r\nmodel_path = '/output_dir/checkpoint-38000'\r\n```", "@sgugger Hi, I'm using `Trainer.train(last_checkpoint_path)` but still got lr reset. Here is some info might help.\r\n\r\n- checkpoint folder files\r\n```text\r\n- config.json\r\n- generation_config.json\r\n- optimizer.pt\r\n- pytorch_model.bin\r\n- rng_state.pth\r\n- scaler.pt\r\n- trainer_state.json\r\n- training_args.bin\r\n```\r\n\r\n- Code: Load from checkpoint\r\n```python\r\n\r\nargs.from_checkpoint = \"./checkpoint-30000\"\r\n\r\n\r\n# Define the training arguments\r\ntraining_args = TrainingArguments(\r\n output_dir=args.save_dir,\r\n overwrite_output_dir=True,\r\n num_train_epochs=2,\r\n per_device_train_batch_size=12,\r\n per_device_eval_batch_size=12,\r\n gradient_accumulation_steps=3,\r\n evaluation_strategy='steps',\r\n eval_steps=40000,\r\n save_steps=10000,\r\n logging_steps=100,\r\n learning_rate=5e-5,\r\n warmup_steps=1000,\r\n fp16=True,\r\n logging_dir='./logs'\r\n)\r\n\r\n# Create a trainer instance\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset, \r\n data_collator=lambda data: {'input_ids': torch.stack([f[0] for f in data]),\r\n 'attention_mask': torch.stack([f[1] for f in data]),\r\n 'labels': torch.stack([f[0] for f in data])})\r\n\r\n# Fine-tune the model\r\ntrainer.train(args.from_checkpoint)\r\n```\r\n\r\nLog: Previous training logs\r\n```text\r\n{'loss': 0.1625, 'learning_rate': 3.67605680426363e-06, 'epoch': 0.93} \r\n{'loss': 0.1627, 'learning_rate': 3.3450296269323714e-06, 'epoch': 0.94} \r\n{'loss': 0.1618, 'learning_rate': 3.0140024496011123e-06, 'epoch': 0.94} \r\n{'loss': 0.1617, 'learning_rate': 2.6829752722698537e-06, 'epoch': 0.95} \r\n{'loss': 0.1623, 'learning_rate': 2.3519480949385943e-06, 'epoch': 0.95} \r\n{'loss': 0.1608, 'learning_rate': 2.0209209176073357e-06, 'epoch': 0.96}\r\n```\r\n\r\nLog: After load from checkpoint logs\r\n```text\r\n{'loss': 0.1715, 'learning_rate': 2.631964570647042e-05, 'epoch': 0.96} \r\n{'loss': 0.1757, 'learning_rate': 2.6238236347650525e-05, 'epoch': 0.97} \r\n{'loss': 0.1758, 'learning_rate': 2.6156826988830634e-05, 'epoch': 0.97} \r\n{'loss': 0.1766, 'learning_rate': 2.6075417630010747e-05, 'epoch': 0.97} \r\n{'loss': 0.1754, 'learning_rate': 2.5994008271190857e-05, 'epoch': 0.98} \r\n{'loss': 0.1789, 'learning_rate': 2.5912598912370966e-05, 'epoch': 0.98} \r\n{'loss': 0.1797, 'learning_rate': 2.583118955355108e-05, 'epoch': 0.98} \r\n{'loss': 0.18, 'learning_rate': 2.5750594288319384e-05, 'epoch': 0.99}\r\n```\r\n\r\n", "@wmhcqw I still don't have any code I can reproduce on this issue. 
To be able to reproduce the code needs to include the dataset/model creation as whole as the creation of the checkpoint from which you are then resuming training.\r\n\r\nResuming training is tested in our CI and there is no issue of learning rate resetting there, so the examples of this situation we have on our side work. To debug what is particular to the bug you are encountering, I need to be able to reproduce it.", "@sgugger I think I've found the problem. Here's the sample code to reproduce this issue.\r\n\r\nFile: train.py\r\n```python\r\nimport os\r\nimport argparse\r\n\r\nfrom tqdm import tqdm\r\n\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.utils.data import DataLoader, Dataset\r\nfrom transformers import GPTNeoConfig, GPTNeoForCausalLM, GPT2Tokenizer, TrainingArguments, Trainer\r\n\r\nimport random\r\nfrom random import randint\r\n\r\n\r\nclass DummyDataset(Dataset):\r\n \r\n def __init__(self, words, tokenizer):\r\n self.tokenizer = tokenizer\r\n self.data = {\r\n \"input_ids\": [],\r\n \"attention_mask\": []\r\n }\r\n for word in tqdm(words):\r\n res = tokenizer(word)\r\n self.data[\"input_ids\"].append(torch.LongTensor(res[\"input_ids\"]))\r\n self.data[\"attention_mask\"].append(torch.LongTensor(res[\"attention_mask\"]))\r\n \r\n def __len__(self):\r\n return len(self.data['input_ids'])\r\n \r\n def __getitem__(self, idx):\r\n input_ids = self.data['input_ids'][idx]\r\n attention_mask = self.data['attention_mask'][idx]\r\n return input_ids, attention_mask\r\n \r\n\r\nif __name__ == \"__main__\":\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n # print(tokenizer(\"Hello World\"))\r\n \r\n names=[\"We\",\"I\",\"They\",\"He\",\"She\",\"Jack\",\"Jim\",\"Rose\",\"You\"]\r\n verbs=[\"was\", \"is\", \"are\", \"were\"]\r\n nouns=[\"playing a game\", \"watching television\", \"talking\", \"dancing\", \"speaking\", \"playing basketball\", \"eating dinner\"]\r\n\r\n random.seed(42)\r\n train_sens = []\r\n for i in range(1000):\r\n train_sens.append(names[randint(0,len(names)-1)]+\" \"+verbs[randint(0,len(verbs)-1)]+\" \"+nouns[randint(0,len(nouns)-1)])\r\n \r\n eval_sens = []\r\n for i in range(100):\r\n eval_sens.append(names[randint(0,len(names)-1)]+\" \"+verbs[randint(0,len(verbs)-1)]+\" \"+nouns[randint(0,len(nouns)-1)])\r\n \r\n train_dataset = DummyDataset(train_sens, tokenizer)\r\n eval_dataset = DummyDataset(eval_sens, tokenizer)\r\n \r\n config = GPTNeoConfig(\r\n vocab_size=len(tokenizer.get_vocab()),\r\n n_positions=1024,\r\n n_ctx=2048,\r\n n_embd=768,\r\n n_layer=1,\r\n n_head=1,\r\n intermediate_size=3072\r\n )\r\n model = GPTNeoForCausalLM(config).cuda()\r\n \r\n training_args = TrainingArguments(\r\n output_dir=\"./dummpy_model\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1, # 2\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n gradient_accumulation_steps=1,\r\n evaluation_strategy='steps',\r\n eval_steps=1000,\r\n save_steps=100,\r\n logging_steps=10,\r\n learning_rate=5e-5,\r\n warmup_steps=10,\r\n fp16=True,\r\n logging_dir='./logs'\r\n )\r\n \r\n trainer = Trainer(\r\n model=model,\r\n args=training_args, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset, \r\n data_collator=lambda data: {'input_ids': torch.stack([f[0] for f in data]),\r\n 'attention_mask': torch.stack([f[1] for f in data]),\r\n 'labels': torch.stack([f[0] for f in data])})\r\n\r\n trainer.train()\r\n # trainer.train(\"./dummpy_model/checkpoint-1000\")\r\n \r\n```\r\n\r\nSteps to reproduce:\r\n\r\n1. 
run `python train.py`, and you will get a checkpoint folder named 'checkpoint-1000'\r\n2. change the `train.py` code\r\n - change the training_args num_train_epcohs from 1->2 (**This is the reason, the total steps changed.**)\r\n - comment trainer.train()\r\n - uncomment trainer.train(\"./dummpy_model/checkpoint-1000\")\r\n3. run `python train.py` again, training resume from step 1000 but with reset lr.\r\n\r\nLogs:\r\n\r\nStep1. `python train.py`\r\n```text\r\n{'loss': 1.6059, 'learning_rate': 2.7878787878787883e-05, 'epoch': 0.45} \r\n{'loss': 1.6617, 'learning_rate': 2.7373737373737374e-05, 'epoch': 0.46} \r\n{'loss': 1.4485, 'learning_rate': 2.686868686868687e-05, 'epoch': 0.47} \r\n{'loss': 1.5028, 'learning_rate': 2.636363636363636e-05, 'epoch': 0.48} \r\n{'loss': 1.5889, 'learning_rate': 2.585858585858586e-05, 'epoch': 0.49} \r\n{'loss': 1.3763, 'learning_rate': 2.5353535353535356e-05, 'epoch': 0.5} \r\n{'loss': 1.5049, 'learning_rate': 2.4848484848484847e-05, 'epoch': 0.51} \r\n... \r\n{'loss': 1.2957, 'learning_rate': 6.060606060606061e-07, 'epoch': 0.99} \r\n{'loss': 1.406, 'learning_rate': 1.0101010101010101e-07, 'epoch': 1.0} \r\n{'eval_loss': 1.3103933334350586, 'eval_runtime': 5.7138, 'eval_samples_per_second': 17.502, 'eval_steps_per_second': 17.502, 'epoch': 1.0} \r\n{'train_runtime': 497.58, 'train_samples_per_second': 2.01, 'train_steps_per_second': 2.01, 'train_loss': 1.8285735349655152, 'epoch': 1.0} \r\n```\r\n\r\nStep3. `python train.py`\r\n```text\r\n{'loss': 1.5102, 'learning_rate': 2.492462311557789e-05, 'epoch': 1.01} \r\n{'loss': 1.3294, 'learning_rate': 2.4673366834170854e-05, 'epoch': 1.02} \r\n{'loss': 1.511, 'learning_rate': 2.442211055276382e-05, 'epoch': 1.03} \r\n{'loss': 1.4702, 'learning_rate': 2.4170854271356786e-05, 'epoch': 1.04} \r\n{'loss': 1.5096, 'learning_rate': 2.391959798994975e-05, 'epoch': 1.05} \r\n{'loss': 1.4866, 'learning_rate': 2.3668341708542715e-05, 'epoch': 1.06} \r\n{'loss': 1.4644, 'learning_rate': 2.3417085427135678e-05, 'epoch': 1.07} \r\n{'loss': 1.2187, 'learning_rate': 2.3165829145728644e-05, 'epoch': 1.08} \r\n{'loss': 1.5627, 'learning_rate': 2.291457286432161e-05, 'epoch': 1.09} \r\n{'loss': 1.5059, 'learning_rate': 2.2663316582914573e-05, 'epoch': 1.1} \r\n```\r\n\r\nAs you can see, the learning rate is reset to where epoch 0.5, so I think the learning rate is not saved but the steps, and learning rate is calculated according to the total steps.\r\n\r\nThis is not a bug. I thought the learning rate is saved but I was wrong. Changing training arguments before loading from checkpoint is not an expected behaviour, I think.\r\n", "> 2. change the train.py code change the training_args num_train_epcohs from 1->2 (This is the reason, the total steps changed.)\r\n\r\nYou cannot change a single a training argument when resuming training and expect the training to resume properly. This is the source of the bug, not the `Trainer` itself.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sgugger I am facing related problem, though not pertaining to learning rate but relates to resuming the training with change in GPUs available. 
I have written the detailed issue [here (huggingface forums)](https://discuss.huggingface.co/t/skipped-batches-do-not-consider-distributed-training/43832). Would really appreciate if you can help! \r\n\r\nThanks and regards!", "There used to be a bug in huggingface that hf loses control of resuming lr_scheduler when using deepspeed. The newest version have fixed it. Ref: https://github.com/huggingface/transformers/issues/24656#issuecomment-1733069714\r\n\r\n\r\n> Resuming training is tested in our CI and there is no issue of learning rate resetting there, so the examples of this situation we have on our side work. To debug what is particular to the bug you are encountering, I need to be able to reproduce it.\r\n\r\n@sgugger Have you considered deepspeed when designing the CI?" ]
1,682
1,695
1,687
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.13.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger @stas00 @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``Steps to reproduce the behaviour : 1. Training Script -- ``` if __name__ == "__main__": model_path = '/checkpoint-38000' model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) tokenizer.pad_token_id = tokenizer.eos_token_id train_path = '/train/*' train_data = glob(train_path) val_path = 'val/*' val_data = glob(val_path) dataset = load_dataset("json", data_files = {"train": train_data, "validation" : val_data}) dataset = dataset.map(transform, batched=True, remove_columns = ["id" ,"tokens"]) train_dataset = dataset["train"] val_dataset = dataset["validation"] parser = HfArgumentParser(TrainingArguments) parser.add_argument("--model_name_or_dir") training_args, args = parser.parse_args_into_dataclasses() transformers.logging.set_verbosity_debug() trainer = Trainer( model, training_args, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=tokenizer, data_collator=DataCollatorForTokenClassification(tokenizer, padding='longest'), compute_metrics=None, callbacks = [TensorBoardCallback()] ) if trainer.is_world_process_zero(): print(dataset) trainer.pop_callback(MLflowCallback) if training_args.do_train: if trainer.is_world_process_zero(): print("Training...") start = time.time() trainer.train(model_path=model_path) mlflow.log_metric( "time/epoch", (time.time() - start) / 60 / training_args.num_train_epochs ) ``` 2. Params -- ```export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 accelerate launch train_cerebras_checkpoint.py \ --output_dir output_dir \ --num_train_epochs 30 \ --do_train --per_device_train_batch_size 10 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap "GPT2Block" \ --logging_steps 1 \ --save_strategy "steps" \ --save_steps 2000 \ --fp16 \ --gradient_checkpointing true ``` 3. 
trainer_state.json (only last few epochs shown)-- ``` { "epoch": 13.47, "learning_rate": 5.270405303307255e-06, "loss": 2.4699, "step": 37990 }, { "epoch": 13.47, "learning_rate": 5.2691867124856815e-06, "loss": 2.4614, "step": 37991 }, { "epoch": 13.47, "learning_rate": 5.267968121664108e-06, "loss": 2.3527, "step": 37992 }, { "epoch": 13.47, "learning_rate": 5.266749530842534e-06, "loss": 2.42, "step": 37993 }, { "epoch": 13.47, "learning_rate": 5.2655309400209605e-06, "loss": 2.6322, "step": 37994 }, { "epoch": 13.47, "learning_rate": 5.264312349199386e-06, "loss": 2.566, "step": 37995 }, { "epoch": 13.47, "learning_rate": 5.263093758377812e-06, "loss": 2.5026, "step": 37996 }, { "epoch": 13.47, "learning_rate": 5.261875167556239e-06, "loss": 2.6096, "step": 37997 }, { "epoch": 13.47, "learning_rate": 5.260656576734664e-06, "loss": 2.7513, "step": 37998 }, { "epoch": 13.47, "learning_rate": 5.2594379859130905e-06, "loss": 2.5066, "step": 37999 }, { "epoch": 13.48, "learning_rate": 5.258219395091516e-06, "loss": 2.7268, "step": 38000 } ``` 4. Learning rate after resumption -- ``` 'loss': 2.4654, 'learning_rate': 2.754905437352246e-05, 'epoch': 13.48} {'loss': 2.8525, 'learning_rate': 2.754905437352246e-05, 'epoch': 13.48} {'loss': 2.9781, 'learning_rate': 2.7548463356973997e-05, 'epoch': 13.48} {'loss': 2.8067, 'learning_rate': 2.7547872340425534e-05, 'epoch': 13.48} ``` ### Expected behavior 1. Learning rate should resume from the last stored LR. 2. The loss seems higher post resumption as compared to before (Maybe due to LR?) 3. I also notice that the time per iteration has almost double (19s to 38s), even though I am using the same number of GPUs
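A hedged sketch of the resume pattern the maintainers expect, reusing the `model`, datasets, and tokenizer defined in the script above (`resume_from_checkpoint` is the current `Trainer.train` argument; all `TrainingArguments` values must stay identical to the interrupted run so the total step count, and therefore the schedule, is unchanged):

```python
from transformers import Trainer, TrainingArguments

# Same values as the original run; changing e.g. num_train_epochs here would
# change the total number of steps and shift the resumed learning-rate schedule.
training_args = TrainingArguments(
    output_dir="output_dir",
    num_train_epochs=30,
    per_device_train_batch_size=10,
    save_strategy="steps",
    save_steps=2000,
    fp16=True,
    gradient_checkpointing=True,
)

trainer = Trainer(
    model=model,                  # defined as in the script above
    args=training_args,
    train_dataset=train_dataset,  # defined as in the script above
    eval_dataset=val_dataset,
    tokenizer=tokenizer,
)
trainer.train(resume_from_checkpoint="/checkpoint-38000")
```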
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23099/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23098
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23098/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23098/comments
https://api.github.com/repos/huggingface/transformers/issues/23098/events
https://github.com/huggingface/transformers/pull/23098
1,691,731,103
PR_kwDOCUB6oc5PjPXr
23,098
fix: Fix incorrect config loading in AutoTokenizer.from_pretrained
{ "login": "zsaladin", "id": 6466389, "node_id": "MDQ6VXNlcjY0NjYzODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zsaladin", "html_url": "https://github.com/zsaladin", "followers_url": "https://api.github.com/users/zsaladin/followers", "following_url": "https://api.github.com/users/zsaladin/following{/other_user}", "gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions", "organizations_url": "https://api.github.com/users/zsaladin/orgs", "repos_url": "https://api.github.com/users/zsaladin/repos", "events_url": "https://api.github.com/users/zsaladin/events{/privacy}", "received_events_url": "https://api.github.com/users/zsaladin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23098). All of your documentation changes will be reflected on that endpoint.", "No this fix is incorrect. The problem lies in the checkpoint you are using, which does not specify the tokenizer class in the `tokenizer_config.json` present in the subfolder. If this was done properly, this path would not be executed.", "The current function finds `config.json` in subfolder not root if tokenizer class in `tokenizer_config.json` is not specified. So `AutoTokenizer.from_pretrained(\"facebook/rag-token-base\", subfolder=\"generator_tokenizer\")` fails since [facebook/rag-token-base/generator_tokenizer](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer) doesn't have `config.json`.\r\n\r\nThis PR makes the function find `config.json` in root not subfolder to run the example code described in the document. Or \r\n`config.json` has to be put in [facebook/rag-token-base/generator_tokenizer](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer)?", "As I said above, the function should not even attempt to find a `config.json`. It only does so because the tokenizer config is not right.", "Then there are several problems in this.\r\n\r\n1. Docs of [AutoTokenizer.from_pretrained](https://huggingface.co/docs/transformers/v4.28.1/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained) and [PreTrainedTokenizerBase.from_pretrained](https://huggingface.co/docs/transformers/v4.28.1/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.from_pretrained) should be fixed. `subfolder` in the docs is decsribed with incorrect example [facebook/rag-token-base](https://huggingface.co/facebook/rag-token-base/tree/main/generator_tokenizer). It may cause confusion.\r\n\r\n2. Correct error message has to be shown like \"tokenizer class is not specified in tokenizer_config.json\" instead of finding `config.json`\r\n\r\nDo I understand your comment properly?", "No, there is one problem and it is that the tokenizer config file in that repo is wrong and should be fixed. There is nothing to change after that.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,682
1,688
1,688
NONE
null
# What does this PR do? The description in argument `subfolder` of `AutoTokenizer.from_pretrained` doesn't work properly. > subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here. The code the error occurs ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base", subfolder="generator_tokenizer") ``` The error message ``` None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. Traceback (most recent call last): File "/Users/daehee/Workspace/Projects/transformers/main.py", line 4, in <module> tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base", subfolder="generator_tokenizer") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/models/auto/tokenization_auto.py", line 657, in from_pretrained config = AutoConfig.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/models/auto/configuration_auto.py", line 922, in from_pretrained config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/configuration_utils.py", line 574, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/configuration_utils.py", line 629, in _get_config_dict resolved_config_file = cached_file( ^^^^^^^^^^^^ File "/Users/daehee/Workspace/Projects/transformers/src/transformers/utils/hub.py", line 404, in cached_file raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.") OSError: Could not locate generator_tokenizer/config.json inside facebook/rag-token-base. ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23098/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23098", "html_url": "https://github.com/huggingface/transformers/pull/23098", "diff_url": "https://github.com/huggingface/transformers/pull/23098.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23098.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23097
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23097/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23097/comments
https://api.github.com/repos/huggingface/transformers/issues/23097/events
https://github.com/huggingface/transformers/pull/23097
1,691,642,990
PR_kwDOCUB6oc5Pi8s_
23,097
Sliding window pipeline with average of logits
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23097). All of your documentation changes will be reflected on that endpoint.", "Can you provide some kind of benchmarks in terms fo benefits ?\r\n\r\nIf this is better in some form we can consider adding it in the pipelines.\r\n\r\nIn general we try to avoid adding new pipelines which don't change the I/O.\r\nAlso you can have your own pipeline coding on the hub directly : https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/pipelines#pipeline-custom-code", "@Narsil, @wigwit and I can work on trying to benchmark this (versus the existing approach) on an NER dataset.\r\n\r\nAlso, do you think this functionality would actually be more at home inside the existing `TokenClassificationPipeline` (as an alternative to the existing extract-entities-then-resolve-overlaps approach)? It could be activated by the user passing `average_logits=True` as one of the `__init__()` parameters. I'm realizing that would probably make more sense than creating another pipeline class and \"task\" just for the sliding window behavior.", "Thanks for your PR but this looks more like a pipeline that would benefit from living entirely on the Hub using the [custom pipeline](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub) API than being added into Transformers.", "Should this be closed?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,682
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? This adds a variation on the existing `ChunkingPipeline` approach for handling strings with more than `model_max_length` tokens. After using the tokenizer to split the text into chunks (identically to how `ChunkingPipeline` does so), `SlidingWindowTokenClassificationPipeline` then averages the values of the logits for each token (across all sliding "windows" that happen to cover that token), and finally feeds those logits into the usual entity-extraction logic. The existing implementation of `TokenClassificationPipeline` instead runs entity extraction on each window separately, then takes the highest-scoring entity in case of any overlap. Implements #14631 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. **[Link](https://github.com/huggingface/transformers/issues/14631) to discussion** - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @Narsil
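A minimal sketch (not the PR's actual code) of the averaging step the description refers to: per-token logits are summed over every window that covers a token, divided by the number of covering windows, and only then fed to entity extraction.

```python
import numpy as np

def average_window_logits(num_tokens, window_logits, window_offsets):
    """window_logits[i] has shape (window_length, num_labels); window_offsets[i]
    is the index of that window's first token in the full sequence."""
    num_labels = window_logits[0].shape[-1]
    summed = np.zeros((num_tokens, num_labels))
    counts = np.zeros((num_tokens, 1))
    for logits, start in zip(window_logits, window_offsets):
        end = start + logits.shape[0]
        summed[start:end] += logits
        counts[start:end] += 1
    # Every token should be covered by at least one window, but guard anyway.
    return summed / np.maximum(counts, 1)

# Two windows of 4 tokens with stride 2 over a 6-token sequence, 3 labels.
rng = np.random.default_rng(0)
windows = [rng.normal(size=(4, 3)), rng.normal(size=(4, 3))]
averaged = average_window_logits(6, windows, window_offsets=[0, 2])
print(averaged.shape)  # (6, 3)
```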
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23097/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23097", "html_url": "https://github.com/huggingface/transformers/pull/23097", "diff_url": "https://github.com/huggingface/transformers/pull/23097.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23097.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23096
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23096/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23096/comments
https://api.github.com/repos/huggingface/transformers/issues/23096/events
https://github.com/huggingface/transformers/issues/23096
1,691,629,691
I_kwDOCUB6oc5k1Dh7
23,096
Dramatic Performance Drop of `CLIPVisionModel`-Based Models After Upgrading `transformers` From `4.27.4` to `4.28.x`
{ "login": "submartingales", "id": 100008553, "node_id": "U_kgDOBfYCaQ", "avatar_url": "https://avatars.githubusercontent.com/u/100008553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/submartingales", "html_url": "https://github.com/submartingales", "followers_url": "https://api.github.com/users/submartingales/followers", "following_url": "https://api.github.com/users/submartingales/following{/other_user}", "gists_url": "https://api.github.com/users/submartingales/gists{/gist_id}", "starred_url": "https://api.github.com/users/submartingales/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/submartingales/subscriptions", "organizations_url": "https://api.github.com/users/submartingales/orgs", "repos_url": "https://api.github.com/users/submartingales/repos", "events_url": "https://api.github.com/users/submartingales/events{/privacy}", "received_events_url": "https://api.github.com/users/submartingales/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I found **this issue may impact tons of vision workloads** and I hope it can be resolved as soon as possible.", "Also cc @younesbelkada ", "Hi @submartingales, thanks for reporting! \r\n\r\nSo that I can pin down the issue, is the input image the same before being passed to the processor? e.g. \r\n\r\n```python\r\nimport torch\r\nfrom transformers import CLIPProcessor, CLIPModel\r\n\r\ntorch.manual_seed(0)\r\n\r\n# Dummy image which is always the same for each version\r\nimage = torch.randint(0, 256, (3, 300, 300))\r\n\r\nprocessor = CLIPProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nmodel = CLIPModel.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\n# Model inputs might change based on if there's a change in processing logic\r\ninputs = processor(text=[\"a photo of a cat\", \"a photo of a dog\"], images=image, return_tensors=\"pt\", padding=True)\r\noutputs = model(**inputs)\r\n```\r\n\r\nOr is are the `pixel_values` exactly the same? \r\n```python\r\nimport torch\r\nfrom transformers import CLIPProcessor, CLIPModel\r\n\r\ntorch.manual_seed(0)\r\n\r\npixel_values = torch.rand(1, 3, 224, 224)\r\ninput_ids = torch.Tensor(\r\n [[49406, 320, 1125, 539, 320, 2368, 49407],\r\n [49406, 320, 1125, 539, 320, 1929, 49407]]\r\n).long()\r\n\r\nprocessor = CLIPProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nmodel = CLIPModel.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\n# The model inputs exactly the same for different library versions\r\ninputs = {\"input_ids\": input_ids, \"pixel_values\": pixel_values}\r\noutputs = model(**inputs)\r\n```\r\n\r\nWith regards to expected behaviour, could you give some more information about what's changed? Specifically what is being measured in terms of performance e.g. is it the clip loss? And how much it has changed? ", "@amyeroberts I will make two notebooks to clarify, please wait for several minutes.", "@amyeroberts Actually, we cannot disclose all resources that are required to run the notebooks for reasons you will definitely know once you have read them. But the performance drop (the last cell's output, the higher the better) are consistent on different platforms and the only variable is the version of `transformers` so at least for now we believe the model's behavior change is caused by the package upgrading.\r\n\r\nThe only difference between two notebooks given in the zip file is the `transformers` 's version, and the checkpoint we have loaded are exactly the same.\r\n[two-version-notebook.zip](https://github.com/huggingface/transformers/files/11377971/two-version-notebook.zip)\r\n", "@amyeroberts In short, every time we try upgrading the `transformers` version for new features, no matter what `torch` version we are using, what platform we are running on, we found our prediction workflow failed. For the specific task solved in the notebooks, another observation I can provide is that once `transformers` has been upgraded to `4.28.1`, the final prediction, say, the output for each input image, when loaded a model with the same weights, the model is possible to generate output with magnitude differences of over a thousand times for each input image and finally result in the performance drop.", "The uploaded two notebooks demonstrate the performance drop considering inference. 
What we are experiencing at the same time, is that during training using `cos` loss which is related to the task in the notebook, the `transformers==4.27.4` powered model converge easily on about $50k$ images but `transformers==4.28.1` based model won't converge on just $1k$ images.\r\n\r\nThe architecture we have chosen is straight forward and if we load `from_pretrained('laion/CLIP-ViT-H-14-laion2B-s32B-b79K')` regardless of the Internet connection restriction on certain platform, the problem still exists.", "@amyeroberts With respect to the output difference, at the $8st$ cell of the two given notebook, we can see that the tensor output for the first sample, is different.\r\n\r\nThe `4.28.1` version gives\r\n```\r\n-0.776564\r\n1.751475\r\n1.938180\r\n0.474142\r\n-0.191921\r\n...\r\n```\r\n\r\nwhile `4.27.4` gives\r\n```\r\n-2.197644\r\n2.167892\r\n-0.369088\r\n-0.928763\r\n-3.423420\r\n...\r\n```", "> @amyeroberts In short, every time we try upgrading the `transformers` version for new features, no matter what `torch` version we are using, what platform we are running on, we found our prediction workflow failed. For the specific task solved in the notebooks, another observation I can provide is that once `transformers` has been upgraded to `4.28.1`, the final prediction, say, the output for each input image, when loaded a model with the same weights, the model is possible to generate output with magnitude differences of over a thousand times for each input image and finally result in the performance drop.\r\n\r\n@amyeroberts The `thousand times` I mean above is related to a similar strategy with another weight checkpoint, which is not presented in [two-version-notebook.zip](https://github.com/huggingface/transformers/files/11377971/two-version-notebook.zip). \r\n\r\nMy coworkers guess that something important related to the overall CLIP workflow has changed between `4.27.4` and `4.28.1`, which has caused some incompatibilities issues.", "@amyeroberts Any progress on this issue? If you can roughly locate the code change related to this issue, I am happy to submit a pull request to fix it.", "Hi @submartingales, thanks for sharing more details and the notebooks. \r\n\r\nI suspect this is related to a change in the cropping behaviour identified in a similar [issue](https://github.com/huggingface/transformers/issues/22505).\r\n\r\nThe fastest way to regain the old behaviour whilst waiting for the fix to be merged would be implementing an image processor which overrides the cropping behaviour e.g. something like this: \r\n\r\n```python\r\nfrom typing import Dict, Optional, Union\r\n\r\nimport numpy as np\r\nfrom transformers import CLIPTokenizer, CLIPImageProcessor, CLIPProcessor\r\nfrom transformers.image_transforms import get_image_size, to_channel_dimension_format\r\nfrom transformers.image_utils import ChannelDimension, get_image_size, infer_channel_dimension_format\r\nfrom transformers.image_processing_utils import get_size_dict\r\n\r\n\r\nclass NewCLIPImageProcessor(CLIPImageProcessor):\r\n def center_crop(\r\n self,\r\n image: np.ndarray,\r\n size: Dict[str, int],\r\n data_format: Optional[Union[str, ChannelDimension]] = None,\r\n **kwargs\r\n ) -> np.ndarray:\r\n size = get_size_dict(size)\r\n if \"height\" not in size or \"width\" not in size:\r\n raise ValueError(f\"The `size` parameter must contain the keys (height, width). 
Got {size.keys()}\")\r\n\r\n image = to_channel_dimension_format(image, ChannelDimension.FIRST)\r\n if data_format is None:\r\n data_format = infer_channel_dimension_format(image)\r\n\r\n image_height, image_width = get_image_size(image)\r\n crop_height, crop_width = size[\"height\"], size[\"width\"]\r\n\r\n crop_top = int((image_height - crop_height + 1) * 0.5)\r\n crop_left = int((image_width - crop_width + 1) * 0.5)\r\n\r\n image = image[:, crop_top : crop_top + crop_height, crop_left : crop_left + crop_width]\r\n image = to_channel_dimension_format(image, data_format)\r\n return image\r\n\r\nimage_processor = NewCLIPImageProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprocessor = CLIPProcessor(image_processor=image_processor, tokenizer=tokenizer)\r\n```\r\n\r\n\r\n", "@amyeroberts Thank you for your code to fix this but we are sorry to inform that after we have updated the `processor` by incorporating your code snippet, the problem still exists and the model's output based on `transformers==4.28.1` does not change to what it shall be.", "@amyeroberts These days we have performed further experiments using `transformers==4.29.2` and such output change persists in `transformers==4.29.2` and the output tensor is `allclose`d to what is outputed by `transformers==4.28.1`.", "@submartingales If you do not share a reproducer of the bug, there is really nothing we can do to help.", "@sgugger Now we make public all resources required to reproduce the bug, in two public notebooks with all related checkpoints loaded in public datasets. Any account can now copy & edit the notebook and reproduce the behavior change with \"pin to original environment\" checked.\r\n+ The `transformers==4.29.2` version whose output is allclosed to `transformers==4.28.x` is given in https://www.kaggle.com/code/qiexifan/huggingface-transformers-4292-versioning-last-500k\r\n+ The `transformers==4.27.4` version is given in https://www.kaggle.com/code/qiexifan/huggingface-transformers-4274-versioning-last-500k", "Hi @submartingales, thanks for sharing the repro. \r\n\r\nI've tracked down the change in the model outputs down to a bug fix in 4.28.x: #22458. \r\n\r\nIn the shared notebooks in the `ImageDataset` class, the images are converted to torch tensors in the `__getitem__` method using `ToTensor()`. `ToTensor()` doesn't just convert the PIL image to a tensor, but also scales the pixel values between 0-1. \r\n\r\nThe image transforms library uses Pillow to resize the images. If the input is an array, then its first converted to a PIL image, and then converted back to an array. To convert an array to a PIL.Image.Image, its pixels must be integer values between [0, 255]. In 4.27.4, if the input had pixel values [0, 1], and we rescale so this conversion happened, the output array wasn't rescaled back down -> the output array had pixel values between [0, 255].\r\n\r\nIf using `ToTensor` then the image processor should have `do_rescale=False` set to prevent the pixel values being divided by `255` twice. This was likely the cause of the degraded performance (as the images in 4.27.4 had their pixel values multiplied by 255 when resizing, nullifying this double divide. ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
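A short sketch of the workaround described in the final substantive comment above: when images are already scaled to [0, 1] (for example by torchvision's `ToTensor`), disable the image processor's own rescaling so pixel values are not divided by 255 twice. The checkpoint name below is illustrative; the report uses a `laion` checkpoint.

```python
from PIL import Image
from torchvision.transforms import ToTensor
from transformers import CLIPImageProcessor

image = Image.new("RGB", (300, 300), color=(128, 64, 32))
tensor = ToTensor()(image)  # float tensor with values already in [0, 1]

# do_rescale=False tells the processor the input is already rescaled.
image_processor = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-base-patch32", do_rescale=False
)
pixel_values = image_processor(tensor, return_tensors="pt").pixel_values
print(pixel_values.shape)  # torch.Size([1, 3, 224, 224])
```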
1,682
1,687
1,687
NONE
null
### System Info @amyeroberts Related versions are ``` transformers==4.27.4 ``` and ``` transformers==4.28.1 ``` ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm sure that a `CLIPVisionModel` loaded with `from_pretrained`, e.g. from the `laion` pretrained CLIP ViT, will output a totally different tensor for exactly the same input image. Using `transformers==4.28.1` leads to a dramatic performance drop for reasons worth digging into. Extensive tests have been conducted to verify that this issue seems unrelated to the torch version (e.g. `2.0` or `1.13`). You can probably reproduce this by loading from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`, which is quite popular. ### Expected behavior The two versions should output almost the same tensor given the same input image.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23096/reactions", "total_count": 5, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/23096/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23095
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23095/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23095/comments
https://api.github.com/repos/huggingface/transformers/issues/23095/events
https://github.com/huggingface/transformers/issues/23095
1,691,567,100
I_kwDOCUB6oc5k00P8
23,095
`torch.compile` is ignored when using DeepSpeed
{ "login": "xplip", "id": 25847814, "node_id": "MDQ6VXNlcjI1ODQ3ODE0", "avatar_url": "https://avatars.githubusercontent.com/u/25847814?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xplip", "html_url": "https://github.com/xplip", "followers_url": "https://api.github.com/users/xplip/followers", "following_url": "https://api.github.com/users/xplip/following{/other_user}", "gists_url": "https://api.github.com/users/xplip/gists{/gist_id}", "starred_url": "https://api.github.com/users/xplip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xplip/subscriptions", "organizations_url": "https://api.github.com/users/xplip/orgs", "repos_url": "https://api.github.com/users/xplip/repos", "events_url": "https://api.github.com/users/xplip/events{/privacy}", "received_events_url": "https://api.github.com/users/xplip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't think DeepSpeed support `torch.compile` yet, so this was done intentionally. If the situation has changed, we can of course revisit.", "What Sylvain said, wrt the 2 not working together.\r\n\r\nBut it's not that Deepspeed doesn't support `torch.compile`, it's rather that `torch.compile` is very immature - somewhere between alpha and beta state based on my experiments - many other things break with `torch.compile` besides Deepspeed.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,682
1,688
1,688
NONE
null
### System Info Since #22279, `torch.compile` is called at the end of `_wrap_model`. However, [these lines](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#LL1397C1-L1398C34) immediately return the DeepSpeed engine, so `torch.compile` is never executed, even when it is requested in the training args. I don't think this is intended, because DeepSpeed does not automatically run `torch.compile`, but please correct me if I am wrong. @stas00 @sgugger ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install transformers and deepspeed with torch 2.0 2. Run any HF transformers training script with `--deepspeed ds_config.json --torch_compile` 3. Check the logs: no `torch.compile` logs are to be found. This means training is slower with ZeRO stage 0 than without using DeepSpeed (due to the lack of speedup from compilation) ### Expected behavior `torch.compile` should still be called somewhere in the trainer when using DeepSpeed
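For context, a hedged sketch of what the `--torch_compile` flag is meant to trigger: wrapping the model with `torch.compile` (PyTorch 2.x) before training. Per the report, with DeepSpeed the trainer returns the engine before reaching that wrapping step, so nothing like the call below runs.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

if hasattr(torch, "compile"):     # available from PyTorch 2.0
    model = torch.compile(model)  # the wrap that is currently skipped with DeepSpeed

output = model(torch.randn(4, 8))
print(output.shape)  # torch.Size([4, 2])
```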
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23095/timeline
completed
null
null