| Field | Type | Values |
|---|---|---|
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
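These fields mirror what the GitHub Issues API returns for each issue or pull request. As a minimal sketch of working with a dataset that follows this schema (the repo id `your-org/transformers-issues` is a hypothetical placeholder, not the actual dataset name):

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the real one.
ds = load_dataset("your-org/transformers-issues", split="train")

# Pull requests carry a non-null `pull_request` dict; plain issues do not.
issues_only = ds.filter(lambda row: row["pull_request"] is None)

print(len(issues_only), issues_only[0]["title"])
```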
https://api.github.com/repos/huggingface/transformers/issues/24370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24370/comments
https://api.github.com/repos/huggingface/transformers/issues/24370/events
https://github.com/huggingface/transformers/issues/24370
1,765,169,267
I_kwDOCUB6oc5pNlhz
24,370
Incorrect information on BlipImageProcessor documentation
{ "login": "LukeBailey181", "id": 55068209, "node_id": "MDQ6VXNlcjU1MDY4MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/55068209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LukeBailey181", "html_url": "https://github.com/LukeBailey181", "followers_url": "https://api.github.com/users/LukeBailey181/followers", "following_url": "https://api.github.com/users/LukeBailey181/following{/other_user}", "gists_url": "https://api.github.com/users/LukeBailey181/gists{/gist_id}", "starred_url": "https://api.github.com/users/LukeBailey181/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LukeBailey181/subscriptions", "organizations_url": "https://api.github.com/users/LukeBailey181/orgs", "repos_url": "https://api.github.com/users/LukeBailey181/repos", "events_url": "https://api.github.com/users/LukeBailey181/events{/privacy}", "received_events_url": "https://api.github.com/users/LukeBailey181/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Transferring to transformers", "Hi @LukeBailey181, thanks for pointing out! \r\n\r\nWould you like to open a PR to update the documentation? This way you would get the git contribution. ", "Wonderful yes I will create the PR!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
NONE
null
**Is your feature request related to a problem? Please describe.**

There is slightly incorrect information in the [BlipImageProcessor documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip#transformers.BlipImageProcessor).

**Describe the solution you'd like**

The documentation currently says that the `image_mean` and `image_std` arguments default to `IMAGENET_STANDARD_MEAN` and `IMAGENET_STANDARD_STD` respectively. Looking at the source code, however, this appears to be incorrect: in the `__init__` of `BlipImageProcessor` found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/image_processing_blip.py#L43), the values default to `OPENAI_CLIP_MEAN` and `OPENAI_CLIP_STD` respectively, as defined [here](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/constants.py). The means are fairly similar, but the standard deviations are quite different, so it may be worth updating the documentation here :)
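A quick way to verify the report (a minimal check added here for illustration, not part of the original issue) is to instantiate the processor with no arguments and compare its normalization constants:

```python
from transformers import BlipImageProcessor
from transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD

# Default-constructed processor, so it uses the library defaults.
processor = BlipImageProcessor()

# If the report is correct, both comparisons print True.
print(processor.image_mean == OPENAI_CLIP_MEAN)
print(processor.image_std == OPENAI_CLIP_STD)
```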
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24095
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24095/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24095/comments
https://api.github.com/repos/huggingface/transformers/issues/24095/events
https://github.com/huggingface/transformers/pull/24095
1,746,772,875
PR_kwDOCUB6oc5SdfDy
24,095
fix get_keys_to_not_convert function
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
# What does this PR do?

Fixes the behavior of the `get_keys_to_not_convert` function for the following cases:

- If the `lm_head` is tied, it is not visible through `named_parameters()`, so the last visible module was being added instead -> use `named_children()` instead (see the sketch below).
- Fix the `tied_params` variable, which should not be cropped (example of what was happening: `[['lm_head.weight', 'model.decoder.embed_tokens.weight']] -> ['model.decoder.embed_tokens.weight']`).
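To illustrate the first point, here is a small standalone sketch (not taken from the PR) showing that a tied `lm_head` weight is hidden from `named_parameters()` while the module is still listed by `named_children()`:

```python
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab=10, dim=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lm_head = nn.Linear(dim, vocab, bias=False)
        self.lm_head.weight = self.embed.weight  # tie the weights

model = TinyLM()

# named_parameters() deduplicates tied tensors, so 'lm_head.weight' is absent:
print([name for name, _ in model.named_parameters()])  # ['embed.weight']

# named_children() still exposes the lm_head module itself:
print([name for name, _ in model.named_children()])    # ['embed', 'lm_head']
```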
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24095/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24095", "html_url": "https://github.com/huggingface/transformers/pull/24095", "diff_url": "https://github.com/huggingface/transformers/pull/24095.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24095.patch", "merged_at": 1686233668000 }
https://api.github.com/repos/huggingface/transformers/issues/24094
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24094/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24094/comments
https://api.github.com/repos/huggingface/transformers/issues/24094/events
https://github.com/huggingface/transformers/pull/24094
1,746,724,437
PR_kwDOCUB6oc5SdUcF
24,094
fix blip2config int8 error to serialize json
{ "login": "Andrechang", "id": 9553458, "node_id": "MDQ6VXNlcjk1NTM0NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9553458?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Andrechang", "html_url": "https://github.com/Andrechang", "followers_url": "https://api.github.com/users/Andrechang/followers", "following_url": "https://api.github.com/users/Andrechang/following{/other_user}", "gists_url": "https://api.github.com/users/Andrechang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Andrechang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Andrechang/subscriptions", "organizations_url": "https://api.github.com/users/Andrechang/orgs", "repos_url": "https://api.github.com/users/Andrechang/repos", "events_url": "https://api.github.com/users/Andrechang/events{/privacy}", "received_events_url": "https://api.github.com/users/Andrechang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24094). All of your documentation changes will be reflected on that endpoint.", "I ran `pip install -e .` on transformer main branch 8b169142f8b5735f25ad81313ee382350161d993\r\nThen ran the code snippet and gave the error: \r\n`TypeError: Object of type BitsAndBytesConfig is not JSON serializable`\r\n\r\nAdding my commit will fix this\r\n\r\nprint screen attached\r\n![install0](https://github.com/huggingface/transformers/assets/9553458/1a95e026-fbfb-4eac-8454-bcdb35b86888)\r\n\r\n![error0](https://github.com/huggingface/transformers/assets/9553458/f77f56ce-341e-4fff-a2c4-fb263539c718)\r\n\r\n\r\n", "Hi @Andrechang \r\nAgain thanks very much for flagging the issue, I realized that you have flagged an issue that is quite important to fix, therefore I made https://github.com/huggingface/transformers/pull/24137 and added you as a co-author, will merge that PR and I will close this one. Feel free to re-open it if you think that the issue is not resolved. Again thanks a lot!", "Thank you for checking and for fix\r\n" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do?

Running the following with transformers==4.30.0.dev0:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip2-opt-2.7b", quantization_config=bnb_config, device_map='auto')
print(model.config)
```

will give the following error:

```
TypeError: Object of type BitsAndBytesConfig is not JSON serializable
```

Solution: convert `BitsAndBytesConfig` to JSON-serializable form in `Blip2Config`.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @NielsRogge
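The PR description stops short of showing the fix itself; as a rough, hypothetical sketch of the kind of change described (the method placement and attribute names are assumptions, not the actual patch), the nested quantization config can be converted to a plain dict before the config is serialized to JSON:

```python
import copy

def to_dict(self):
    # Hypothetical serialization hook: turn a nested BitsAndBytesConfig into
    # a plain dict so that dumping the model config to JSON no longer fails.
    output = copy.deepcopy(self.__dict__)
    quant = output.get("quantization_config")
    if quant is not None and hasattr(quant, "to_dict"):
        output["quantization_config"] = quant.to_dict()
    return output
```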
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24094/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24094", "html_url": "https://github.com/huggingface/transformers/pull/24094", "diff_url": "https://github.com/huggingface/transformers/pull/24094.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24094.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24093
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24093/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24093/comments
https://api.github.com/repos/huggingface/transformers/issues/24093/events
https://github.com/huggingface/transformers/pull/24093
1,746,723,861
PR_kwDOCUB6oc5SdUUU
24,093
wrapped up efficient-memory contrastive search
{ "login": "blbadger", "id": 54602201, "node_id": "MDQ6VXNlcjU0NjAyMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/54602201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blbadger", "html_url": "https://github.com/blbadger", "followers_url": "https://api.github.com/users/blbadger/followers", "following_url": "https://api.github.com/users/blbadger/following{/other_user}", "gists_url": "https://api.github.com/users/blbadger/gists{/gist_id}", "starred_url": "https://api.github.com/users/blbadger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blbadger/subscriptions", "organizations_url": "https://api.github.com/users/blbadger/orgs", "repos_url": "https://api.github.com/users/blbadger/repos", "events_url": "https://api.github.com/users/blbadger/events{/privacy}", "received_events_url": "https://api.github.com/users/blbadger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,686
1,686
1,686
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24093/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24093", "html_url": "https://github.com/huggingface/transformers/pull/24093", "diff_url": "https://github.com/huggingface/transformers/pull/24093.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24093.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24092
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24092/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24092/comments
https://api.github.com/repos/huggingface/transformers/issues/24092/events
https://github.com/huggingface/transformers/pull/24092
1,746,628,682
PR_kwDOCUB6oc5Sc_LV
24,092
[BlenderBotSmall] Update doc example
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do?

BlenderBot small seems to be using `__start__` and `__end__`; updated the doc to reflect that. It was super hard to find the original source, but based on the tokenizer this is what is used.
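One quick way to check this claim (a small sketch, not part of the PR; the checkpoint id `facebook/blenderbot_small-90M` is the commonly used small BlenderBot model) is to load the tokenizer and print its special tokens:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")

# If the PR is right, this prints "__start__ __end__".
print(tok.bos_token, tok.eos_token)
```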
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24092/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24092", "html_url": "https://github.com/huggingface/transformers/pull/24092", "diff_url": "https://github.com/huggingface/transformers/pull/24092.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24092.patch", "merged_at": 1686321118000 }
https://api.github.com/repos/huggingface/transformers/issues/24091
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24091/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24091/comments
https://api.github.com/repos/huggingface/transformers/issues/24091/events
https://github.com/huggingface/transformers/pull/24091
1,746,498,372
PR_kwDOCUB6oc5Scie2
24,091
⚠️ Time to say goodbye to py37
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,687
1,687
COLLABORATOR
null
# What does this PR do?

[Sorry for the spam, GitHub had some issues.] Same as #24075, but that PR got frozen after I force-pushed (after a rebase), and my changes addressing the comments could not appear (Amy has already approved #24075).

----

Bye-bye! EOL of Python 3.7 is `2023/06/27`. https://endoflife.date/python
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24091/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24091", "html_url": "https://github.com/huggingface/transformers/pull/24091", "diff_url": "https://github.com/huggingface/transformers/pull/24091.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24091.patch", "merged_at": 1687929759000 }
https://api.github.com/repos/huggingface/transformers/issues/24090
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24090/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24090/comments
https://api.github.com/repos/huggingface/transformers/issues/24090/events
https://github.com/huggingface/transformers/issues/24090
1,746,393,278
I_kwDOCUB6oc5oF9i-
24,090
Deepspeed hang when tuning redpajama-3b
{ "login": "sei-amellinger", "id": 30724972, "node_id": "MDQ6VXNlcjMwNzI0OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/30724972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sei-amellinger", "html_url": "https://github.com/sei-amellinger", "followers_url": "https://api.github.com/users/sei-amellinger/followers", "following_url": "https://api.github.com/users/sei-amellinger/following{/other_user}", "gists_url": "https://api.github.com/users/sei-amellinger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sei-amellinger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-amellinger/subscriptions", "organizations_url": "https://api.github.com/users/sei-amellinger/orgs", "repos_url": "https://api.github.com/users/sei-amellinger/repos", "events_url": "https://api.github.com/users/sei-amellinger/events{/privacy}", "received_events_url": "https://api.github.com/users/sei-amellinger/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @stas00 ", "Hello,\r\n\r\n```\r\naccelerator = Accelerator()\r\n```\r\nThis shouldn't be there as Trainer creates it internally now.\r\n\r\nWithout any deepspeed config, I'm confused why you are trying to use deepspeed launcher via `deepspeed tune.py`? \r\n\r\n", "Please refer here https://huggingface.co/docs/transformers/main_classes/deepspeed for properly using deepspeed as well as this PR #23236 in case you want to use accelerate launcher for the same.", "I've been trying to follow the docs in the link above. That's why I added the issue last week about the tuning not working as described in the page. This week I am trying to get my own code working since I know that deepspeed by itself works.\r\n\r\nEven with using the config from the test area \"tests/deepspeed/ds_config_zero3.json\" I get the same deadlock. If one doesn't use the config, what are all the defaults btw? I found it hard to get a clear understanding for comparison of the all the defaults.\r\n\r\nSo I guess one of my assumptions is that even if I don't take advantage of all the features and get better acceleration, deepspeed should just \"work\" without deadlocking? If I do the same command \"deepspeed tune.py\" on one gpu it completes fine. Is that not true? My plan is to get a working baseline then tweak from there.\r\n", "Okay, after reading much further down beyond the working with multiple gpu examples I have found the \"Shared Configuration\" section with statements like: \"be very careful that your the [Trainer](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.Trainer) arguments and DeepSpeed configurations agree. For example, are you using the same learning rate, or batch size, or gradient accumulation settings? if these mismatch the training may fail in very difficult to detect ways. You have been warned.\"\r\n\r\nI will dig through the configuration files and try to debug these issues.\r\n\r\nAny guidance on how to detect this mismatches? Is there any way to print/compare all the configs? Any help is appreciated.", "Okay, now that I look at the example more closely (run_translation.py) I see that it does all the handling of args, etc. Is there a tutorial on writing a deepspeed enabled script?\r\n\r\nAnyway, I am now int the process of passing in the deepspeed myself to my trainer using the argument properly, I still have a hang but elsewhere. 
It still hangs.\r\n\r\nThe stack trace is attached.\r\nThe deepspeed config is attached.\r\nThe training_args are attached.\r\n\r\nAs far as I can tell these values should work together, but I have some trouble reconciling things like \"linear\" and \"WarmupLR\".\r\n\r\nThoughts on debugging approaches/options?\r\n[ds_config_zero3.txt](https://github.com/huggingface/transformers/files/11710676/ds_config_zero3.txt)\r\n[stack_trace.txt](https://github.com/huggingface/transformers/files/11710677/stack_trace.txt)\r\n[training_args.txt](https://github.com/huggingface/transformers/files/11710678/training_args.txt)\r\n", "Any thoughts on this?", "Code: Note that I have removed few lines as they were incorrect/unnecessary\r\n\r\n```diff\r\nimport transformers\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom transformers import DataCollatorForLanguageModeling\r\nfrom transformers import AutoModelForCausalLM, TrainingArguments, Trainer\r\nfrom datasets import load_dataset\r\n\r\n- from accelerate import Accelerator\r\n\r\nMIN_TRANSFORMERS_VERSION = '4.25.1'\r\n\r\nassert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'\r\n\r\n- accelerator = Accelerator()\r\n\r\n# ==============================================================================\r\n# DDP: Usually we use NCCL, so set that.\r\n# Maybe need to use: NCCL_P2P_DISABLE=1\r\ntraining_args = TrainingArguments(\r\n output_dir=\"redpajama-tuning-test\",\r\n #evaluation_strategy=\"epoch\",\r\n learning_rate=2e-5,\r\n weight_decay=0.01,\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n #log_level=\"debug\",\r\n report_to=\"none\",\r\n ddp_backend=\"nccl\",\r\n ddp_timeout=60,\r\n push_to_hub=False,\r\n+ deepspeed=\"ds_config_zero3.json\"\r\n)\r\n\r\n# =============================================================================\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"togethercomputer/RedPajama-INCITE-Base-3B-v1\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"togethercomputer/RedPajama-INCITE-Base-3B-v1\")\r\nmodel.train()\r\n- model = model.half()\r\n- model = model.cuda()\r\n\r\n\r\n# =============================================================================\r\n\r\ntokenizer.model_max_length=512\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\neli5 = load_dataset(\"eli5\", split=\"train_asks[:5000]\")\r\neli5 = eli5.train_test_split(test_size=0.2)\r\neli5 = eli5.flatten()\r\n\r\ndef preprocess_function(examples):\r\n return tokenizer([\" \".join(x) for x in examples[\"answers.text\"]])\r\n\r\nwith training_args.main_process_first(desc=\"tokenizing\"):\r\n tokenized_eli5 = eli5.map(\r\n preprocess_function,\r\n batched=True,\r\n num_proc=4,\r\n remove_columns=eli5[\"train\"].column_names\r\n )\r\n\r\nblock_size = 512\r\ndef group_texts(examples):\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n if total_length >= block_size:\r\n total_length = (total_length // block_size) * block_size\r\n result = {\r\n k: [t[i : i + block_size] for i in range(0, total_length, block_size)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n result[\"labels\"] = result[\"input_ids\"].copy()\r\n return result\r\n\r\nwith training_args.main_process_first(desc=\"grouping\"):\r\n lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)\r\n\r\n# 
=================================================================================\r\ndata_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=lm_dataset[\"train\"],\r\n eval_dataset=lm_dataset[\"test\"],\r\n tokenizer=tokenizer,\r\n data_collator=data_collator\r\n)\r\ntrainer.train()\r\n```\r\n\r\nds config:\r\n```\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\"\r\n }\r\n },\r\n\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\"\r\n }\r\n },\r\n\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"offload_param\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n },\r\n\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 2000,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n\r\n```\r\n\r\ncommand:\r\n```\r\nCUDA_VISIBLE_DEVICES=2,3 deepspeed issue_24090.py\r\n```\r\n\r\nOutput logs:\r\n```\r\n[2023-06-22 10:30:06,594] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-06-22 10:30:08,119] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\nDetected CUDA_VISIBLE_DEVICES=2,3: setting --include=localhost:2,3\r\n[2023-06-22 10:30:08,159] [INFO] [runner.py:555:main] cmd = /home/sourab/miniconda3/envs/ml/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None issue_24090.py\r\n[2023-06-22 10:30:09,458] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-06-22 10:30:10,887] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [2, 3]}\r\n[2023-06-22 10:30:10,887] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0\r\n[2023-06-22 10:30:10,887] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})\r\n[2023-06-22 10:30:10,887] [INFO] [launch.py:163:main] dist_world_size=2\r\n[2023-06-22 10:30:10,887] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=2,3\r\n[2023-06-22 10:30:13,111] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-06-22 10:30:13,133] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n\r\n===================================BUG 
REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please run\r\n\r\npython -m bitsandbytes\r\n\r\n and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nbin /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so\r\n/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0'), PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.\r\nEither way, this might cause trouble in the future:\r\nIf you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.\r\n warn(msg)\r\nCUDA SETUP: CUDA runtime path found: /home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.0\r\nCUDA SETUP: Detected CUDA version 118\r\nCUDA SETUP: Loading binary /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please run\r\n\r\npython -m bitsandbytes\r\n\r\n and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nbin /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so\r\n/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0'), PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so')}.. 
We'll flip a coin and try one of these, in order to fail forward.\r\nEither way, this might cause trouble in the future:\r\nIf you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.\r\n warn(msg)\r\nCUDA SETUP: CUDA runtime path found: /home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.0\r\nCUDA SETUP: Detected CUDA version 118\r\nCUDA SETUP: Loading binary /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...\r\n[2023-06-22 10:30:14,921] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-06-22 10:30:14,921] [INFO] [comm.py:594:init_distributed] cdb=None\r\n[2023-06-22 10:30:14,921] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n[2023-06-22 10:30:14,932] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-06-22 10:30:14,932] [INFO] [comm.py:594:init_distributed] cdb=None\r\n[2023-06-22 10:30:23,033] [INFO] [partition_parameters.py:453:__exit__] finished initializing model with 2.78B parameters\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (661 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (1066 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (557 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (560 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/1000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (990 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (814 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (723 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|███████████████ | 250/1000 [00:00<00:00, 1105.17 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (661 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (550 > 512). 
Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (632 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (3324 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|██████████████▊ | 1000/4000 [00:00<00:02, 1384.05 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (631 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (556 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (845 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (866 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (816 > 512). Running this sequence through the model will result in indexing errors\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... \r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.314084529876709 seconds\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/utils/build.ninja...\r\nBuilding extension module utils...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module utils...\r\nTime to load utils op: 0.05933046340942383 seconds\r\nParameter Offload: Total persistent parameters: 1070080 in 258 params\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.3131399154663086 seconds\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/utils/build.ninja...\r\nBuilding extension module utils...\r\nAllowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module utils...\r\nTime to load utils op: 0.05986285209655762 seconds\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.0002925395965576172 seconds\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.00027251243591308594 seconds\r\n 0%| | 0/837 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:1209: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:1209: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n 1%|█▏ | 12/837 [01:05<1:14:36, 5.43s/it]\r\n```\r\n\r\nversions:\r\n```\r\n- `Accelerate` version: 0.21.0.dev0 -> install from main branch\r\n- `Transformers` version: 4.31.0.dev0 -> install from main branch\r\n- DeepSpeed version: 0.9.4\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31\r\n- Python version: 3.11.3\r\n- Numpy version: 1.24.3\r\n- PyTorch version (GPU?): 2.1.0.dev20230620 (True)\r\n- PyTorch XPU available: False\r\n- System RAM: 503.55 GB\r\n- GPU type: NVIDIA A100-SXM4-80GB\r\n```\r\n\r\nSummary:\r\nThe code runs and no deadlocks or hangs. However, the warnings related to the data prep during tokenization seem concerning (not relevant to DeepSpeed/trainer)\r\n", "Thanks for looking in this!\r\n\r\nI made the changes and tried it again.\r\n\r\nIf I take out the model.half() and model.cuda() it won't fit on my 2x 40G A100's. I tried turning on fp16 via deepspeed by setting it to true and setting `fp16=true` in my TrainingConfigs. 
It starts up, but still hangs at the very end of the pass.\r\n\r\n```\r\n100%|██| 417/417 [15:45<00:00, 2.21s/it]\r\n```\r\n\r\nWhen I use pyspy again it is still hung up on \"to\" lines and down in gpt_neo.\r\n\r\nCan you run it with fp16 turned on?\r\n\r\nThanks again!\r\n\r\nEDIT: Missing \"hangs\" at the end of the run.", "I am trying to come up with a similar version using run_clm and I can't get it to fit into memory on my gpu at all. When I turn on fp16 and model.cuda() (even without deepspeed) I can't get it to run right.\r\n\r\nI am taking this code from a notebook (which runs fine on an a100) and trying to adapt it to run on the huggingface and deepspeed infrastructe. This seems surprising hard. The script I have above doesn't do that much at all. Why is it so hard to turn into run_clm? Is there a tutorial for this sort of thing?", "Hello, what is the issue you are facing and please provide a minimal example for deep dive. The above script works with deepSpeed as I have mentioned with all the steps in detail in previous message", "also could you try running with gradient checkpointing enabled so that you could fit very long sequences while training CLM models with large models. Add `--gradient_checkpointing` to the command you run", "> Hello, what is the issue you are facing and please provide a minimal example for deep dive. The above script works with deepSpeed as I have mentioned with all the steps in detail in previous message\r\n\r\nAs I mentioned before I only have 40G A100's whereas you have 80's so removing the model.half() makes it OOM for me. I tried turning on fp16 at trainer arg and the deepspeed settings but it still hangs there. What happens when you turn on fp16?\r\n\r\nFor me I have a very clear reproducible case with the fp16 parts turned on. I've attached updated versions (take off the .txt) of the ds config and the script to run with \"deepspeed tune-broken.py\" that should work.\r\n\r\n[tune-broken.py.txt](https://github.com/huggingface/transformers/files/11904199/tune-broken.py.txt)\r\n[ds_config_zero3.json.txt](https://github.com/huggingface/transformers/files/11904206/ds_config_zero3.json.txt)\r\n\r\n> also could you try running with gradient checkpointing enabled so that you could fit very long sequences while training CLM models with large models. Add `--gradient_checkpointing` to the command you run\r\n\r\nI'll also try the gradient checkpointing as I am down to sequence len of 400.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, have you had a chance to reproduce with fp16?", "Hello, changed the code to use `fp16` by adding the below line:\r\n```diff\r\ntraining_args = TrainingArguments(\r\n output_dir=\"redpajama-tuning-test\",\r\n #evaluation_strategy=\"epoch\",\r\n learning_rate=2e-5,\r\n weight_decay=0.01,\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n #log_level=\"debug\",\r\n report_to=\"none\",\r\n ddp_backend=\"nccl\",\r\n ddp_timeout=60,\r\n push_to_hub=False,\r\n+ deepspeed=\"ds_config_zero3.json\",\r\n+ fp16=True,\r\n)\r\n```\r\n\r\nOutput:\r\n```\r\n[2023-07-24 19:54:43,521] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\nThe following values were not passed to `accelerate launch` and had defaults used instead:\r\n\t\tMore than one GPU was found, enabling multi-GPU training.\r\n\t\tIf this was unintended please pass in `--num_processes=1`.\r\n\t`--num_machines` was set to a value of `1`\r\n\t`--mixed_precision` was set to a value of `'no'`\r\n\t`--dynamo_backend` was set to a value of `'no'`\r\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\n[2023-07-24 19:54:48,144] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-07-24 19:54:48,151] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-07-24 19:54:49,459] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-07-24 19:54:49,459] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-07-24 19:54:49,459] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-07-24 19:54:49,459] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-07-24 19:54:49,459] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n[2023-07-24 19:54:59,209] [INFO] [partition_parameters.py:326:__exit__] finished initializing model with 2.78B parameters\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████| 18.2k/18.2k [00:00<00:00, 65.8MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████| 6.36k/6.36k [00:00<00:00, 38.6MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████| 15.8k/15.8k [00:00<00:00, 69.3MB/s]\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (803 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (743 > 512). 
Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (923 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (1463 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/1000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (770 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (608 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (810 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (516 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (544 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (833 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (596 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|██████████████▊ | 1000/4000 [00:00<00:02, 1432.28 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (743 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|███████████████ | 1000/4000 [00:01<00:03, 925.60 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (616 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (551 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (774 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|███████████████ | 250/1000 [00:00<00:00, 1145.15 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (617 > 512). 
Running this sequence through the model will result in indexing errors\r\n[2023-07-24 19:55:10,594] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs \r\n[2023-07-24 19:55:10,604] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.2673165798187256 seconds\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.3539650440216064 seconds\r\nParameter Offload: Total persistent parameters: 1070080 in 258 params\r\n 0%| | 0/840 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n 1%|▋ | 6/840 [01:27<3:25:40, 14.80s/it]\r\n```\r\n\r\nAlso, the memory usage is `21489MiB` (21GB vram) on both the GPUs", "Can you post the final results please?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Can we reopen this since I didn't actually get an answer?", "Did you try the `training_args` provided by @pacman100 ?", "I have! I have used all the configs as reported in the issue and I still get the problem. However the problem I get is a deadlock near the evaluation time, or near the end of training epoch. So, not when it starts up. As with @pacman100 I can *start* the training, it just won't complete. So I am trying to get us to the same reproducible case, meaning we are both using half precision (because I only have 40G cards, not the 80G ones you folks have) and it runs all the way to the end.", "I'll let @pacman100 answer and the results he got", "Hello,\r\n\r\ntraining for hours isn't feasible and it won't be a minimal reproducer if it takes hours. 
For that reason, changed the following in the above code snippet:\r\n```diff\r\n...\r\n- eli5 = load_dataset(\"eli5\", split=\"train_asks[:5000]\")\r\n+ eli5 = load_dataset(\"eli5\", split=\"train_asks[:100]\")\r\n``` \r\n\r\nrunning the above code:\r\n```\r\nCUDA_VISIBLE_DEVICES=0,1 deepspeed issue_24090.py\r\n```\r\nOutput logs:\r\n```\r\n[2023-08-29 13:14:58,392] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-29 13:15:00,207] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\nDetected CUDA_VISIBLE_DEVICES=0,1: setting --include=localhost:0,1\r\n[2023-08-29 13:15:00,207] [INFO] [runner.py:555:main] cmd = /home/sourab/miniconda3/envs/hf/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None issue_24090.py\r\n[2023-08-29 13:15:02,294] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-29 13:15:04,084] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0, 1]}\r\n[2023-08-29 13:15:04,084] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0\r\n[2023-08-29 13:15:04,084] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})\r\n[2023-08-29 13:15:04,084] [INFO] [launch.py:163:main] dist_world_size=2\r\n[2023-08-29 13:15:04,084] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1\r\n[2023-08-29 13:15:07,256] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-29 13:15:07,301] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-29 13:15:08,834] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-08-29 13:15:08,834] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-08-29 13:15:08,860] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-08-29 13:15:08,860] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-08-29 13:15:08,860] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n[2023-08-29 13:15:17,771] [INFO] [partition_parameters.py:326:__exit__] finished initializing model with 2.78B parameters\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nMap (num_proc=4): 0%| | 0/80 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (790 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (894 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (521 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (602 > 512). 
Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/20 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (1481 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (815 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (863 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/80 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (624 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (987 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (1481 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (579 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (662 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (698 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (1956 > 512). Running this sequence through the model will result in indexing errors\r\n/home/sourab/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n/home/sourab/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n[2023-08-29 13:15:23,022] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs\r\n[2023-08-29 13:15:23,593] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.2999038696289062 seconds\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.302565336227417 seconds\r\nParameter Offload: Total persistent parameters: 1070080 in 258 params\r\n 0%| | 0/18 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n 11%|██████████ | 2/18 [00:09<01:16, 4.78s/it]/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1252: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1252: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n{'train_runtime': 80.1289, 'train_samples_per_second': 1.797, 'train_steps_per_second': 0.225, 'train_loss': 2.0953504774305554, 'epoch': 3.0}\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████| 18/18 [01:20<00:00, 4.45s/it]\r\n[2023-08-29 13:17:03,208] [INFO] [launch.py:347:main] Process 3032445 exits successfully.\r\n[2023-08-29 13:17:04,210] [INFO] [launch.py:347:main] Process 3032446 exits successfully.\r\n```\r\n\r\nTherefore, working fine and as expected.", "Okay. It \"works\" for me at the reduce data set, but I can still reproduce with the larger set. But that sort of thing is usually tricky with deadlocks, where they take a certain set of conditions.\r\n\r\nIt is cool if you don't want to spend more time on this. I know it is hard to repro and very time consuming to do so. It took me many days to even whittle it down to the error conditions in the first place. It fails for me at \" | 1399/4221 [1:07:10<1:52:58, 2.40s/it\". So yeah, that is a lot of investment or you guys for something that is probably a library mismatch.\r\n\r\nHowever, since nothing was actually fixed/chjang, can you change the label to something like \"Not reproducible\" to more accurately reflect the conditions? Thanks! 
:)\r\n\r\n-Andrew", "Hello,\r\n\r\nI've run it as per the original reproducer by reverting the change which was reducing the dataset. Below is the output:\r\n```\r\n[2023-08-30 08:41:49,039] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-30 08:41:51,036] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\nDetected CUDA_VISIBLE_DEVICES=0,1,2,3 but ignoring it because one or several of --include/--exclude/--num_gpus/--num_nodes cl args were used. If you want to use CUDA_VISIBLE_DEVICES don't pass any of these arguments to deepspeed.\r\n[2023-08-30 08:41:51,036] [INFO] [runner.py:555:main] cmd = /home/sourab/miniconda3/envs/hf/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None issue_24090.py\r\n[2023-08-30 08:41:53,175] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-30 08:41:55,047] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [2, 3]}\r\n[2023-08-30 08:41:55,047] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0\r\n[2023-08-30 08:41:55,047] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})\r\n[2023-08-30 08:41:55,047] [INFO] [launch.py:163:main] dist_world_size=2\r\n[2023-08-30 08:41:55,047] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=2,3\r\n[2023-08-30 08:41:58,817] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-30 08:41:59,454] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-08-30 08:42:00,569] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-08-30 08:42:00,569] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-08-30 08:42:01,064] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented\r\n[2023-08-30 08:42:01,064] [INFO] [comm.py:616:init_distributed] cdb=None\r\n[2023-08-30 08:42:01,064] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n[2023-08-30 08:42:10,382] [INFO] [partition_parameters.py:326:__exit__] finished initializing model with 2.78B parameters\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nFound cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa)\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (771 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (540 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (1258 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (608 > 512). 
Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/1000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (672 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|███████████████ | 250/1000 [00:00<00:00, 1199.38 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (844 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (686 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (2027 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (1673 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (540 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (528 > 512). Running this sequence through the model will result in indexing errors\r\nMap (num_proc=4): 25%|██████████████▊ | 1000/4000 [00:00<00:02, 1302.93 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (738 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (781 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (595 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (754 > 512). Running this sequence through the model will result in indexing errors\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (637 > 512). Running this sequence through the model will result in indexing errors\r\n/home/sourab/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\nMap (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s][2023-08-30 08:42:18,448] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs\r\n/home/sourab/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. 
Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n[2023-08-30 08:42:21,043] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.4363255500793457 seconds\r\nParameter Offload: Total persistent parameters: 1070080 in 258 params\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 2.344202756881714 seconds\r\n 0%| | 0/840 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n 0%|▏ | 2/840 [00:13<1:30:53, 6.51s/it]/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1252: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1252: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)\r\n total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])\r\n{'loss': 2.3251, 'learning_rate': 2e-05, 'epoch': 1.79} \r\n 60%|████████████████████████████████████████████████████▍ | 500/840 [41:13<26:30, 4.68s/it]/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n{'train_runtime': 4206.7032, 'train_samples_per_second': 1.596, 'train_steps_per_second': 0.2, 'train_loss': 1.8848077683221727, 'epoch': 3.0}\r\n100%|██████████████████████████████████████████████████████████████████████████████████████| 840/840 [1:10:06<00:00, 5.01s/it]\r\n[2023-08-30 09:52:48,824] [INFO] [launch.py:347:main] Process 4052259 exits successfully.\r\n[2023-08-30 09:52:49,825] [INFO] [launch.py:347:main] Process 4052260 exits successfully.\r\n```\r\n\r\nTherefore, working fine and as expected.", "I am glad it works for you. :)" ]
1,686
1,693
1,693
NONE
null
### System Info transformers-cli says: ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ds_report says: ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> root@etc-gpu-12:/workspace# ds_report [2023-06-07 16:33:38,748] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-dev package with apt [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] cpu_adagrad ............ [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0 [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] utils .................. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch'] torch version .................... 2.0.1+cu117 deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed'] deepspeed info ................... 0.9.4+f2f5f21b, f2f5f21b, master torch cuda version ............... 11.7 torch hip version ................ None nvcc version ..................... 11.6 deepspeed wheel compiled w. ...... 
torch 1.12, cuda 11.6 ``` I am running this using a docker image from this dockerfile: ``` # https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-04.html#rel_22-04 FROM nvcr.io/nvidia/pytorch:22.04-py3 # https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-01.html # This one OOM's on the tune-broken case # FROM nvcr.io/nvidia/pytorch:23.01-py3 RUN git clone https://github.com/huggingface/transformers.git RUN pip install transformers/. RUN pip install git+https://github.com/huggingface/accelerate.git # RUN git clone https://github.com/huggingface/accelerate.git # RUN pip install accelerate/. RUN pip install git+https://github.com/microsoft/DeepSpeed.git # RUN git clone https://github.com/microsoft/DeepSpeed.git # RUN pip install deepspeed/. RUN pip install git+https://github.com/huggingface/peft.git RUN pip install datasets evaluate loralib --upgrade --quiet RUN pip install bitsandbytes rouge-score tensorboard py7zr einops py-spy RUN pip install jupyter RUN pip uninstall -y apex RUN pip uninstall -y apex # This is so we can run the translation test RUN pip install -r transformers/examples/pytorch/translation/requirements.txt ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Summary: - I kick off the training script using deepspeed and NO configuration and it fails. I've also tried with `ds_config_zero3.json` from the test directory and it fails too. My script "tune.py": ``` #! /usr/bin/env python3 import transformers from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import DataCollatorForLanguageModeling from transformers import AutoModelForCausalLM, TrainingArguments, Trainer from datasets import load_dataset from accelerate import Accelerator MIN_TRANSFORMERS_VERSION = '4.25.1' assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' accelerator = Accelerator() # ============================================================================== # DDP: Usually we use NCCL, so set that. 
# Maybe need to use: NCCL_P2P_DISABLE=1 training_args = TrainingArguments( output_dir="redpajama-tuning-test", #evaluation_strategy="epoch", learning_rate=2e-5, weight_decay=0.01, per_device_train_batch_size=4, per_device_eval_batch_size=4, #log_level="debug", report_to="none", ddp_backend="nccl", ddp_timeout=60, push_to_hub=False ) # ============================================================================= tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model.train() model = model.half() model = model.cuda() # ============================================================================= tokenizer.model_max_length=512 tokenizer.pad_token = tokenizer.eos_token eli5 = load_dataset("eli5", split="train_asks[:5000]") eli5 = eli5.train_test_split(test_size=0.2) eli5 = eli5.flatten() def preprocess_function(examples): return tokenizer([" ".join(x) for x in examples["answers.text"]]) with training_args.main_process_first(desc="tokenizing"): tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5["train"].column_names ) block_size = 512 def group_texts(examples): concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) if total_length >= block_size: total_length = (total_length // block_size) * block_size result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result with training_args.main_process_first(desc="grouping"): lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) # ================================================================================= data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset["train"], eval_dataset=lm_dataset["test"], tokenizer=tokenizer, data_collator=data_collator ) trainer.train() ``` I run it with: ``` deepspeed tune.py ``` When it deadlock (pretty reproducibly, sometimes it completes) I use py-spy to get the stack traces. 
``` root@etc-gpu-12:/workspace# py-spy dump --pid 4282 Process 4282: /opt/conda/bin/python3.8 -u tune-broken.py --local_rank=0 Python v3.8.13 (/opt/conda/bin/python3.8) Thread 4282 (active+gil): "MainThread" store_flos (transformers/trainer.py:2938) _inner_training_loop (transformers/trainer.py:2059) train (transformers/trainer.py:1643) <module> (tune-broken.py:89) Thread 4754 (idle): "Thread-4" wait (threading.py:306) wait (threading.py:558) run (tqdm/_monitor.py:60) _bootstrap_inner (threading.py:932) _bootstrap (threading.py:890) Thread 4964 (idle) Thread 4965 (idle) root@etc-gpu-12:/workspace# py-spy dump --pid 4283 Process 4283: /opt/conda/bin/python3.8 -u tune-broken.py --local_rank=1 Python v3.8.13 (/opt/conda/bin/python3.8) Thread 4283 (active): "MainThread" forward (transformers/models/gpt_neox/modeling_gpt_neox.py:278) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:149) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:331) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:564) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:673) _call_impl (torch/nn/modules/module.py:1501) _run_ddp_forward (torch/nn/parallel/distributed.py:1110) forward (torch/nn/parallel/distributed.py:1156) _call_impl (torch/nn/modules/module.py:1501) compute_loss (transformers/trainer.py:2763) training_step (transformers/trainer.py:2738) _inner_training_loop (transformers/trainer.py:1928) train (transformers/trainer.py:1643) <module> (tune-broken.py:89) Thread 4824 (idle): "Thread-4" wait (threading.py:306) wait (threading.py:558) run (tqdm/_monitor.py:60) _bootstrap_inner (threading.py:932) _bootstrap (threading.py:890) Thread 4966 (idle) Thread 4967 (idle) ``` ### Expected behavior That it shouldn't deadlock.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24090/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24089
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24089/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24089/comments
https://api.github.com/repos/huggingface/transformers/issues/24089/events
https://github.com/huggingface/transformers/pull/24089
1,746,376,858
PR_kwDOCUB6oc5ScH00
24,089
Up pinned accelerate version
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Let's wait for the patch and pin 0.20.1", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Increases the pinned accelerate version, and also lets the `is_accelerate_available` check look for a specific version, since now we care much more about whether `PartialState` is available. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger re-opened again so tests can pass and we can merge 😅
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24089/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24089", "html_url": "https://github.com/huggingface/transformers/pull/24089", "diff_url": "https://github.com/huggingface/transformers/pull/24089.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24089.patch", "merged_at": 1686169311000 }
https://api.github.com/repos/huggingface/transformers/issues/24088
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24088/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24088/comments
https://api.github.com/repos/huggingface/transformers/issues/24088/events
https://github.com/huggingface/transformers/pull/24088
1,746,375,500
PR_kwDOCUB6oc5ScHiJ
24,088
Do not prepare lr scheduler as it has the right number of steps
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Right now the LR scheduler is prepared by Accelerate, but it already has a number of steps that accounts for the number of processes. This results in the LR scheduler being stepped through num_processes times too fast. This PR thus removes the lr_scheduler from the prepared objects. Should fix #23986
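To make the off-by-a-factor effect concrete, here is a tiny self-contained sketch (plain Python arithmetic, not Trainer/Accelerate code; the step counts and learning rate are invented) of why stepping a schedule that is already sized for the global number of updates once per process exhausts it num_processes times too fast:

```python
# Illustrative only: a linear-decay schedule built for the *global* number of updates.
total_update_steps = 100          # scheduler already sized for all processes together
num_processes = 2                 # e.g. two GPUs

def lr_at(step, base_lr=2e-5):
    # simple linear decay to zero over total_update_steps
    return base_lr * max(0.0, 1.0 - step / total_update_steps)

# If every process steps the scheduler on each optimizer update,
# the schedule advances num_processes times per real update:
scheduler_steps_after_50_updates = 50 * num_processes
print(lr_at(scheduler_steps_after_50_updates))  # 0.0 -> LR fully decayed only halfway through training
```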
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24088/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24088", "html_url": "https://github.com/huggingface/transformers/pull/24088", "diff_url": "https://github.com/huggingface/transformers/pull/24088.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24088.patch", "merged_at": 1686166293000 }
https://api.github.com/repos/huggingface/transformers/issues/24087
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24087/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24087/comments
https://api.github.com/repos/huggingface/transformers/issues/24087/events
https://github.com/huggingface/transformers/pull/24087
1,746,360,616
PR_kwDOCUB6oc5ScERt
24,087
[Not to merge before 2023/06/28] Time to say goodbye to py37
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24087). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Same as #24075, but that PR got frozen after I force-pushed (after rebase), and my changes to address the comments did not show up. ---- Byebye! EOL of Python 3.7 is `2023/06/27`. https://endoflife.date/python
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24087/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24087", "html_url": "https://github.com/huggingface/transformers/pull/24087", "diff_url": "https://github.com/huggingface/transformers/pull/24087.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24087.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24086
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24086/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24086/comments
https://api.github.com/repos/huggingface/transformers/issues/24086/events
https://github.com/huggingface/transformers/pull/24086
1,746,295,084
PR_kwDOCUB6oc5Sb137
24,086
Add bark
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sanchit-gandhi ", "PR supersedes #23375", "Hi @sanchit-gandhi , I think it's finally time for the final review! You might want to check the refactoring of `generate_text_semantic`, `generate_coarse`, `generate_fine`, but otherwise, sounds good!", "Hi @amyeroberts, the PR is ready for review ! I'd be delighted to get your feedback on this when you have a chance. Let me know if I can help with anything!", "@ylacombe Great! Could you resolve the conflicts? Once that's done I'll review 👍 ", "Hi @amyeroberts, as demanded, I resolved the merge conflicts! I've also updated the `speaker_embeddings` processing in `BarkProcessor`. Could you take a look when you have time ? \r\nThanks! ", "Before I start a full review of this model, could you explain why the model was written in this structure - with no `forward` method and no task specific model e.g. `BarkForTextToSpeech`? ", "> Before I start a full review of this model, could you explain why the model was written in this structure - with no `forward` method and no task specific model e.g. `BarkForTextToSpeech`?\r\n\r\nOf course! \r\n\r\nYou can't really `forward` through `BarkModel` because it uses the `generate` methods of its sub-models, each one with its one `GenerationConfig`.\r\n\r\nTo be a little bit more specific, `BarkModel` is a nested model composed of 4 sub-models.\r\n\r\nThe first 3 submodels follow a classic `transformer` architecture - hence the existence of a `forward` method for these submodels. However, when used by `BarkModel` in `generate_speech`, they are used in a non-traditional way (sliding windows, addition of input vectors alongside dimensions) and directly with their `generate` methods, model by model.\r\n\r\nTo be more in line with `transformers` paradigm, we decided to keep the classical architecture for the 3 sub-models (the fourth, [`Encodec`](https://huggingface.co/docs/transformers/main/model_doc/encodec), being already implemented) and to provide a `generate_speech` method for the final model, with nested configs and nested generation configs.\r\n\r\nUsing `forward` would have meant a much messier and probably slightly more confusing path, since it's not actually a matter of `forwarding` a list of tokens to generate new audio tokens one by one, but of generating the audio all at once!\r\n\r\n@sanchit-gandhi, I might have missed some arguments here, feel free to contribute!\r\n\r\n@amyeroberts, let me know if this answers your question!", "Regarding the last part of your question, the `BarkModel` architecture can't really be used for anything other than this specific task, so it's a kind of single-purpose model.\r\n\r\nWe considered adding task-specific sub-models, since the two first sub-models are GPT2-like auto-regressive models, but we decided not to move forward, for multiple reasons imo:\r\n\r\n1. The first two sub-models, `BarkSemanticModel` and `BarkCoarseModel` could have had task-specific sub-classes, but I think this complicates both the code and users' understanding of the model architecture. What's more, although their architecture is general, it's only used here with an `lm_head`, and I can't think of any other use for them.\r\n2. The third sub-model, `BarkFineModel`, needs [multiple embeddings layers and lm_heads](https://github.com/ylacombe/transformers/blob/33081dc7e3650ff07f2c766c3aa69bc7b6c82351/src/transformers/models/bark/modeling_bark.py#L968C2-L985), one per codebook channels of `Encodec`. 
So it's a non-regular type of task.\r\n\r\n", "In reality, the Bark TTS model is just three auto-regressive models concatenated together.\r\n\r\nTo generate with the Bark TTS model, you first have to generate **all** the ids for the first model. You then forward **all** of these ids to the coarse model to generate a new set of ids. You subsequently forward **all** of these generated ids to the third model to get your audio code outputs. So we can't just define one `forward` call and then auto-regressively generate with it (this would only get you one set of ids, not the three stages that you need).\r\n\r\n-> for this reason it doesn't really make sense to have a `forward` call for the `BarkModel`, since the model is just a placeholder to hold the three submodes together, and pipe the generated outputs from one model into the next\r\n\r\nRegarding why this the model isn't called `BarkForTextToSpeech`, it's the same argument as VITS: https://github.com/huggingface/transformers/pull/24085#discussion_r1252222434\r\n\r\nHappy to rename to `BarkForTextToSpeech` if you feel that this gives a more unified API between models, but Bark can **only** do text-to-speech, so this part of the name is somewhat redundant", "@sanchit-gandhi @ylacombe Thanks for the detailed explanations! ", "Hi @amyeroberts, you're welcome!\r\nHave you had time to look into the PR? I'd be happy to answer any questions you might have about my code, as it's a rather atypical model!", "Hi @amyeroberts ,\r\nThanks for the comprehensive review!\r\nI've answered most of your comments, but there are still a few I've asked questions/clarifications about! \r\n", "Hi @amyeroberts and @sgugger!\r\n\r\nMany thanks for the additional review (and thanks @sanchit-gandhi for your insights)!\r\nI've addressed most of your comments, especially those requiring more consistency with transformers regarding naming the `generate_xxx`. I still have a few comments to resolve, I'll wait for your returns on that!\r\n\r\n", "Hi @amyeroberts, \r\nthere was a time-out when executing the python snippet of the `generate` docstrings.\r\nI took advantage of this to [add the ability to specify sub-model specific parameters](https://github.com/huggingface/transformers/pull/24086/commits/97cdc38e66b600d3bc1d82c56099acf3cdc6a0f8) in `BarkModel.generate`. \r\n\r\nTo give a concrete example, you can specify now how many `max_new_tokens` you want for the `semantic` part of the model:\r\n```audio_array = model.generate(**inputs, semantic_max_new_tokens=100)```\r\n\r\nNow that it is done, there are still a few comments to resolve, so I look forward to hearing from you!", "Hey @amyeroberts,\r\nI've addressed your last remarks! Does that work with you?\r\nMany thanks!", "@ylacombe LGTM! I think we're good to merge 👍 " ]
1,686
1,691
1,689
COLLABORATOR
null
This PR aims at integrating Bark, a TTS model, into `transformers`. `Bark` was designed and trained by the [Suno-AI team](https://github.com/suno-ai/bark) and is made of 4 main components:
- A `semantic model` (also named `text model`), i.e. a causal autoregressive transformer (GPT2-like), which takes a tokenized text as input.
- A `coarse acoustics model` (also named `coarse model`), also a causal autoregressive transformer, which takes as input the results of the previous model. It aims at regressing the first two audio codebooks needed by `encodec`.
- A `fine acoustics model` (`fine model`), this time a non-causal autoencoder transformer, which iteratively predicts the last 6 codebooks based on the sum of the previous codebook embeddings.
- Having predicted all 8 codebook channels of `encodec`, Bark uses `encodec` to generate the output audio array.

Note that each of the first 3 modules can take optional conditional speaker embeddings aiming at conditioning the output audio according to specific preset voices.
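For readers skimming the PR, a rough, hypothetical sketch of the generation cascade described above is given below; the attribute and method names are illustrative only and do not represent the final `transformers` API:

```python
# Hypothetical pseudocode for the four-stage pipeline described above.
# None of these attribute/method names are guaranteed to match the merged implementation.
def generate_speech(bark, input_ids, speaker_embeddings=None):
    # 1. causal "semantic" transformer over the tokenized text
    semantic_ids = bark.semantic.generate(input_ids, speaker_embeddings)
    # 2. causal "coarse acoustics" transformer regresses the first two EnCodec codebooks
    coarse_codes = bark.coarse.generate(semantic_ids, speaker_embeddings)
    # 3. non-causal "fine acoustics" model iteratively fills in the remaining six codebooks
    fine_codes = bark.fine.generate(coarse_codes, speaker_embeddings)
    # 4. EnCodec decodes the eight codebook channels into the output audio array
    return bark.encodec.decode(fine_codes)
```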
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24086/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24086", "html_url": "https://github.com/huggingface/transformers/pull/24086", "diff_url": "https://github.com/huggingface/transformers/pull/24086.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24086.patch", "merged_at": 1689612805000 }
https://api.github.com/repos/huggingface/transformers/issues/24085
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24085/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24085/comments
https://api.github.com/repos/huggingface/transformers/issues/24085/events
https://github.com/huggingface/transformers/pull/24085
1,746,220,951
PR_kwDOCUB6oc5SblmC
24,085
add VITS model
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Notes about the tokenizer:\r\n\r\n1. This is not the VITS tokenizer but the one for MMS-TTS.\r\n2. The vocab doesn't have padding (or unknown) tokens in it, but uses token_id 0 for this. That breaks on the HF tokenizers because it will split the input text on the padding token, so if I set `pad_token_id = 0` then the letters that token_id 0 corresponds to will disappear from the text.\r\n3. To fix this issue, I'm adding `<pad>` and `<unk>` to the vocab, but then in the model we set such token_ids to 0 before feeding the input into the first layer. It's a bit hacky. Ideas for a nicer solution are appreciated.\r\n4. The tokenizer also inserts an additional token_id 0 in between every token. No idea why but that's how it works.\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24085). All of your documentation changes will be reflected on that endpoint.", "This is ready for a first review yet.\r\n\r\nTwo checkpoints are currently available:\r\n\r\n* https://huggingface.co/Matthijs/mms-tts-eng\r\n* https://huggingface.co/Matthijs/mms-tts-nld\r\n\r\nSmall usage example:\r\n\r\n```\r\nfrom transformers import VitsMmsTokenizer, VitsModel\r\nimport torch\r\n\r\ntokenizer = VitsMmsTokenizer.from_pretrained(\"Matthijs/mms-tts-eng\")\r\nmodel = VitsModel.from_pretrained(\"Matthijs/mms-tts-eng\")\r\n \r\ninputs = tokenizer(text=\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n\r\noutputs = model(inputs[\"input_ids\"])\r\nspeech = outputs.audio\r\n```\r\n\r\nThe current model is the MMS-TTS version, not the original VITS version. The conversion scripts can handle both, but for original VITS support the tokenizer is still missing.\r\n\r\nStill needs to be done: \r\n\r\n* tests\r\n* tokenizer for actual VITS \r\n\r\n@Vaibhavs10 For this review, in particular could you verify the names of the layers in the flow layers etc make sense? Thanks!\r\n", "Some of the MMS-TTS checkpoints require the use of the tool `uromanize` from https://github.com/isi-nlp/uroman to convert the input script into the Latin alphabet. Since this is a separate Perl script, it is not included in Transformers and the user will have to run `uromanize.pl` themselves before using the tokenizer.", "> I'm not too sure why I'm asked for a review here as all comments from @sanchit-gandhi are being ignored. \r\n\r\nNo they aren't?! I've integrated most of his suggestions and replied with counterarguments otherwise.\r\n", "Tokenizer can now handle both the original VITS models (which require phonemization) and the MMS-TTS models.", "Hey @sgugger / @amyeroberts - this one is ready for a review! We've got one open discussion around variable namings: https://github.com/huggingface/transformers/pull/24085#discussion_r1243884355\r\n\r\nBut otherwise the comments have been resolved and the code cleaned-up. Please address any comments / suggestions to myself, as I'll be taking over this PR for the rest of the integration", "Would be really great to get your review here @amyeroberts! We're aiming to have this model feature as part of the next Unit of the audio transformers course 🤗 https://github.com/huggingface/audio-transformers-course/pull/61", "This is ready for a second look @amyeroberts", "It would be awesome to get a second look here @amyeroberts before you go on leave!", "I just installed transformers from this branch, but I'm having some issues both with the provided examples and with the course code. 
Here is a minimal reproduction https://colab.research.google.com/drive/1nyCvTpAhS89_LgY2JdxSeCSBCbhzMWC3?usp=sharing\r\n\r\n1. With example from https://hf.co/learn/audio-course/chapter6/pre-trained_models#massive-multilingual-speech-mms\r\n\r\n```\r\nfrom transformers import VitsModel, VitsTokenizer\r\nimport torch\r\n\r\nmodel = VitsModel.from_pretrained(\"Matthijs/mms-tts-deu\")\r\ntokenizer = VitsTokenizer.from_pretrained(\"Matthijs/mms-tts-deu\")\r\n\r\ntext_example = (\r\n \"Ich bin Schnappi das kleine Krokodil, komm aus Ägypten das liegt direkt am Nil.\"\r\n)\r\n\r\ninputs = tokenizer(text_example, return_tensors=\"pt\")\r\ninput_ids = inputs[\"input_ids\"]\r\n\r\nwith torch.no_grad():\r\n outputs = model(input_ids)\r\n\r\nspeech = outputs.audio[0]\r\n```\r\n\r\n```py\r\nreturn torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n```\r\n\r\nfails with index out of range\r\n\r\n2. With example from docs\r\n\r\n```python\r\nfrom transformers import VitsTokenizer\r\n\r\ntokenizer = VitsTokenizer.from_pretrained(\"sanchit-gandhi/mms-tts-eng\")\r\ninputs = tokenizer(text=\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n```\r\n\r\n```\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n```", "Hey @osanseviero - for the example in the course, did you pip install from the specific commit ID listed in the [course instructions](https://huggingface.co/learn/audio-course/chapter6/pre-trained_models#massive-multilingual-speech-mms)? The structure of the weights have changed, so the latest code isn't compatible with the weights pushed under the repo id `\"Matthijs/mms-tts-deu\"`. So to use the latest commit, we need to use an updated version of the weights. The API is also a bit different, since it's still a WIP PR. For example, the `.audio` return field has now been replaced by `.waveform`. I think the best thing would be to wait until this PR gets its final reviews and is merged before committing to example use cases! Hopefully it's not long now!\r\n\r\nThanks for highlighting the tokenizer issue - will take a look at why that's failing! That's indeed a bug that needs to be fixed before merge (shouldn't block the next review though!)", "All points addressed so ready for a final review @amyeroberts. Thanks for your in-depth reviews here - the PR looks in pretty good shape!", "Hey @amyeroberts - to have full compatibility with the text-to-audio pipeline class, we need to indicate the `sampling_rate` of the predicted audio waveforms in the model config:\r\nhttps://github.com/huggingface/transformers/blob/2be8a9098e06262bdd5c16b5e8a70f145df88e96/src/transformers/pipelines/text_to_audio.py#L82\r\n\r\nThe `sampling_rate` corresponds to the sampling rate of the target audio that the model was trained on. It is not possible to determine in any way other than from the value in the original config of the model. MMS TTS models use a sampling rate of 16kHz, VITS TTS models use a sampling rate of 22kHz, but otherwise their configs are the same. The user needs to have an idea of the sampling rate that the model generates in order to know what rate to playback the audio, otherwise this leaves them prone to silent errors. 
IMO adding it as an attribute of the main model class should suffice here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/ff3b08c3b2b5b33651f30356e634a5efca1c5f2a/src/transformers/models/vits/modeling_vits.py#L1374\r\n\r\nNote that we cannot just add the `sampling_rate` to the config and not the modelling file, this is not allowed by the CI: \r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/71815/workflows/c0[…]cb-a064-7c1019e03630/jobs/904373/parallel-runs/0/steps/0-116\r\n\r\ncc @ylacombe ", "As discussed offline with @amyeroberts, we'll add it as an allowed attribute in the config checker: https://github.com/huggingface/transformers/pull/24085/commits/8b01633bccd298d3f9ff8f628b75336202fc53c4", "Hi @hollance Thank you for adding this model into `transformers` 🤗 .\r\n\r\nThere is a test failing \r\n```python\r\npython3 -m pytest -v tests/models/vits/test_modeling_vits.py::VitsModelTest::test_initialization\r\n```\r\nI skip it on the `main` branch. Would you like to help us investigate this test if you have some bandwidth? Otherwise we can take this on our side too.\r\n\r\nIf you decide to take a look, you have to remove the following line\r\nhttps://github.com/huggingface/transformers/blob/ab8cba824e3887d90cb9f4d5866fde9243f2c9fe/tests/models/vits/test_modeling_vits.py#L172\r\nso the test will be collected and run by `pytest`.\r\n\r\nLet me know :-) Thank you!" ]
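Following up on the sampling-rate discussion in the review thread above: whichever attribute name ends up on the config, downstream code needs that value instead of hard-coding 16 kHz (MMS-TTS) or 22 kHz (VITS). A minimal, self-contained sketch of the consumer side is given below; the waveform is a dummy array and the rate is hard-coded purely for illustration, since in real use it would be read from the model config.

```python
# Illustrative only: placeholder waveform; the rate would normally come from the
# model config (e.g. an attribute along the lines of model.config.sampling_rate).
import numpy as np
import scipy.io.wavfile

sampling_rate = 16_000                                # MMS-TTS checkpoints target 16 kHz
waveform = np.zeros(sampling_rate, dtype=np.float32)  # one second of silence as a stand-in
scipy.io.wavfile.write("tts_output.wav", sampling_rate, waveform)
```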
1,686
1,693
1,693
CONTRIBUTOR
null
# What does this PR do? Adds the VITS model for text-to-speech, in particular to support the MMS-TTS checkpoints (which use the same model architecture but a different tokenizer). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24085/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24085/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24085", "html_url": "https://github.com/huggingface/transformers/pull/24085", "diff_url": "https://github.com/huggingface/transformers/pull/24085.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24085.patch", "merged_at": 1693561807000 }
https://api.github.com/repos/huggingface/transformers/issues/24084
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24084/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24084/comments
https://api.github.com/repos/huggingface/transformers/issues/24084/events
https://github.com/huggingface/transformers/pull/24084
1,746,207,665
PR_kwDOCUB6oc5Sbiue
24,084
Update delete_doc_comment_trigger.yml
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24084). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
CONTRIBUTOR
null
fix base workflow name, follow up to https://github.com/huggingface/transformers/pull/24079
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24084/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24084", "html_url": "https://github.com/huggingface/transformers/pull/24084", "diff_url": "https://github.com/huggingface/transformers/pull/24084.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24084.patch", "merged_at": 1686153348000 }
https://api.github.com/repos/huggingface/transformers/issues/24083
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24083/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24083/comments
https://api.github.com/repos/huggingface/transformers/issues/24083/events
https://github.com/huggingface/transformers/pull/24083
1,746,183,961
PR_kwDOCUB6oc5Sbdoi
24,083
Up pinned accelerate version
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Increases the pinned accelerate version, and also lets the `is_accelerate_available` check look for a specific version, since now we care much more about whether `PartialState` is available. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24083/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24083", "html_url": "https://github.com/huggingface/transformers/pull/24083", "diff_url": "https://github.com/huggingface/transformers/pull/24083.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24083.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24082
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24082/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24082/comments
https://api.github.com/repos/huggingface/transformers/issues/24082/events
https://github.com/huggingface/transformers/pull/24082
1,746,183,434
PR_kwDOCUB6oc5Sbdg8
24,082
testing doc build actions
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "it worked !" ]
1,686
1,700
1,686
CONTRIBUTOR
null
testing #24079
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24082/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24082", "html_url": "https://github.com/huggingface/transformers/pull/24082", "diff_url": "https://github.com/huggingface/transformers/pull/24082.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24082.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24081
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24081/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24081/comments
https://api.github.com/repos/huggingface/transformers/issues/24081/events
https://github.com/huggingface/transformers/issues/24081
1,746,161,330
I_kwDOCUB6oc5oFE6y
24,081
Error when fine-tuning RWKV using HuggingFace Trainer: 2 positional arguments but 3 were given
{ "login": "breadbrowser", "id": 64813014, "node_id": "MDQ6VXNlcjY0ODEzMDE0", "avatar_url": "https://avatars.githubusercontent.com/u/64813014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/breadbrowser", "html_url": "https://github.com/breadbrowser", "followers_url": "https://api.github.com/users/breadbrowser/followers", "following_url": "https://api.github.com/users/breadbrowser/following{/other_user}", "gists_url": "https://api.github.com/users/breadbrowser/gists{/gist_id}", "starred_url": "https://api.github.com/users/breadbrowser/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/breadbrowser/subscriptions", "organizations_url": "https://api.github.com/users/breadbrowser/orgs", "repos_url": "https://api.github.com/users/breadbrowser/repos", "events_url": "https://api.github.com/users/breadbrowser/events{/privacy}", "received_events_url": "https://api.github.com/users/breadbrowser/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada @ArthurZucker ", "I think this has been fixed in https://github.com/huggingface/transformers/pull/23774 \r\nCan you try to install transformers with:\r\n```bash\r\npip install git+https://github.com/huggingface/transformers.git\r\n```", "> I think this has been fixed in #23774 Can you try to install transformers with:\r\n> \r\n> ```shell\r\n> pip install git+https://github.com/huggingface/transformers.git\r\n> ```\r\n\r\nthanks, it works now" ]
1,686
1,686
1,686
NONE
null
### System Info I am using Kaggle with 2 T4 GPUs. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. run my code 2. it just happens ### Expected behavior This is my code: https://www.kaggle.com/code/lostgoldplayer/fine-tuning Using the Hugging Face Trainer, I am trying to fine-tune "RWKV/rwkv-4-430m-pile" using my dataset "breadlicker45/musenet-encoders-12k".
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24081/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24080
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24080/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24080/comments
https://api.github.com/repos/huggingface/transformers/issues/24080/events
https://github.com/huggingface/transformers/pull/24080
1,746,156,599
PR_kwDOCUB6oc5SbXro
24,080
Byebye pytorch 1.9
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? PyTorch 1.9 was released on 2021/06/15. It's sad, but 2 years is long enough :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24080/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24080", "html_url": "https://github.com/huggingface/transformers/pull/24080", "diff_url": "https://github.com/huggingface/transformers/pull/24080.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24080.patch", "merged_at": 1686926304000 }
https://api.github.com/repos/huggingface/transformers/issues/24079
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24079/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24079/comments
https://api.github.com/repos/huggingface/transformers/issues/24079/events
https://github.com/huggingface/transformers/pull/24079
1,746,155,924
PR_kwDOCUB6oc5SbXht
24,079
[doc build] Use secrets
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24079). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
CONTRIBUTOR
null
Companion PR to https://github.com/huggingface/doc-builder/pull/379
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24079/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24079", "html_url": "https://github.com/huggingface/transformers/pull/24079", "diff_url": "https://github.com/huggingface/transformers/pull/24079.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24079.patch", "merged_at": 1686152020000 }
https://api.github.com/repos/huggingface/transformers/issues/24078
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24078/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24078/comments
https://api.github.com/repos/huggingface/transformers/issues/24078/events
https://github.com/huggingface/transformers/pull/24078
1,746,043,959
PR_kwDOCUB6oc5Sa_fi
24,078
Pop
{ "login": "jamesthesnake", "id": 8227820, "node_id": "MDQ6VXNlcjgyMjc4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesthesnake", "html_url": "https://github.com/jamesthesnake", "followers_url": "https://api.github.com/users/jamesthesnake/followers", "following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}", "gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions", "organizations_url": "https://api.github.com/users/jamesthesnake/orgs", "repos_url": "https://api.github.com/users/jamesthesnake/repos", "events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesthesnake/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,686
1,686
1,686
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24078/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24078", "html_url": "https://github.com/huggingface/transformers/pull/24078", "diff_url": "https://github.com/huggingface/transformers/pull/24078.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24078.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24077
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24077/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24077/comments
https://api.github.com/repos/huggingface/transformers/issues/24077/events
https://github.com/huggingface/transformers/pull/24077
1,745,957,500
PR_kwDOCUB6oc5Saspo
24,077
Fix expected value in tests of the test fetcher
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24077). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? #24051 broke a test in the test suite of the test fetcher. The PR did not run the CI because the modification was detected as a docstring modification. This is due to a """ in the middle of the file that this PR also fixes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24077/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24077", "html_url": "https://github.com/huggingface/transformers/pull/24077", "diff_url": "https://github.com/huggingface/transformers/pull/24077.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24077.patch", "merged_at": 1686152337000 }
https://api.github.com/repos/huggingface/transformers/issues/24076
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24076/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24076/comments
https://api.github.com/repos/huggingface/transformers/issues/24076/events
https://github.com/huggingface/transformers/pull/24076
1,745,918,089
PR_kwDOCUB6oc5SakIB
24,076
Be nice to TF
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "LGTM! If the torch value is `3` we could probably reduce the TF value even lower, but let's try this first.", "`7` avoids the issue, but still 96~98% memory. Change it to `6` and will merge.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24076). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? ... and to @Rocketknight1 To be serious: to avoid the OOM issue introduced in #23234. Note `torch_job` uses `pytest_num_workers=3`. See [this comment](https://github.com/huggingface/transformers/pull/24071#issuecomment-1580778679).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24076/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24076", "html_url": "https://github.com/huggingface/transformers/pull/24076", "diff_url": "https://github.com/huggingface/transformers/pull/24076.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24076.patch", "merged_at": 1686147493000 }
https://api.github.com/repos/huggingface/transformers/issues/24075
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24075/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24075/comments
https://api.github.com/repos/huggingface/transformers/issues/24075/events
https://github.com/huggingface/transformers/pull/24075
1,745,885,875
PR_kwDOCUB6oc5Sac4t
24,075
[Not to merge before 2023/06/28] Time to say goodbye to py37 😭
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "test failures will be addressed soon by related authors. Not related to this PR." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Byebye! EOL of python 3.7 is `2023/06/27`. https://endoflife.date/python
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24075/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24075/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24075", "html_url": "https://github.com/huggingface/transformers/pull/24075", "diff_url": "https://github.com/huggingface/transformers/pull/24075.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24075.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24074
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24074/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24074/comments
https://api.github.com/repos/huggingface/transformers/issues/24074/events
https://github.com/huggingface/transformers/pull/24074
1,745,821,268
PR_kwDOCUB6oc5SaOtc
24,074
[`Hub`] Add `safe_serialization` in push_to_hub
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? This PR adds the possibility to directly push safetensors weights to the Hub, as the `save_pretrained` method was called with default args and it is currently not possible to pass kwargs to use `safe_serialization=True`. cc @sgugger @Narsil Related: https://github.com/huggingface/peft/pull/553 also cc @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24074/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24074", "html_url": "https://github.com/huggingface/transformers/pull/24074", "diff_url": "https://github.com/huggingface/transformers/pull/24074.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24074.patch", "merged_at": 1686143254000 }
https://api.github.com/repos/huggingface/transformers/issues/24073
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24073/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24073/comments
https://api.github.com/repos/huggingface/transformers/issues/24073/events
https://github.com/huggingface/transformers/pull/24073
1,745,720,414
PR_kwDOCUB6oc5SZ4io
24,073
Support PEFT models when saving the model using trainer
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "How do we make sure this is the behavior when checkpointing the model during training?\r\n\r\nThanks!", "hi @zachschillaci27 ,\r\nyou can check the folders that are produced by trainer during training and make sure they contain `adapter_model.safetensors` or `adapter_model.bin` files", "> hi @zachschillaci27 , you can check the folders that are produced by trainer during training and make sure they contain `adapter_model.safetensors` or `adapter_model.bin` files\r\n\r\nThanks for the fast reply!" ]
1,686
1,702
1,686
CONTRIBUTOR
null
# What does this PR do? Currently, if one calls `save_model` using the Trainer or `push_to_hub` with a PEFT model, it will push the base model instead of the adapters. To reproduce (after `pip install trl peft transformers`): ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig dataset = load_dataset("imdb", split="train") peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( "EleutherAI/gpt-neo-125m", train_dataset=dataset, dataset_text_field="text", peft_config=peft_config ) trainer.save_model("test-sft") ``` cc @pacman100 @sgugger @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24073/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24073/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24073", "html_url": "https://github.com/huggingface/transformers/pull/24073", "diff_url": "https://github.com/huggingface/transformers/pull/24073.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24073.patch", "merged_at": 1686141055000 }
https://api.github.com/repos/huggingface/transformers/issues/24072
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24072/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24072/comments
https://api.github.com/repos/huggingface/transformers/issues/24072/events
https://github.com/huggingface/transformers/issues/24072
1,745,605,167
I_kwDOCUB6oc5oC9Iv
24,072
Assistant Model With Falcon Fails
{ "login": "yrapop01", "id": 8748211, "node_id": "MDQ6VXNlcjg3NDgyMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8748211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yrapop01", "html_url": "https://github.com/yrapop01", "followers_url": "https://api.github.com/users/yrapop01/followers", "following_url": "https://api.github.com/users/yrapop01/following{/other_user}", "gists_url": "https://api.github.com/users/yrapop01/gists{/gist_id}", "starred_url": "https://api.github.com/users/yrapop01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yrapop01/subscriptions", "organizations_url": "https://api.github.com/users/yrapop01/orgs", "repos_url": "https://api.github.com/users/yrapop01/repos", "events_url": "https://api.github.com/users/yrapop01/events{/privacy}", "received_events_url": "https://api.github.com/users/yrapop01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey thanks for reporting! Will have a look asap! Though the model is on the hub we should try to make this run smoothly! ", "Hey @yrapop01 👋 \r\n\r\nI've had a look at assisted generation, and here's what I found:\r\n1. The immediate error you see can be fixed -- Falcon has a custom cache structure, so it needs custom code for cache slicing. Easy peasy.\r\n2. We then hit a more complex wall -- the modeling code does not handle the case where there is a cache and the input ids' length is larger than 1 (a bit of a special case that is needed for the assistant [here](https://github.com/huggingface/transformers/blob/0675600a60b260d6bdb9c8ad91d932d690672bf0/src/transformers/generation/utils.py#L4268)).\r\n\r\nThis means it requires modelling changes, but the model is not yet in `transformers`. I'm going to discuss internally, and let you know of our next steps :)", "@yrapop01 we are adding Falcon to `transformers` (as opposed to hub-loaded model code), I'll make sure assisted generation works in the transformers version!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Do you have ETA for adding Falcon to transformers?", "@yrapop01 You can follow the PR here: #24523 ", "Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,691
1,691
NONE
null
### System Info transformers version: 4.30.0.dev0 (and also 4.29.2) python version: 3.9 platform: sagemaker notebook on aws running on g5.12xlarge ### Who can help? @ArthurZucker @younesbelkada @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code: ``` import transformers import torch bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = "tiiuae/falcon-40b-instruct" tokenizer = transformers.AutoTokenizer.from_pretrained(model) model = transformers.AutoModelForCausalLM.from_pretrained(model, quantization_config=bnb_config, device_map="auto", trust_remote_code=True) assistant = "tiiuae/falcon-7b-instruct" assistant = transformers.AutoModelForCausalLM.from_pretrained(assistant, quantization_config=bnb_config, device_map="auto", trust_remote_code=True) text = "Girafatron is" inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False) inputs = {k: v.cuda() for k, v in inputs.items()} model.eval() with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=16, assistant_model=assistant, do_sample=False) ``` Exception: ``` IndexError Traceback (most recent call last) /tmp/ipykernel_38536/3021020828.py in <cell line: 7>() 6 model.eval() 7 with torch.no_grad(): ----> 8 outputs = model.generate(**inputs, 9 max_new_tokens=16, 10 assistant_model=assistant, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) 116 117 return decorate_context ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1490 1491 # 12. run assisted generate -> 1492 return self.assisted_decoding( 1493 input_ids, 1494 assistant_model=assistant_model, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in assisted_decoding(self, input_ids, assistant_model, do_sample, logits_processor, logits_warper, stopping_criteria, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 4380 # 5.3. Discard past key values relative to unused assistant tokens 4381 new_cache_size = new_cur_len - 1 -> 4382 outputs.past_key_values = _crop_past_key_values(self, outputs.past_key_values, new_cache_size) 4383 model_kwargs["assistant_past_key_values"] = _crop_past_key_values( 4384 assistant_model, model_kwargs["assistant_past_key_values"], new_cache_size - 1 ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/generation/utils.py in _crop_past_key_values(model, past_key_values, maximum_length) 4522 new_past.append( 4523 ( -> 4524 past_key_values[idx][0][:, :, :maximum_length, :], 4525 past_key_values[idx][1][:, :, :maximum_length, :], 4526 ) IndexError: too many indices for tensor of dimension 3``` ### Expected behavior expected to run without errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24072/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24071
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24071/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24071/comments
https://api.github.com/repos/huggingface/transformers/issues/24071/events
https://github.com/huggingface/transformers/pull/24071
1,745,569,876
PR_kwDOCUB6oc5SZXi2
24,071
Make the TF dummies even smaller
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24071). All of your documentation changes will be reflected on that endpoint.", "Adding @frostming's fix for Keras 2.13 to this PR as well", "Tests are passing now, pinging @ydshieh and @amyeroberts for quick review! Unfortunately it's quite hard for me to test if this PR will fix the entire memory issue in the overnight CI, but we'll try this fix and if the memory issues remain then I'll try some other things too.", "I can trigger a run.", "It's more like when several processes are run in parallel: on circleci, 8 pytest processes.", "I also have the same question on why the memory usage increases that much. Previously, we don't really use batch size 1 in dummy if I remember correctly.", "A run is triggered [here](https://app.circleci.com/pipelines/github/huggingface/transformers/65919/workflows/b0c80ba6-3278-47a0-87d9-228c402b35e9/jobs/819654).\r\n\r\nIf you need more changes and more runs to check, you can update the branch \r\n\r\nhttps://github.com/huggingface/transformers/tree/run_even_lower_tf_dummy_memory\r\n\r\non top of this PR branch.", "@amyeroberts I don't have a very good intuition for this, actually. I think it's some combination of:\r\n\r\n- The test runners were already at 90%+ memory usage before all of these PRs and tests are run in parallel as @ydshieh said, which means small perturbations could push them over the limit.\r\n- The update changed the shapes of dummies a bit - they should be smaller on average, especially after this PR, but maybe they ended up a little larger for some high-memory models and that caused the issues.\r\n\r\nIt's also possible that the update sped up building by removing unnecessary build ops left over from TF 1 and not unneccessarily passing dummies when models were already built. Speeding up builds might cause tests to be in the actual model calls more of the time, and if peak model usage occurs during the actual model calls and we have lots of tests running in parallel then more tests being in the calls simultaneously might result in higher peak memory usage for the test server.\r\n\r\nThis is all purely speculative on my part, though - I can't reproduce the problem locally and the nature of the parallel tests makes it hard to point to a single culprit for an OOM error!", "@ydshieh the new run seems to be passing - there's an unrelated issue with one of the vit-mae tests that I can't reproduce locally and that doesn't seem related, but I think this PR resolves most of the problems!", "@Rocketknight1 Unfortunately, it doesn't pass. We have to go to the `Resource` tab, and see the memory usage.\r\n\r\n<img width=\"1017\" alt=\"Screenshot 2023-06-07 145417\" src=\"https://github.com/huggingface/transformers/assets/2521628/59ad6a00-f07e-44e8-8fce-dd4164b7e3f6\">\r\n\r\nAnd if you click [Download the full output as a file](https://circleci.com/api/v1.1/project/github/huggingface/transformers/819654/output/111/0?file=true&allocation-id=64806dd5d4d5c9764f27e205-0-build%2F60721B2E), you will see `worker gw7 crashed and worker restarting disabled`.\r\n\r\n😢 😭 \r\n", "Well, to be more sure, I can revert the PR #23234 on **another branch**, so would be `main` without that PR, and run the test. The goal is to make sure no other PRs contribute to the OOM issue.\r\n\r\nDo you want me to do this?", "No, I'm pretty confident that the change to the dummies is the cause!", "@ydshieh Can we reduce the number of parallel workers by 1 for these tests? 
I think the speed boost from these PRs (plus some other ones I have planned) should compensate for any slowdown we experience, and it would be good to be able to make small changes without breaking fragile parallel tests like these", "Let me open a PR for that :-)", "(rebasing onto @ydshieh's PR to test everything in combination)" ]
1,686
1,686
1,686
MEMBER
null
cc @ydshieh - this will probably break some things, but if I can make it work it should reduce the memory usage during building a lot
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24071/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24071", "html_url": "https://github.com/huggingface/transformers/pull/24071", "diff_url": "https://github.com/huggingface/transformers/pull/24071.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24071.patch", "merged_at": 1686151387000 }
https://api.github.com/repos/huggingface/transformers/issues/24070
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24070/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24070/comments
https://api.github.com/repos/huggingface/transformers/issues/24070/events
https://github.com/huggingface/transformers/issues/24070
1,745,549,227
I_kwDOCUB6oc5oCver
24,070
no default_to_square and max_size passed [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
{ "login": "cqray1990", "id": 32585434, "node_id": "MDQ6VXNlcjMyNTg1NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/32585434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cqray1990", "html_url": "https://github.com/cqray1990", "followers_url": "https://api.github.com/users/cqray1990/followers", "following_url": "https://api.github.com/users/cqray1990/following{/other_user}", "gists_url": "https://api.github.com/users/cqray1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/cqray1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cqray1990/subscriptions", "organizations_url": "https://api.github.com/users/cqray1990/orgs", "repos_url": "https://api.github.com/users/cqray1990/repos", "events_url": "https://api.github.com/users/cqray1990/events{/privacy}", "received_events_url": "https://api.github.com/users/cqray1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @cqray1990, thanks for raising this issue.\r\n\r\nSo that we can be help, could you explain a bit more about the expected behaviour and what you're trying to do with the image processor? Do you have a checkpoint on the hub you could share? \r\n\r\nTo change the resizing behaviour of the image processor, you can either modify the `size` parameter in the config file e.g.: \r\n\r\n```json\r\n{\r\n\"do_normalize\": true,\r\n\"do_resize\": true,\r\n\"image_processor_type\": \"ViTImageProcessor\",\r\n\"image_mean\": [\r\n0.5,\r\n0.5,\r\n0.5\r\n],\r\n\"image_std\": [\r\n0.5,\r\n0.5,\r\n0.5\r\n],\r\n\"resample\": 2,\r\n\"size\": {\"height\": 384, \"width\": 384},\r\n}\r\n```\r\nnote: the feature extractors for vision models have been deprecated in place of image processors. \r\n\r\nPass it into the image processor when instantiating:\r\n```\r\n# Override size settings from a pretrained checkpoint\r\nimage_processor = ViTImageProcessor.from_pretrained(checkpoint, size={\"height\": 384, \"width\": 384})\r\n\r\n# Create a new image processor, override the default size parameter\r\nimage_processor = ViTImageProcessor(size={\"height\": 384, \"width\": 384})\r\n```\r\n\r\nOr keep the default behaviour and modify just when processing\r\n```\r\nimage_processor = ViTImageProcessor()\r\n\r\n# default behaviour - images are resized to 224x224\r\npixel_values = image_processor(image).pixel_values\r\n\r\n# Override default - images resized to 384x384\r\npixel_values = image_processor(image, size={\"height\": 384, \"width\": 384}).pixel_values\r\n```\r\n\r\n\r\n\r\n", "> Hi @cqray1990, thanks for raising this issue.\r\n> \r\n> So that we can be help, could you explain a bit more about the expected behaviour and what you're trying to do with the image processor? Do you have a checkpoint on the hub you could share?\r\n> \r\n> To change the resizing behaviour of the image processor, you can either modify the `size` parameter in the config file e.g.:\r\n> \r\n> ```json\r\n> {\r\n> \"do_normalize\": true,\r\n> \"do_resize\": true,\r\n> \"image_processor_type\": \"ViTImageProcessor\",\r\n> \"image_mean\": [\r\n> 0.5,\r\n> 0.5,\r\n> 0.5\r\n> ],\r\n> \"image_std\": [\r\n> 0.5,\r\n> 0.5,\r\n> 0.5\r\n> ],\r\n> \"resample\": 2,\r\n> \"size\": {\"height\": 384, \"width\": 384},\r\n> }\r\n> ```\r\n> \r\n> note: the feature extractors for vision models have been deprecated in place of image processors.\r\n> \r\n> Pass it into the image processor when instantiating:\r\n> \r\n> ```\r\n> # Override size settings from a pretrained checkpoint\r\n> image_processor = ViTImageProcessor.from_pretrained(checkpoint, size={\"height\": 384, \"width\": 384})\r\n> \r\n> # Create a new image processor, override the default size parameter\r\n> image_processor = ViTImageProcessor(size={\"height\": 384, \"width\": 384})\r\n> ```\r\n> \r\n> Or keep the default behaviour and modify just when processing\r\n> \r\n> ```\r\n> image_processor = ViTImageProcessor()\r\n> \r\n> # default behaviour - images are resized to 224x224\r\n> pixel_values = image_processor(image).pixel_values\r\n> \r\n> # Override default - images resized to 384x384\r\n> pixel_values = image_processor(image, size={\"height\": 384, \"width\": 384}).pixel_values\r\n> ```\r\n\r\n@amyeroberts \r\nthe ViTImageProcessor ways to preprocess the image is only rdirectly resize to size={\"height\": 384, \"width\": 384},don't inlucde padding, old version 2.24.- have this operation,but the parameters default_to_square and max_size of self.resize( ) function i can't be passes ,cause i 
need padding", "@cqray1990 The image processors are written to be aligned with the model's preprocessing from its paper, so they won't all perform the same operations. \r\n\r\nCould you share the checkpoint being used? ViT's feature extractor / image processor has never padded the images. [This is the class in v4.24](https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/models/vit/feature_extraction_vit.py). I don't believe the values of `default_to_square` have ever been used by these classes if in the config. `default_to_square` controls the behaviour of how the output image size is calculated and is model specific. `max_size` is a deprecated argument and also hasn't ever been used by the vit image processor. \r\n\r\nIf there's a specific set of transformations you wish to perform with the input images, I suggest [looking through the different model image processors](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Asrc%2Ftransformers%2Fmodels%2F**%2Fimage_processing_*.py+ImageProcessor%28BaseImageProcessor%29&type=code), and finding one which suits your needs, or writing your own custom one. If padding is needed, you can search for [image processors that use the `do_pad` flag](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Asrc%2Ftransformers%2Fmodels%2F**%2Fimage_processing_*.py+do_pad&type=code).\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
### System Info ubuntu 20.04 cuda 11.6 cudnn8.8 transformers 4.24.0 ### Who can help? @amyeroberts @sgugger @vanpelt ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction if self.do_resize and self.size is not None: images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] the parameters default_to_square and max_size of self.resize( ) function in feature_extraction_vit can't be passed by preprocessor_config.json file, i don't want to use the default resize ways, how to modify the config file or code. the content of preprocessor_config is as follows: { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 384, "default_to_square": false, "max_size": 384 } ### Expected behavior if self.do_resize and self.size is not None: images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] the parameters default_to_square and max_size of self.resize( ) function in feature_extraction_vit can't be passed by preprocessor_config.json file, i don't want to use the default resize ways, how to modify the config file or code. the content of preprocessor_config is as follows: { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 384, "default_to_square": false, "max_size": 384 }
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24070/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24069
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24069/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24069/comments
https://api.github.com/repos/huggingface/transformers/issues/24069/events
https://github.com/huggingface/transformers/issues/24069
1,745,544,999
I_kwDOCUB6oc5oCucn
24,069
[LEDModel, Longformer] Make_fx compatibility
{ "login": "Giuseppe5", "id": 18719316, "node_id": "MDQ6VXNlcjE4NzE5MzE2", "avatar_url": "https://avatars.githubusercontent.com/u/18719316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Giuseppe5", "html_url": "https://github.com/Giuseppe5", "followers_url": "https://api.github.com/users/Giuseppe5/followers", "following_url": "https://api.github.com/users/Giuseppe5/following{/other_user}", "gists_url": "https://api.github.com/users/Giuseppe5/gists{/gist_id}", "starred_url": "https://api.github.com/users/Giuseppe5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Giuseppe5/subscriptions", "organizations_url": "https://api.github.com/users/Giuseppe5/orgs", "repos_url": "https://api.github.com/users/Giuseppe5/repos", "events_url": "https://api.github.com/users/Giuseppe5/events{/privacy}", "received_events_url": "https://api.github.com/users/Giuseppe5/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Giuseppe5 As with #23907, happy to look at a PR with a fix! \r\n\r\ncc @ArthurZucker @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
### System Info - transformers version: 4.29.2 - Platform: Linux - Python version: 3.8.16 - PyTorch version: 2.0.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Related to #23907 To reproduce: ```python from torch.fx.experimental.proxy_tensor import make_fx from transformers import LEDModel model = LEDModel.from_pretrained('allenai/led-base-16384', torchscript=True) inp = model.dummy_inputs['input_ids'] model.eval() fx_g = make_fx(model)(inp) ``` The presence of `.item()` and `torch.div` cause make_fx to fail for LEDModel and Longformer with the following error: ``` RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar. It may be possible to trace this with dynamic shapes; try setting tracing_mode='symbolic' in your make_fx call. ``` The calls to `item()` could be removed without side effects, and I believe the same is true for replacing `torch.div` with regular python divisions. Even with these adjustments, make_fx seems to fail in both models because of `is_global_attn`: https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/models/led/modeling_led.py#L233 https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/models/longformer/modeling_longformer.py#L601 I am not sure if it would be possible to have a workaround for that. One thing to note is that the presence of `.item()` and `torch.div` also causes graph breaks when using torch dynamo to get the FX representations of these models. It seems that `is_global_attn` is not an issue in that case. ### Expected behavior The full FX representation of LEDModel/Longformer using make_fx
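A toy illustration of the failure mode described above (the functions below are made up for demonstration and are not taken from the modelling code): pulling a Python scalar out of a traced tensor with `.item()` is data-dependent, while plain shape arithmetic traces fine, which is why dropping `.item()`/`torch.div` is proposed.

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def uses_item(x):
    pad = int(x.abs().max().item())  # value depends on the tensor's contents
    return torch.nn.functional.pad(x, (0, pad))

def uses_shape(x):
    pad = x.shape[-1] // 2  # plain Python arithmetic on static shapes
    return torch.nn.functional.pad(x, (0, pad))

inp = torch.randn(2, 8)
graph = make_fx(uses_shape)(inp)   # traces successfully
# make_fx(uses_item)(inp)          # raises the data-dependent tracing error quoted above
```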
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24069/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24068
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24068/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24068/comments
https://api.github.com/repos/huggingface/transformers/issues/24068/events
https://github.com/huggingface/transformers/issues/24068
1,745,494,603
I_kwDOCUB6oc5oCiJL
24,068
Feature request: support more than device keyword arg when calling .to() on BatchEncoding
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge @amyeroberts Hey I am interested on working on this issue. It is my first one so I'll need some guidance. I'll try my best and ping you if i need help!", "@amannagarkar Great! :D Feel free to ask questions on the PR once it's opened! ", "I can help with this too @amyeroberts\r\nWould that be okay? @amannagarkar ", "@Rishab26 I am already working on the issue! I will let you know in a couple of days? I am testing the code for errors and will open a pr shortly! ", "@amannagarkar Sure, no worries. Happy to collaborate together too. I've taken a shot at it 👍", "@NielsRogge I saw in the PR that you said \"instead update multimodal processors in the library that return a BatchEncoding instead of a BatchFeature.\" What do you mean by update the multimodal processors? Also the problem addressed by this issue does not exist (i think, by the discussion in the PR) so maybe we could close it. If not, I'm willing to work on any remaining issue. ", "@Lorenzobattistela you're right in that this issue can be closed, since BatchEncoding is only meant for tokenizers, which always return LongTensors, making the `dtype` irrelevant." ]
1,686
1,693
1,693
CONTRIBUTOR
null
### Feature request I see that the `.to()` [method](https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/tokenization_utils_base.py#L751) of `BatchEncoding` returned by tokenizers only supports the `device` keyword argument. However, the `BatchFeature` returned by image processors/audio feature extractors supports [more keyword arguments](https://github.com/huggingface/transformers/blob/f1660d7e23d4432513fe060bde4f9b7b29f05204/src/transformers/feature_extraction_utils.py#L187), most importantly `dtype`. ### Motivation This is handy as it allows to do: ``` from transformers import ViTImageProcessor import torch from PIL import Image import requests processor = ViTImageProcessor() url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) encoding = processor(image, return_tensors="pt").to(device="cuda", dtype=torch.float16) for k,v in encoding.items(): print(k,v.dtype) ``` which returns ``` pixel_values torch.float16 ``` ### Your contribution I could submit a PR but if anyone has the bandwidth for this, would be great to add it :D
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24068/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24067
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24067/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24067/comments
https://api.github.com/repos/huggingface/transformers/issues/24067/events
https://github.com/huggingface/transformers/pull/24067
1,745,382,614
PR_kwDOCUB6oc5SYufa
24,067
fix executable batch size issue
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? 1. Fixes #24050 2. Context: We weren't properly handling the `auto_find_batch_size=True` case. Here, we need to free all of the stored model references in the Accelerator each time, as shown in https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py 3. This PR does that.
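For reference, the pattern from the linked accelerate example looks roughly like this (sketch only; the starting batch size and the body of the training function are placeholders):

```python
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def train(batch_size):
    # Drop references the Accelerator still holds from a failed attempt
    # before rebuilding the model/optimizer at the smaller batch size.
    accelerator.free_memory()
    ...  # build dataloaders, prepare model/optimizer, run the training loop

train()
```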
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24067/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24067", "html_url": "https://github.com/huggingface/transformers/pull/24067", "diff_url": "https://github.com/huggingface/transformers/pull/24067.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24067.patch", "merged_at": 1686155885000 }
https://api.github.com/repos/huggingface/transformers/issues/24066
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24066/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24066/comments
https://api.github.com/repos/huggingface/transformers/issues/24066/events
https://github.com/huggingface/transformers/pull/24066
1,745,260,122
PR_kwDOCUB6oc5SYUQd
24,066
A minor change to fix a bug when using torch.compile()
{ "login": "DongHande", "id": 45357817, "node_id": "MDQ6VXNlcjQ1MzU3ODE3", "avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DongHande", "html_url": "https://github.com/DongHande", "followers_url": "https://api.github.com/users/DongHande/followers", "following_url": "https://api.github.com/users/DongHande/following{/other_user}", "gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DongHande/subscriptions", "organizations_url": "https://api.github.com/users/DongHande/orgs", "repos_url": "https://api.github.com/users/DongHande/repos", "events_url": "https://api.github.com/users/DongHande/events{/privacy}", "received_events_url": "https://api.github.com/users/DongHande/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,686
1,686
1,686
NONE
null
When using model = torch.compile(model), the class of the new model is changed to OptimizedModule. The function, inspect.signature(self.model.forward), will return ["args", "kwargs"], without "input_ids", "labels", and so on. This results in the dataset removing all columns, so the data sample becomes an empty dict, which causes a bug during forward propagation. By using "self.model._orig_mod.forward", the above problem can be fixed. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> With the model = torch.compile(model) operation, the class of the model changes from the original model class to "torch._dynamo.eval_frame.OptimizedModule", and thus inspect.signature(self.model.forward) returns ["args", "kwargs"] instead of the expected argument names of the model. This results in the data columns being removed, which causes a bug during training. This pull request fixes this bug. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? This is a minor change, and anyone can review it. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
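A small reproduction sketch of the signature mismatch described in this PR; the checkpoint is only illustrative, and the commented outputs reflect the behaviour reported above rather than guaranteed values:

```python
import inspect
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
compiled = torch.compile(model)

print(list(inspect.signature(model.forward).parameters))
# -> ['input_ids', 'attention_mask', ...]  (what Trainer uses to pick dataset columns)

print(list(inspect.signature(compiled.forward).parameters))
# -> generic ['args', 'kwargs']-style parameters, so all dataset columns get dropped

print(list(inspect.signature(compiled._orig_mod.forward).parameters))
# -> the original argument names again, which is what this PR relies on
```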
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24066/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24066", "html_url": "https://github.com/huggingface/transformers/pull/24066", "diff_url": "https://github.com/huggingface/transformers/pull/24066.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24066.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24065
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24065/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24065/comments
https://api.github.com/repos/huggingface/transformers/issues/24065/events
https://github.com/huggingface/transformers/pull/24065
1,745,251,365
PR_kwDOCUB6oc5SYSZJ
24,065
Add CPMBee model
{ "login": "gongbaitao", "id": 45178523, "node_id": "MDQ6VXNlcjQ1MTc4NTIz", "avatar_url": "https://avatars.githubusercontent.com/u/45178523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gongbaitao", "html_url": "https://github.com/gongbaitao", "followers_url": "https://api.github.com/users/gongbaitao/followers", "following_url": "https://api.github.com/users/gongbaitao/following{/other_user}", "gists_url": "https://api.github.com/users/gongbaitao/gists{/gist_id}", "starred_url": "https://api.github.com/users/gongbaitao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gongbaitao/subscriptions", "organizations_url": "https://api.github.com/users/gongbaitao/orgs", "repos_url": "https://api.github.com/users/gongbaitao/repos", "events_url": "https://api.github.com/users/gongbaitao/events{/privacy}", "received_events_url": "https://api.github.com/users/gongbaitao/received_events", "type": "User", "site_admin": false }
[ { "id": 5724035499, "node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub", "name": "Model on the Hub", "color": "9CA0E9", "default": false, "description": "" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24065). All of your documentation changes will be reflected on that endpoint.", "@ArthurZucker @younesbelkada Please kindly have a review: )", "Hey @gongbaitao ! Thanks a lot for opening a PR and contributing to the HF ecosystem! 🤗 \r\nWe have recently been trying to push for `model on the hub` and have as much support as we can there. It will also be easier to integrate it! Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models) if that sound good to you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Adds the [CPM-Bee](https://github.com/OpenBMB/CPM-Bee/tree/main) pytorch model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24065/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24065", "html_url": "https://github.com/huggingface/transformers/pull/24065", "diff_url": "https://github.com/huggingface/transformers/pull/24065.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24065.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24064
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24064/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24064/comments
https://api.github.com/repos/huggingface/transformers/issues/24064/events
https://github.com/huggingface/transformers/pull/24064
1,745,035,473
PR_kwDOCUB6oc5SXjIy
24,064
[WIP] Add VGCN-BERT model
{ "login": "Louis-udm", "id": 25377679, "node_id": "MDQ6VXNlcjI1Mzc3Njc5", "avatar_url": "https://avatars.githubusercontent.com/u/25377679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Louis-udm", "html_url": "https://github.com/Louis-udm", "followers_url": "https://api.github.com/users/Louis-udm/followers", "following_url": "https://api.github.com/users/Louis-udm/following{/other_user}", "gists_url": "https://api.github.com/users/Louis-udm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Louis-udm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Louis-udm/subscriptions", "organizations_url": "https://api.github.com/users/Louis-udm/orgs", "repos_url": "https://api.github.com/users/Louis-udm/repos", "events_url": "https://api.github.com/users/Louis-udm/events{/privacy}", "received_events_url": "https://api.github.com/users/Louis-udm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker @younesbelkada ", "Hey! Thanks a lot for opening this PR 🔥 \r\nWe have been pushing a lot for models to be on the hub, as it is a lot easier to implement ! What do you think about trying [this tutorial out](https://huggingface.co/docs/transformers/custom_models)! ", "> Hey! Thanks a lot for opening this PR 🔥 We have been pushing a lot for models to be on the hub, as it is a lot easier to implement ! What do you think about trying [this tutorial out](https://huggingface.co/docs/transformers/custom_models)!\r\n\r\nThanks for your reply @ArthurZucker . I am trying with the new way and put model with code to [here in hub](https://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased)\r\nbut I found I can not put my `modeling_graph.py` in my upload script with this function `model.push_to_hub`. Also, people need import it using `from transformers.models.vgcn_bert.modeling_graph import WordGraph` in this PR, but do you have a suggestion when I put it in hub?\r\n\r\nAnd, how to put model.safetensors instead of model.bin;\r\nhow to put other files like `README.md` (I create manually in the hub UI), `tokenizer.json` etc.\r\n\r\nmy upload script\r\n```\r\nfrom vgcn_bert.configuration_vgcn_bert import VGCNBertConfig\r\nfrom vgcn_bert.modeling_vgcn_bert import VGCNBertModel, VGCNBertForMaskedLM, VGCNBertForMultipleChoice, VGCNBertForQuestionAnswering, VGCNBertForSequenceClassification, VGCNBertForTokenClassification\r\n\r\nimport transformers as tfr\r\nfrom vgcn_bert.modeling_graph import WordGraph\r\n\r\nVGCNBertConfig.register_for_auto_class()\r\nVGCNBertModel.register_for_auto_class(\"AutoModel\")\r\nVGCNBertForMaskedLM.register_for_auto_class(\"AutoModelForMaskedLM\")\r\nVGCNBertForMultipleChoice.register_for_auto_class(\"AutoModelForMultipleChoice\")\r\nVGCNBertForQuestionAnswering.register_for_auto_class(\"AutoModelForQuestionAnswering\")\r\nVGCNBertForSequenceClassification.register_for_auto_class(\"AutoModelForSequenceClassification\")\r\nVGCNBertForTokenClassification.register_for_auto_class(\"AutoModelForTokenClassification\")\r\n\r\ntokenizer = tfr.AutoTokenizer.from_pretrained(\r\n \"zhibinlu/vgcn-distilbert-base-uncased\"\r\n)\r\n\r\nmodel = VGCNBertModel.from_pretrained(\r\n \"zhibinlu/vgcn-distilbert-base-uncased\",\r\n)\r\n\r\nfrom huggingface_hub import notebook_login\r\n\r\nnotebook_login()\r\nmodel.push_to_hub(\"vgcn-bert-distilbert-base-uncased\")\r\n```", "Oups your question slipped through the cracks, let me answers to the best of my knowledge", "> 1. I found I can not put my modeling_graph.py in my upload script with this function model.push_to_hub. \r\n> 2. Also, people need import it using from transformers.models.vgcn_bert.modeling_graph import WordGraph in this PR, but do you have a suggestion when I put it in hub?\r\n> 3. And, how to put model.safetensors instead of model.bin;\r\n> 4. how to put other files like README.md (I create manually in the hub UI), tokenizer.json etc.\r\n\r\n1. That is expected, if you have a look at this [doc page](https://huggingface.co/docs/hub/models-uploading#using-the-huggingfacehub-client-library), it will help you upload the actual code. `push_to_hub` is not made for this!\r\n2. When you put the code on the hub (using `upload` or equivalent), then you simply need to create a `config.json`. If you want an example, here is [one](https://huggingface.co/tiiuae/falcon-7b/blob/main/config.json). Falcon is hosted on the hub. \r\n3. 
You should be able to save the safetensors weights using `use_safetensors=True` option when pushing to the hub/saving the model.\r\n4. The readme can also be uploaded on the hub like any other files. You can push the tokenizer using `tokenizer.push_to_hub(\"path\")`\r\n\r\nhope this helps you ", "> > 1. I found I can not put my modeling_graph.py in my upload script with this function model.push_to_hub.\r\n> > 2. Also, people need import it using from transformers.models.vgcn_bert.modeling_graph import WordGraph in this PR, but do you have a suggestion when I put it in hub?\r\n> > 3. And, how to put model.safetensors instead of model.bin;\r\n> > 4. how to put other files like README.md (I create manually in the hub UI), tokenizer.json etc.\r\n> \r\n> 1. That is expected, if you have a look at this [doc page](https://huggingface.co/docs/hub/models-uploading#using-the-huggingfacehub-client-library), it will help you upload the actual code. `push_to_hub` is not made for this!\r\n> 2. When you put the code on the hub (using `upload` or equivalent), then you simply need to create a `config.json`. If you want an example, here is [one](https://huggingface.co/tiiuae/falcon-7b/blob/main/config.json). Falcon is hosted on the hub.\r\n> 3. You should be able to save the safetensors weights using `use_safetensors=True` option when pushing to the hub/saving the model.\r\n> 4. The readme can also be uploaded on the hub like any other files. You can push the tokenizer using `tokenizer.push_to_hub(\"path\")`\r\n> \r\n> hope this helps you\r\n\r\n@ArthurZucker Ok, these answers will help me, after getting rid of all the problems, I will cancel this PR.", "The new implement is here:\r\nhttps://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased", "Thanks a lot for sharing this and adding this model! 🔥 " ]
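For the extra `modeling_graph.py` question above, a small sketch using the `huggingface_hub` client; the repo id is taken from the thread, and the local file path is an assumption:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="vgcn_bert/modeling_graph.py",   # local path, adjust as needed
    path_in_repo="modeling_graph.py",
    repo_id="zhibinlu/vgcn-bert-distilbert-base-uncased",
)
```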
1,686
1,687
1,687
NONE
null
# What does this PR do? Adds the VGCN-BERT model from [VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification](https://arxiv.org/abs/2004.05707) paper. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #24038 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [24038](https://github.com/huggingface/transformers/issues/24038) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24064/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24064", "html_url": "https://github.com/huggingface/transformers/pull/24064", "diff_url": "https://github.com/huggingface/transformers/pull/24064.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24064.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24063
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24063/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24063/comments
https://api.github.com/repos/huggingface/transformers/issues/24063/events
https://github.com/huggingface/transformers/issues/24063
1,744,729,328
I_kwDOCUB6oc5n_nTw
24,063
Add option for `trust_remote_code=True` on transformers-cli download
{ "login": "radames", "id": 102277, "node_id": "MDQ6VXNlcjEwMjI3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/102277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/radames", "html_url": "https://github.com/radames", "followers_url": "https://api.github.com/users/radames/followers", "following_url": "https://api.github.com/users/radames/following{/other_user}", "gists_url": "https://api.github.com/users/radames/gists{/gist_id}", "starred_url": "https://api.github.com/users/radames/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radames/subscriptions", "organizations_url": "https://api.github.com/users/radames/orgs", "repos_url": "https://api.github.com/users/radames/repos", "events_url": "https://api.github.com/users/radames/events{/privacy}", "received_events_url": "https://api.github.com/users/radames/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "Sounds like a good addition to me! \r\n\r\ncc @sgugger who's been doing a lot of the work enabling remote code integration. ", "Yes, happy to review a PR!", "thanks @amyeroberts and @sgugger , is there any other argument that worth adding and loading a model?" ]
1,686
1,686
1,686
MEMBER
null
### Feature request Currently it is very convenient to download models using `transformers-cli download`; however, some models need the extra argument `trust_remote_code=True`, for example `transformers-cli download "tiiuae/falcon-40b"` ### Motivation Would it make sense to add `transformers-cli download "tiiuae/falcon-40b" --trust_remote_code`? ### Your contribution PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24063/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24062
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24062/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24062/comments
https://api.github.com/repos/huggingface/transformers/issues/24062/events
https://github.com/huggingface/transformers/pull/24062
1,744,698,490
PR_kwDOCUB6oc5SWaek
24,062
[Wav2Vec2] Fix torch srcipt
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems like `torch.trace(...)` doesn't like property function as it'll always call them. Since we've added the property as a private property in https://github.com/huggingface/transformers/pull/23813, let's just go the simplest way and change it to a function. \r\n\r\nThis PR should fix: \r\n```\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_output_attentions\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_output_hidden_state\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_simple\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_output_attentions\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_output_hidden_state\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\ntests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_torchscript_simple\r\n(line 1180) ValueError: <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'> has no adapter layers. Make sure to define config.adapter_attn_dim.\r\n```\r\n\r\nof the slow tests, such as\r\n```\r\nRUN_SLOW=1 pytest tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torchscript_simple\r\n```\r\n\r\n@sgugger @ydshieh ", "_The documentation is not available anymore as the PR was closed or merged._", "One possible different way is to raise `AttributeError` instead of `ValueError` if we want to keep the propery." ]
1,686
1,686
1,686
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes Wav2Vec2 torch script slow tests ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24062/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24062", "html_url": "https://github.com/huggingface/transformers/pull/24062", "diff_url": "https://github.com/huggingface/transformers/pull/24062.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24062.patch", "merged_at": 1686137227000 }
https://api.github.com/repos/huggingface/transformers/issues/24061
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24061/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24061/comments
https://api.github.com/repos/huggingface/transformers/issues/24061/events
https://github.com/huggingface/transformers/issues/24061
1,744,638,654
I_kwDOCUB6oc5n_RK-
24,061
Convert PyTorch checkpoint for more recent models
{ "login": "d-ataman", "id": 15922264, "node_id": "MDQ6VXNlcjE1OTIyMjY0", "avatar_url": "https://avatars.githubusercontent.com/u/15922264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d-ataman", "html_url": "https://github.com/d-ataman", "followers_url": "https://api.github.com/users/d-ataman/followers", "following_url": "https://api.github.com/users/d-ataman/following{/other_user}", "gists_url": "https://api.github.com/users/d-ataman/gists{/gist_id}", "starred_url": "https://api.github.com/users/d-ataman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d-ataman/subscriptions", "organizations_url": "https://api.github.com/users/d-ataman/orgs", "repos_url": "https://api.github.com/users/d-ataman/repos", "events_url": "https://api.github.com/users/d-ataman/events{/privacy}", "received_events_url": "https://api.github.com/users/d-ataman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For each of the models added to the transformers repo, conversion scripts are created to port the weights from the original weights to the library format. You can find these under [models' respective folders](https://github.com/huggingface/transformers/tree/main/src/transformers/models) e.g. for [longformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/convert_longformer_original_pytorch_lightning_to_pytorch.py), [longt5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/convert_longt5x_checkpoint_to_flax.py), or the [t5 script used for mt5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
### Feature request Is it possible to add more examples of model conversion between PyTorch models, e.g. convert_bart_original_pytorch_checkpoint_to_pytorch.py, for more recent and popular architectures (e.g. mt5)? ### Motivation It would really accelerate setting up new feature/model development using pretrained models available on the Hub. ### Your contribution If anyone can give feedback, I am happy to share the resulting conversion script.
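For context, such conversion scripts generally follow the same load-rename-save shape; the sketch below uses a made-up key mapping and a placeholder checkpoint path, and is not an actual mt5 conversion:

```python
import torch
from transformers import MT5Config, MT5ForConditionalGeneration

original_state_dict = torch.load("original_checkpoint.pt", map_location="cpu")  # placeholder path

config = MT5Config.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration(config)

# Hypothetical key renaming; a real script maps each original parameter name
# onto the corresponding transformers parameter name.
renamed = {k.replace("encoder_layers.", "encoder.block."): v for k, v in original_state_dict.items()}

missing, unexpected = model.load_state_dict(renamed, strict=False)
print("missing:", missing, "unexpected:", unexpected)

model.save_pretrained("./converted-mt5")
```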
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24061/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24060
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24060/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24060/comments
https://api.github.com/repos/huggingface/transformers/issues/24060/events
https://github.com/huggingface/transformers/issues/24060
1,744,484,875
I_kwDOCUB6oc5n-roL
24,060
Empty prediction masks after switching from transformers 4.26.1 to transformers 4.29.0
{ "login": "alzaia", "id": 14980394, "node_id": "MDQ6VXNlcjE0OTgwMzk0", "avatar_url": "https://avatars.githubusercontent.com/u/14980394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alzaia", "html_url": "https://github.com/alzaia", "followers_url": "https://api.github.com/users/alzaia/followers", "following_url": "https://api.github.com/users/alzaia/following{/other_user}", "gists_url": "https://api.github.com/users/alzaia/gists{/gist_id}", "starred_url": "https://api.github.com/users/alzaia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alzaia/subscriptions", "organizations_url": "https://api.github.com/users/alzaia/orgs", "repos_url": "https://api.github.com/users/alzaia/repos", "events_url": "https://api.github.com/users/alzaia/events{/privacy}", "received_events_url": "https://api.github.com/users/alzaia/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[ "Hi @alzaia, thanks for reporting this issue. \r\n\r\nTo help us dig into the problem, we need to be able to reproduce the issue. Could you share a minimal snippet that we could use to compare the versions? Specifically, a model checkpoint, any processing logic to produce `sample` and any additional information which might affect the model e.g. `<args_of_my_pytorch_lightning_model>`. ", "Hello @amyeroberts , thanks for the reply. Unfortunately I cannot provide data or checkpoints since this is not a public project, but I can give you more information on the processing logic.\r\n\r\nOn the model side, I have a pretty standard pytorch lightning module that looks like this (example with MaskFormer):\r\n```python\r\n# Load pretrained model weights (I am using the `facebook/maskformer-swin-tiny-coco` weights here)\r\nself.model = MaskFormerForInstanceSegmentation.from_pretrained(args)\r\n# Load image processor\r\nself.img_processor = MaskFormerImageProcessor.from_pretrained(<name_of_pretrained_model>)\r\n# In the forward method\r\noutputs = self.model(pixel_values=pixel_values)\r\n# Then calling the right post-processing method from self.img_processor to reformat the output for my needs\r\n```\r\nOn the data side, my `sample` is a standard pre-processed object with keys `['image', 'mask', 'pixel_values', 'pixel_mask', 'mask_labels', 'class_labels']`. I feed the `pixel_values` tensor in my forward, which is a tensor of shape `torch.Size([1, 3, 256, 256])`. \r\n\r\nDo you guys feel like this is enough information to try to reproduce it? It may be that some default arguments of the `img_processor` or the `from_pretrained` method of the model changed in the newest version? I will try to look more into that.\r\n\r\n\r\n\r\n\r\n\r\n", "@alzaia OK, I understand if you are unable to share e.g. weights. However, to be able to diagnose the model behaviour, it is necessary to know any changes to the default model architecture / behaviour. \r\n\r\n* Is `facebook/maskformer-swin-tiny-coco` the checkpoint being used for both the image processor and the model? \r\n* For `self.model = MaskFormerForInstanceSegmentation.from_pretrained(args)` - could you share `args`? Specifically, are there any settings which might affect the model's weights e.g. `load_in_8bit`, `device_map`? Are there any kwargs overidding the model config defaults e.g. `mask_feature_size`, `backbone_config` etc? \r\n* What format are the images being passed to the image processor e.g. PIL images? \r\n* Are any settings of the image processor being overriden in the processing call e.g. `do_resize=False`? \r\n* When you say `\"calling the right post-processing method\"` - which one is being used? Are the empty predictions being observed in the raw model outputs or after post processing? 
", "Thanks for the fast reply @amyeroberts, I can give more specific details on that.\r\n\r\n- Right, I am using the same checkpoint (`facebook/maskformer-swin-tiny-coco`) for both the model and the image processor here.\r\n- For the loading of the pretrained model, I am using the following:\r\n```python\r\n# Doing multiclass (8 classes) with my own label to id mapping dictionary (all args not specified here are using the default values of course):\r\nself.model = MaskFormerForInstanceSegmentation.from_pretrained(\r\n `facebook/maskformer-swin-tiny-coco`,\r\n num_labels=8,\r\n id2label=self.id2label,\r\n label2id=self.label2id,\r\n ignore_mismatched_sizes=True,\r\n )\r\n```\r\n- For the image processor, I am using, for example:\r\n\r\n```python\r\nself.img_processor = MaskFormerImageProcessor.from_pretrained(`facebook/maskformer-swin-tiny-coco`)\r\n self.img_processor.do_resize = True\r\n self.img_processor.size = 256\r\n```\r\n- For the post-processing method, it looks like this:\r\n```python\r\npost_processed_output = self.img_processor.post_process_semantic_segmentation(\r\n outputs,\r\n target_sizes=[\r\n [pixel_values.shape[2], pixel_values.shape[3]]\r\n for _ in range(pixel_values.shape[0])\r\n ],\r\n )\r\n```\r\nAfter running a prediction on the exact same sample to compare the `output` object returned by the model, I realize that it does not return the same logits. For instance, with `v4.26.1` I get an `outputs[\"masks_queries_logits\"]` that looks like this:\r\n```\r\ntensor([[[[-1.0812e+01, -1.1791e+01, -1.1831e+01, ..., -1.1767e+01,\r\n -1.2532e+01, -1.0802e+01],\r\n [-1.1643e+01, -1.1272e+01, -1.1423e+01, ..., -1.1962e+01,\r\n -1.2646e+01, -1.1662e+01],\r\n [-1.1235e+01, -1.0905e+01, -1.0670e+01, ..., -1.1819e+01,\r\n -1.2373e+01, -1.1361e+01],\r\n ...,\r\n```\r\nWhile with `v4.29` I get the following `outputs[\"masks_queries_logits\"]`:\r\n```\r\ntensor([[[[ 5.7369, 6.7416, 6.6841, ..., 4.9752, 5.1076, 4.8850],\r\n [ 7.2394, 8.4208, 8.4004, ..., 6.0630, 6.2035, 5.7476],\r\n [ 7.2572, 8.4786, 8.4989, ..., 5.8866, 6.0229, 5.6737],\r\n ...,\r\n```\r\nWhich seems to indicate that the post-processing is fine, the problem arises during the prediction using the model.\r\n\r\nIn terms of data, I do not do anything fancy, I do start with PIL images, but convert them to numpy arrays, and then process them with the right model processor:\r\n```python\r\nself.img_processor:\r\n processed_inputs = self.img_processor(\r\n images=image, segmentation_maps=mask, return_tensors=\"pt\"\r\n )\r\n```\r\nThanks for any insights to what may be causing the issue.\r\n\r\n\r\n\r\n\r\n\r\n", "@alzaia Thanks for the additional info, it really helps :) \r\n\r\nThere's a few things to note from the examples: \r\n\r\n**1. Model instantiation**\r\n\r\nIn the snippet: \r\n\r\n```python\r\nmodel = MaskFormerForInstanceSegmentation.from_pretrained(\r\n `facebook/maskformer-swin-tiny-coco`,\r\n num_labels=8,\r\n id2label=id2label,\r\n label2id=label2id,\r\n ignore_mismatched_sizes=True,\r\n)\r\n```\r\n\r\nwhen you change the number of prediction classes, the pretrained weights for the classification head will be thrown away, and new randomly initialized weights with the correct dimensions created. 
Creating the model you should see: \r\n\r\n```\r\nSome weights of MaskFormerForInstanceSegmentation were not initialized from the model checkpoint at facebook/maskformer-swin-tiny-coco and are newly initialized because the shapes did not match:\r\n- class_predictor.weight: found shape torch.Size([134, 256]) in the checkpoint and torch.Size([9, 256]) in the model instantiated\r\n- class_predictor.bias: found shape torch.Size([134]) in the checkpoint and torch.Size([9]) in the model instantiated\r\n- criterion.empty_weight: found shape torch.Size([134]) in the checkpoint and torch.Size([9]) in the model instantiated\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nThis means that the output `output.class_queries_logits` will essentially be nonsense until the model has been finetuned on the downstream task. This is the case for both transformers 4.26.1 and the most recent version. This then has an effect on the segmentation masks post processing, [specifically here](https://github.com/huggingface/transformers/blob/deff5979fee1f989d26e4946c92a5c35ce695af8/src/transformers/models/maskformer/image_processing_maskformer.py#LL988C9-L988C9).\r\n\r\nWhen I ran the following script multiple times with transformers==4.26.1, I saw many different predicted (sometimes empty) masks:\r\n\r\n```python\r\nimport requests\r\n\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\nimport torch\r\nfrom PIL import Image\r\n\r\nimport transformers\r\nfrom transformers import MaskFormerForInstanceSegmentation, MaskFormerImageProcessor\r\n\r\nCHECKPOINT = \"facebook/maskformer-swin-tiny-coco\"\r\n\r\nid2label = {i: str(i) for i in range(8)}\r\nlabel2id = {str(i): i for i in range(8)}\r\n\r\nmodel = MaskFormerForInstanceSegmentation.from_pretrained(\r\n CHECKPOINT, num_labels=8, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True,\r\n)\r\n\r\nimage_processor = MaskFormerImageProcessor.from_pretrained(\r\n CHECKPOINT,\r\n size={\"shortest_edge\": 256, \"longest_edge\": 1333}\r\n)\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\ninputs = image_processor(images=image, return_tensors=\"pt\")\r\ntarget_sizes = [[pv.shape[-2], pv.shape[-1]] for pv in inputs[\"pixel_values\"]]\r\n\r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\nsegmentation_masks = image_processor.post_process_semantic_segmentation(\r\n outputs, target_sizes\r\n)\r\n\r\nplt.imshow(segmentation_masks)\r\nplt.show()\r\n```\r\n\r\nSome examples from v.4.26.1:\r\n![maskformer_4_26_1__1](https://github.com/huggingface/transformers/assets/22614925/205d9a3e-7bc8-4df8-af53-54010b3f75ab)\r\n![maskformer_4_26_1__2](https://github.com/huggingface/transformers/assets/22614925/af572f31-f8bb-4125-860b-9171180a5e35)\r\n![maskformer_4_26_1__3](https://github.com/huggingface/transformers/assets/22614925/522abca2-767b-4a78-8930-6e0b749f2db3)\r\n\r\nI observed the same for the most recent release, 4.30.1. and 4.29.2.\r\n\r\nCould you try running this in your environment to see if you observe the same behaviour? This way we can try and pin down the differences between our respective environments. \r\n\r\n**2. Image Processor**\r\n\r\nThe `size` parameter for the image processors is now a dictionary. Although it should still work because of efforts to create backwards compatibility, the equivalent dictionary should be used to change the behaviour. 
Note: this can also be set in the `from_pretrained` call:\r\n\r\n```python\r\nimage_processor = MaskFormerImageProcessor.from_pretrained(\r\n CHECKPOINT,\r\n size={\"shortest_edge\": 256, \"longest_edge\": 1333}\r\n)\r\n```\r\n\r\n**3. Mask Queries Logits**\r\n\r\nThis is interesting - if I save out `outputs.mask_queries_logits` from a run in v4.30.1/v4.29.2 and v.4.26.1 there's 0 difference. However, if I find the largest absolute difference for `outputs_class_queries_logits` between the two versions, it's typically ~3. This will be due to the randomly initialized head.\r\n\r\n**4. Image processing**\r\n\r\nYou don't need to convert to numpy images before passing to the image processor, you can pass in PIL images directly :) \r\n \r\n\r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
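A minimal sketch related to the MaskFormer discussion above: fixing the torch seed before `from_pretrained` makes the randomly initialized classification head reproducible within one environment, so `class_queries_logits` can be diffed meaningfully across transformers installations, while `masks_queries_logits` (produced by pretrained weights) should already match. The checkpoint name and 8-class setup come from the thread; the dummy input and everything else are assumptions for illustration only.

```python
import torch
from transformers import MaskFormerForInstanceSegmentation

CHECKPOINT = "facebook/maskformer-swin-tiny-coco"  # checkpoint used in the thread

# Fix the RNG so the newly initialized classification head gets the same
# weights on every run of this script.
torch.manual_seed(0)

id2label = {i: str(i) for i in range(8)}  # assumed 8-class setup, as in the issue
model = MaskFormerForInstanceSegmentation.from_pretrained(
    CHECKPOINT,
    num_labels=8,
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
    ignore_mismatched_sizes=True,
)
model.eval()

# Dummy batch standing in for a processed image (3 x 256 x 256, divisible by 32).
pixel_values = torch.zeros(1, 3, 256, 256)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

# Save both tensors and compare them between installations:
# masks_queries_logits comes from pretrained weights and should match,
# class_queries_logits depends on the (seeded) random head above.
torch.save(
    {
        "masks": outputs.masks_queries_logits.cpu(),
        "classes": outputs.class_queries_logits.cpu(),
    },
    "maskformer_logits.pt",
)
```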
1,686
1,689
1,689
NONE
null
### System Info Hey guys, I have been working on the 4.26.1 version perfectly fine, but I wanted to switch to 4.29.0 to make use of the latest models (such as SAM). The issue I encounter is that when running my prediction code for models such as MaskFormer and Mask2Former, my outputs between versions 4.26.1 and 4.29.0 do not match at all (4.26.1 works fine for all models, while 4.29.0 gives me empty or wrong predictions). Anything I am missing here? - `transformers` version: 4.29.0 - Platform: Linux-4.15.0-194-generic-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am running a very simple prediction pipeline that looks like this: model instance: ```python mask2former = Mask2Former(<args_of_my_pytorch_lightning_model>) checkpoint = torch.load(<path_to_weights>) mask2former.load_state_dict(checkpoint['state_dict']) mask2former.eval() ``` data module instance: ```python dm.setup(stage="test") dl = dm.test_dataloader() ``` simple prediction loop ```python for sample in dl: pred_mask2former = mask2former.forward(sample["pixel_values"]) ``` ### Expected behavior The outputs produced by the model do not match between versions `4.26.1` and `4.29.0` of the `transformers` package. I get the expected behavior with `4.26.1`, but empty or (very) wrong predictions with `4.29.0` on the exact same data (using the same code/models/...).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24060/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24059
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24059/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24059/comments
https://api.github.com/repos/huggingface/transformers/issues/24059/events
https://github.com/huggingface/transformers/issues/24059
1,744,462,836
I_kwDOCUB6oc5n-mP0
24,059
Error with pip install in Colab Notebook
{ "login": "sonnyarora", "id": 13602077, "node_id": "MDQ6VXNlcjEzNjAyMDc3", "avatar_url": "https://avatars.githubusercontent.com/u/13602077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sonnyarora", "html_url": "https://github.com/sonnyarora", "followers_url": "https://api.github.com/users/sonnyarora/followers", "following_url": "https://api.github.com/users/sonnyarora/following{/other_user}", "gists_url": "https://api.github.com/users/sonnyarora/gists{/gist_id}", "starred_url": "https://api.github.com/users/sonnyarora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sonnyarora/subscriptions", "organizations_url": "https://api.github.com/users/sonnyarora/orgs", "repos_url": "https://api.github.com/users/sonnyarora/repos", "events_url": "https://api.github.com/users/sonnyarora/events{/privacy}", "received_events_url": "https://api.github.com/users/sonnyarora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you so much for flagging, just fixed the notebook. Closing the issue, feel free to re-open it if you see more issues!" ]
1,686
1,686
1,686
NONE
null
### System Info Google Colab ### Who can help? @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1) Copy the colab notebook from the page https://huggingface.co/docs/transformers/perf_infer_gpu_one linked at https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing. 2) Run First Cell to pip install packages. I get the following error: ERROR: Could not find a version that satisfies the requirement bitsandbyte (from versions: none) ERROR: No matching distribution found for bitsandbyte ### Expected behavior I expect the cell to run without error. When I replace ``` !pip install --quiet bitsandbyte ``` with ``` !pip install --quiet bitsandbytes ``` I get the desired behavior.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24059/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24058
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24058/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24058/comments
https://api.github.com/repos/huggingface/transformers/issues/24058/events
https://github.com/huggingface/transformers/pull/24058
1,744,337,274
PR_kwDOCUB6oc5SVKv3
24,058
Add AzureOpenAiAgent
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24058). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? This PR adds an AzureOpenAiAgent, superseding #23355 since the contributor there does not seem to want to finish the PR. Fixes #23324
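A hedged usage sketch for the agent this PR adds; the constructor argument names (`deployment_id`, `api_key`, `resource_name`) are assumptions based on typical Azure OpenAI configuration and should be checked against the merged documentation.

```python
# Illustrative only -- constructor argument names are assumptions, verify against the docs.
import os

from transformers import AzureOpenAiAgent

agent = AzureOpenAiAgent(
    deployment_id="text-davinci-003",            # assumed: name of the Azure deployment
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # assumed: API key for the Azure resource
    resource_name="my-azure-openai-resource",    # assumed: Azure resource hosting the deployment
)

# Agents share the base Agent.run() interface used by the other agents.
result = agent.run("Summarize the following text: Transformers agents can call tools.")
print(result)
```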
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24058/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24058", "html_url": "https://github.com/huggingface/transformers/pull/24058", "diff_url": "https://github.com/huggingface/transformers/pull/24058.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24058.patch", "merged_at": 1686170094000 }
https://api.github.com/repos/huggingface/transformers/issues/24057
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24057/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24057/comments
https://api.github.com/repos/huggingface/transformers/issues/24057/events
https://github.com/huggingface/transformers/issues/24057
1,744,337,055
I_kwDOCUB6oc5n-Hif
24,057
CUDA OOM error when loading sharded checkpoint
{ "login": "abarbet", "id": 111083160, "node_id": "U_kgDOBp7-mA", "avatar_url": "https://avatars.githubusercontent.com/u/111083160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbet", "html_url": "https://github.com/abarbet", "followers_url": "https://api.github.com/users/abarbet/followers", "following_url": "https://api.github.com/users/abarbet/following{/other_user}", "gists_url": "https://api.github.com/users/abarbet/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbet/subscriptions", "organizations_url": "https://api.github.com/users/abarbet/orgs", "repos_url": "https://api.github.com/users/abarbet/repos", "events_url": "https://api.github.com/users/abarbet/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbet/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @pacman100 ", "Hello, looking into this. In the meantime, could you try the main branch of transformers and accelerate and let us know if that works as expected? \r\n", "Hi @pacman100, thanks for taking a look! When you say `main` branch, do you mean bumping the versions of `transformers` and `accelerate`?", "Hello, can you do `pip install git+https://github.com/huggingface/transformers` and `pip install git+https://github.com/huggingface/accelerate`? The above PR adds functionality for `SHARDED_STATE_DICT`.\r\n\r\nUse Accelerate launcher with Trainer. More info here: [Using Accelerate Launcher with Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer). Choose state_dict_type as `SHARDED_STATE_DICT` when answering questionnaire post running the command `accelerate config`.\r\n\r\nPlease let us know if this solves the issue.", "Thank you for this update! We utilized the new `SHARDED_STATE_DICT` functionality, but it looks like there may have been a small typo in the `trainer.py` code where the `full_osd` variable isn't saved in fsdp mode. I proposed a fix on the PR below, which allowed me to successfully save a model locally in fsdp mode:\r\nhttps://github.com/huggingface/transformers/pull/24328" ]
1,686
1,687
1,687
NONE
null
### System Info * `transformers` version: 4.27.1 * Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 * Python version: 3.9.12 * Huggingface_hub version: 0.13.2 * PyTorch version (GPU?): 2.0.0+cu117 (True) * Tensorflow version (GPU?): not installed (NA) * Flax version (CPU?/GPU?/TPU?): not installed (NA) * Jax version: not installed * JaxLib version: not installed * Using GPU in script?: Yes * Using distributed or parallel set-up in script?: Yes, parallel (accelerate auto-mapping) ### Who can help? @sgugger @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This is a port-over from an issue I wrote on the PyTorch forums [here](https://discuss.pytorch.org/t/cuda-oom-error-when-loading-sharded-checkpoint/180710). I received some help from the folks on the PyTorch side, but unfortunately, they seem to be suggesting that there may be an error in the way `Trainer` saves FSDP models. I will rehash the issue here with the additional context: > We fine-tuned Stability’s StableLM-7b using Huggingface’s Trainer API (with FSDP) and then saved the resulting checkpoints in the sharded format that is typical for large language models. Quite surprisingly, however, attempting to load the model for inference leads to a strange error when loading one of the checkpoints (`Unable to load weights from pytorch checkpoint file`) > > We took some further investigative steps by making a simple `torch.load` call on the problem shard, and got a CUDA OOM error. The exceedingly strange thing about this OOM error is that we are working with a node with 8xA100s (80GB), and the given state dict is only 171kB (comprising only 7 layers of the model). So, you can imagine seeing the following error was quite a shock: > > ``` > torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 29.31 GiB (GPU 0; 79.19 GiB total capacity; 55.76 GiB already allocated; 22.48 GiB free; 55.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF > ``` > > After looking into this further, I discovered a few threads discussing this issue, like [this one 3](https://discuss.pytorch.org/t/cuda-error-out-of-memory-when-load-models/38011), and attempted some of the fixes, namely loading the state dict on CPU first. After doing so, I received the following error: > `RuntimeError: Trying to resize storage that is not resizable` > > So it seems that approach is out of the question. As I previously said, the strange thing here is that the first two shards load without issue, while the third and fourth cannot be loaded. Additionally, nothing seems particularly out of place in the shard-layer mapping JSON. I am stumped here. The folks at PyTorch let us know that with FSDP models should _not_ be saved using `torch.save` and provided an example script of how they should be saved [here](https://github.com/pytorch/pytorch/blob/e71ab214226af1f9dbded944e939c6447e0e8f09/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py#L59). Does `Trainer` properly handle these larger models, or is there an extra step we should be taking here? 
### Expected behavior Typically, I would expect `save_model` to process the model shards in a way that allows them to be reloaded without issue using `from_pretrained` along with `accelerate`'s auto device mapping.
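Related to the PyTorch guidance referenced above, here is a minimal sketch, assuming an already FSDP-wrapped model inside a distributed run, of gathering a full (unflattened) state dict on rank 0 before saving — the pattern the linked PyTorch example recommends instead of `torch.save` on per-rank sharded state.

```python
# Sketch: gather a full, CPU-resident state dict from an FSDP-wrapped model
# on rank 0, then save it with the usual Hugging Face utilities.
from torch.distributed.fsdp import FullStateDictConfig, StateDictType
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def gather_full_state_dict(fsdp_model):
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    with FSDP.state_dict_type(fsdp_model, StateDictType.FULL_STATE_DICT, cfg):
        # Full tensors are materialized on CPU and only on rank 0.
        return fsdp_model.state_dict()


# On rank 0 (fsdp_model / unwrapped_model / output_dir are placeholders):
# state_dict = gather_full_state_dict(fsdp_model)
# unwrapped_model.save_pretrained(output_dir, state_dict=state_dict)
```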
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24057/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24056
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24056/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24056/comments
https://api.github.com/repos/huggingface/transformers/issues/24056/events
https://github.com/huggingface/transformers/issues/24056
1,744,315,948
I_kwDOCUB6oc5n-CYs
24,056
Multi GPU inference on RTX 4090 fails with RuntimeError: CUDA error: device-side assert triggered (Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.)
{ "login": "kunaldeo", "id": 441799, "node_id": "MDQ6VXNlcjQ0MTc5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/441799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kunaldeo", "html_url": "https://github.com/kunaldeo", "followers_url": "https://api.github.com/users/kunaldeo/followers", "following_url": "https://api.github.com/users/kunaldeo/following{/other_user}", "gists_url": "https://api.github.com/users/kunaldeo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kunaldeo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kunaldeo/subscriptions", "organizations_url": "https://api.github.com/users/kunaldeo/orgs", "repos_url": "https://api.github.com/users/kunaldeo/repos", "events_url": "https://api.github.com/users/kunaldeo/events{/privacy}", "received_events_url": "https://api.github.com/users/kunaldeo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil @ArthurZucker @younesbelkada ", "This torch error usually comes from bad vocabulary ids being sent to the model.\r\n\r\nWhat's odd is that is seems triggered by `device_map=\"auto\"` (it's the only modification with the working code, right ?)\r\nWhich shouldn't torch this in any way. I doubt the actual cause is the referered line by the stacktrace, even despite CUDA_LAUNCH_BLOCKING, but I could be wrong (and that would be super puzzling indeed).\r\n\r\nNote: Afaik, if you launch on multiple GPUs in this way, you would be in PP (PIpeline Parallelism) and not in TP (TensorParallelism), TP being much better at getting better latencies (PP will get you larger batch sizes).\r\nhttps://github.com/huggingface/text-generation-inference might be better to reduce latencies by using multiple GPUs (NVLink presence will definitely be a factor)", "Yes device_map=\"auto\" is the only modification. \r\n\r\nThe other thing is that RTX 4090 doesnt support NVLink.", "Could be linked to `accelerate` here. At least I don't have good ideas to what might be happening here.", "Re `accelerate` - @pacman100 would you have any idea what might be causing this issue? ", "Hello, as this is related to the big model inference/device_map, @sgugger might have a better idea wrt this issue.", "Please post a full reproducer. I don't have access to\"\r\n- your local folder `/models/wizard-vicuna-13B-HF`\r\n- the `get_prompt` function\r\n- the `parse_text` function\r\n\r\nUsing `\"huggyllama/llama-13b\"` on my side and removing `get_prompt` and `parse_text` works fine on two GPUs.", "I have removed all the other code and it is just now the following. I am still getting the same error.\r\n\r\n```python\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig, pipeline\r\nimport torch\r\nimport os\r\n\r\n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1\"\r\nmodel_name = \"huggyllama/llama-13b\"\r\ntokenizer = LlamaTokenizer.from_pretrained(model_name)\r\nmodel = LlamaForCausalLM.from_pretrained(model_name,\r\n device_map='auto',\r\n torch_dtype=torch.float16,\r\n )\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model, \r\n tokenizer=tokenizer, \r\n max_length=512,\r\n temperature=0.7,\r\n top_p=0.95,\r\n repetition_penalty=1.15\r\n)\r\nimport os\r\nos.environ[\"CUDA_LAUNCH_BLOCKING\"] = \"1\"\r\nraw_output = pipe(\"Hi how are you\") \r\n```\r\nError:\r\n```\r\n[0,0,0], thread: [40,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [41,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [43,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], 
thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [53,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [54,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [55,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [56,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [58,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [59,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [62,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [63,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[5], line 1\r\n----> 1 raw_output = pipe(\"Hi how are you\")\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:201, in TextGenerationPipeline.__call__(self, text_inputs, **kwargs)\r\n 160 def __call__(self, text_inputs, **kwargs):\r\n 161 \"\"\"\r\n 162 Complete the prompt(s) given as inputs.\r\n 163 \r\n (...)\r\n 199 ids of the generated text.\r\n 200 \"\"\"\r\n--> 201 return super().__call__(text_inputs, **kwargs)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1118, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)\r\n 1110 return next(\r\n 1111 iter(\r\n 1112 self.get_iterator(\r\n (...)\r\n 1115 )\r\n 1116 )\r\n 1117 else:\r\n-> 1118 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1125, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)\r\n 1123 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):\r\n 1124 model_inputs = self.preprocess(inputs, **preprocess_params)\r\n-> 1125 model_outputs = self.forward(model_inputs, **forward_params)\r\n 1126 outputs = self.postprocess(model_outputs, **postprocess_params)\r\n 1127 return outputs\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py:1024, in Pipeline.forward(self, model_inputs, **forward_params)\r\n 1022 with inference_context():\r\n 1023 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)\r\n-> 1024 model_outputs = self._forward(model_inputs, **forward_params)\r\n 1025 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device(\"cpu\"))\r\n 1026 else:\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:263, in TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs)\r\n 260 generate_kwargs[\"min_length\"] += prefix_length\r\n 262 # BS x SL\r\n--> 263 generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n 264 out_b = generated_sequence.shape[0]\r\n 265 if self.framework == \"pt\":\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile 
~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1512 raise ValueError(\r\n 1513 \"num_return_sequences has to be 1 when doing greedy search, \"\r\n 1514 f\"but is {generation_config.num_return_sequences}.\"\r\n 1515 )\r\n 1517 # 11. run greedy search\r\n-> 1518 return self.greedy_search(\r\n 1519 input_ids,\r\n 1520 logits_processor=logits_processor,\r\n 1521 stopping_criteria=stopping_criteria,\r\n 1522 pad_token_id=generation_config.pad_token_id,\r\n 1523 eos_token_id=generation_config.eos_token_id,\r\n 1524 output_scores=generation_config.output_scores,\r\n 1525 return_dict_in_generate=generation_config.return_dict_in_generate,\r\n 1526 synced_gpus=synced_gpus,\r\n 1527 streamer=streamer,\r\n 1528 **model_kwargs,\r\n 1529 )\r\n 1531 elif is_contrastive_search_gen_mode:\r\n 1532 if generation_config.num_return_sequences > 1:\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py:2335, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2332 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n 2334 # forward pass to get next token\r\n-> 2335 outputs = self(\r\n 2336 **model_inputs,\r\n 2337 return_dict=True,\r\n 2338 output_attentions=output_attentions,\r\n 2339 output_hidden_states=output_hidden_states,\r\n 2340 )\r\n 2342 if synced_gpus and this_peer_finished:\r\n 2343 continue # don't waste resources running the code we don't need\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 685 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n 687 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)\r\n--> 688 outputs = self.model(\r\n 689 input_ids=input_ids,\r\n 690 attention_mask=attention_mask,\r\n 691 position_ids=position_ids,\r\n 692 past_key_values=past_key_values,\r\n 693 inputs_embeds=inputs_embeds,\r\n 694 
use_cache=use_cache,\r\n 695 output_attentions=output_attentions,\r\n 696 output_hidden_states=output_hidden_states,\r\n 697 return_dict=return_dict,\r\n 698 )\r\n 700 hidden_states = outputs[0]\r\n 701 logits = self.lm_head(hidden_states)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:578, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 570 layer_outputs = torch.utils.checkpoint.checkpoint(\r\n 571 create_custom_forward(decoder_layer),\r\n 572 hidden_states,\r\n (...)\r\n 575 None,\r\n 576 )\r\n 577 else:\r\n--> 578 layer_outputs = decoder_layer(\r\n 579 hidden_states,\r\n 580 attention_mask=attention_mask,\r\n 581 position_ids=position_ids,\r\n 582 past_key_value=past_key_value,\r\n 583 output_attentions=output_attentions,\r\n 584 use_cache=use_cache,\r\n 585 )\r\n 587 hidden_states = layer_outputs[0]\r\n 589 if use_cache:\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:292, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)\r\n 289 hidden_states = self.input_layernorm(hidden_states)\r\n 291 # Self Attention\r\n--> 292 hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n 293 hidden_states=hidden_states,\r\n 294 attention_mask=attention_mask,\r\n 295 position_ids=position_ids,\r\n 296 past_key_value=past_key_value,\r\n 297 output_attentions=output_attentions,\r\n 298 use_cache=use_cache,\r\n 299 )\r\n 300 hidden_states = residual + hidden_states\r\n 302 # Fully Connected\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, 
**kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:227, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)\r\n 222 raise ValueError(\r\n 223 f\"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}\"\r\n 224 )\r\n 225 attn_weights = attn_weights + attention_mask\r\n 226 attn_weights = torch.max(\r\n--> 227 attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)\r\n 228 )\r\n 230 # upcast attention to fp32\r\n 231 attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)\r\n\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\nAdditional info \r\n```\r\nmodel.hf_device_map\r\n```\r\n{'model.embed_tokens': 0,\r\n 'model.layers.0': 0,\r\n 'model.layers.1': 0,\r\n 'model.layers.2': 0,\r\n 'model.layers.3': 0,\r\n 'model.layers.4': 0,\r\n 'model.layers.5': 0,\r\n 'model.layers.6': 0,\r\n 'model.layers.7': 0,\r\n 'model.layers.8': 0,\r\n 'model.layers.9': 0,\r\n 'model.layers.10': 0,\r\n 'model.layers.11': 0,\r\n 'model.layers.12': 0,\r\n 'model.layers.13': 1,\r\n 'model.layers.14': 1,\r\n 'model.layers.15': 1,\r\n 'model.layers.16': 1,\r\n 'model.layers.17': 1,\r\n 'model.layers.18': 1,\r\n 'model.layers.19': 1,\r\n 'model.layers.20': 1,\r\n 'model.layers.21': 1,\r\n 'model.layers.22': 1,\r\n 'model.layers.23': 1,\r\n 'model.layers.24': 1,\r\n 'model.layers.25': 1,\r\n 'model.layers.26': 1,\r\n 'model.layers.27': 2,\r\n 'model.layers.28': 2,\r\n 'model.layers.29': 2,\r\n 'model.layers.30': 2,\r\n 'model.layers.31': 2,\r\n 'model.layers.32': 2,\r\n 'model.layers.33': 2,\r\n 'model.layers.34': 2,\r\n 'model.layers.35': 2,\r\n 'model.layers.36': 2,\r\n 'model.layers.37': 2,\r\n 'model.layers.38': 2,\r\n 'model.layers.39': 2,\r\n 'model.norm': 2,\r\n 'lm_head': 2}", "This issue is not happening after transformers update `4.30.2`.", "> This issue is not happening after transformers update `4.30.2`.\r\n\r\nHi @kunaldeo \r\nBut when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal ", "> > This issue is not happening after transformers update `4.30.2`.\r\n> \r\n> Hi @kunaldeo But when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal\r\n\r\n@Xnhyacinth were you able to solve this 
by any chance? I'm getting the same error with transformers `4.33.3`\r\n(my full case description is [here](https://github.com/huggingface/transformers/issues/22546#issuecomment-1743578707))\r\n", "> > > This issue is not happening after transformers update `4.30.2`.\r\n> > \r\n> > \r\n> > Hi @kunaldeo But when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal\r\n> \r\n> @Xnhyacinth were you able to solve this by any chance? I'm getting the same error with transformers `4.33.3` (my full case description is [here](https://github.com/huggingface/transformers/issues/22546#issuecomment-1743578707))\r\n\r\n@kerenganon @Xnhyacinth - Were either of you able to solve this? I'm getting the same error. It began when I upgraded the CUDA Driver Version from 11.? to 12.2 and updated the NVIDIA Driver Version to 535.113.01 - so perhaps related to driver versions in some way. Prior to upgrading the drivers I had no issues. After the upgrade, I get this error when I attempt to run inference using Llama models across multiple GPUs. The problem doesn't occur if I just use a single GPU. I haven't been able to see any improvement using changes to tokenizer eos or pad token_ids (as suggested elsewhere). The problem seems related to using device_map=\"auto\" (or similar). I'm using transformers 4.31.0, so it doesn't seem to be fixed after 4.30.2 for me.", "Encountered same issue on 4.33.2, tried other 13B models, no one could run pass. nvidia driver is 545.23.06 and CUDA is 12.3", "> > > > This issue is not happening after transformers update `4.30.2`.\r\n> > > \r\n> > > \r\n> > > Hi @kunaldeo But when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal\r\n> > \r\n> > \r\n> > @Xnhyacinth were you able to solve this by any chance? I'm getting the same error with transformers `4.33.3` (my full case description is [here](https://github.com/huggingface/transformers/issues/22546#issuecomment-1743578707))\r\n> \r\n> @kerenganon @Xnhyacinth - Were either of you able to solve this? I'm getting the same error. It began when I upgraded the CUDA Driver Version from 11.? to 12.2 and updated the NVIDIA Driver Version to 535.113.01 - so perhaps related to driver versions in some way. Prior to upgrading the drivers I had no issues. After the upgrade, I get this error when I attempt to run inference using Llama models across multiple GPUs. The problem doesn't occur if I just use a single GPU. I haven't been able to see any improvement using changes to tokenizer eos or pad token_ids (as suggested elsewhere). The problem seems related to using device_map=\"auto\" (or similar). 
I'm using transformers 4.31.0, so it doesn't seem to be fixed after 4.30.2 for me.\r\n\r\nYes, the problem could be a driver issue, details can be viewed here [https://github.com/huggingface/transformers/issues/26096](https://github.com/huggingface/transformers/issues/26096) and [https://discuss.pytorch.org/t/problem-transfering-tensor-between-gpus/60263/10](https://discuss.pytorch.org/t/problem-transfering-tensor-between-gpus/60263/10), I think you need to change the driver version or fix it with docker.", "> > > This issue is not happening after transformers update `4.30.2`.\r\n> > \r\n> > \r\n> > Hi @kunaldeo But when I run your code above, it still reports the same error, and I checked that the transformer version is 4.30.2, maybe multi-GPU error, when I use a single GPU, it is normal\r\n> \r\n> @Xnhyacinth were you able to solve this by any chance? I'm getting the same error with transformers `4.33.3` (my full case description is [here](https://github.com/huggingface/transformers/issues/22546#issuecomment-1743578707))\r\n\r\nMaybe you need to check the error like [https://github.com/huggingface/transformers/issues/26096](https://github.com/huggingface/transformers/issues/26096), and I think also the driver issue, so change the driver version.", "I tested driver 535 + cuda 12.1 / 12.3, driver 520 + cuda 11.8. Both not working. Got the same `../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [64,0,0] Assertion -sizes[i] <= index && index < sizes[i] && \"index out of bounds\" failed.` \r\nSomeone please give a help. Thanks", "> I tested driver 535 + cuda 12.1 / 12.3, driver 520 + cuda 11.8. Both not working. Got the same `../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [64,0,0] Assertion -sizes[i] <= index && index < sizes[i] && \"index out of bounds\" failed.` Someone please give a help. Thanks\r\n\r\n@abcbdf if you just want to inference on 4090 with multi gpus, maybe you can try vllm with tensor parallel, which solved my problem.", "> > I tested driver 535 + cuda 12.1 / 12.3, driver 520 + cuda 11.8. Both not working. Got the same `../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [64,0,0] Assertion -sizes[i] <= index && index < sizes[i] && \"index out of bounds\" failed.` Someone please give a help. Thanks\r\n> \r\n> @abcbdf if you just want to inference on 4090 with multi gpus, maybe you can try vllm with tensor parallel, which solved my problem.\r\n\r\n@caseylai Thanks for your help. But I'm actually working on training with multi A6000. I found that the problem was also shown in inference", "> I tested driver 535 + cuda 12.1 / 12.3, driver 520 + cuda 11.8. Both not working. Got the same `../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [64,0,0] Assertion -sizes[i] <= index && index < sizes[i] && \"index out of bounds\" failed.` Someone please give a help. Thanks\r\n\r\nmaybe you can try driver 530 + cuda 12.1 or 530 + cuda 11.8", "@Xnhyacinth Thanks. I tried install driver 530.30.02 + cuda 11.8. It still can't work on multi-gpu. 
This is quite weird because I have another server with basically same environments but it could work on multi-gpu inference/training.\r\nWorking server:\r\ndriver 530.30.02\r\n```\r\npython\r\nPython 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import torch\r\n>>> print(torch.__config__.show())\r\nPyTorch built with:\r\n - GCC 9.3\r\n - C++ Version: 201703\r\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 11.7\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\r\n - CuDNN 8.9.2 (built against CUDA 12.1)\r\n - Built with CuDNN 8.5\r\n - Magma 2.6.1\r\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n ```\r\n \r\n Not working server:\r\n driver 530.30.02\r\n```\r\n import torch\r\n>>> print(torch.__config__.show())\r\nPyTorch built with:\r\n - GCC 9.3\r\n - C++ Version: 201703\r\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX512\r\n - CUDA Runtime 11.8\r\n - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90\r\n - CuDNN 8.7\r\n - Magma 2.6.1\r\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n\r\n>>> torch.__version__\r\n'2.1.0+cu118\r\n```\r\n \r\nOn this not working server, I also tried create another conda environment with pytorch cuda 11.7:\r\n```\r\npython\r\nPython 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import torch\r\n>>> torch.__version__\r\n'2.0.0'\r\n>>> print(torch.__config__.show())\r\nPyTorch built with:\r\n - GCC 9.3\r\n - C++ Version: 201703\r\n - Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 11.7\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37\r\n - CuDNN 8.5\r\n - Magma 2.6.1\r\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n ```\r\n\r\n\r\nIt still raise error on multi-gpu inference.\r\n\r\nI'm wondering do I need to install cuda toolkit separately? Because pytorch uses its own cuda runtime library and I couldn't find any difference between install cuda or not", "Finally solved this by disabling ACS in bios, ref https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#pci-access-control-services-acs\r\n\r\nThis test is very helpful. 
https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-to-gpu-communication", "my code:\r\n```\r\nfrom flask import Flask, request, jsonify\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport time\r\nimport logging\r\nimport os\r\n#print the version of cuda being used\r\nprint(torch.version.cuda)\r\n\r\napp = Flask(__name__)\r\n\r\n# get directory of this file\r\ndir_path = os.path.dirname(os.path.realpath(__file__))\r\n\r\nmodellocation = \"/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0\")\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n modellocation,\r\n device_map=\"auto\"\r\n)\r\n\r\n\r\[email protected]_request\r\ndef start_timer():\r\n request.start_time = time.time()\r\n print(f\"Request made to LLM; starting timer!\")\r\n\r\n\r\[email protected]_request\r\ndef log_request(response):\r\n request_duration = time.time() - request.start_time\r\n print(f\"Request took {request_duration} seconds\")\r\n return response\r\n\r\n\r\[email protected]('/generate/chat/completions', methods=['POST'])\r\ndef generate_completions():\r\n data = request.get_json()\r\n conversation = data['messages']\r\n\r\n prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)\r\n\r\n inputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\r\n outputs = model.generate(**inputs, use_cache=True, max_length=4096)\r\n output_text = tokenizer.decode(outputs[0])\r\n\r\n return jsonify({'choices': [{'message': {'role': 'assistant', 'content': output_text}}]})\r\n\r\n\r\nif __name__ == '__main__':\r\n app.run(host='0.0.0.0', port=5000)\r\n```\r\n\r\nI have 2 RTX 3060's and i am able to run LLM's on One GPU but it wont work when i try to run them on 2 GPU's with the same error:\r\n```\r\n/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/generation/utils.py:1518: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. 
Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )\r\n warnings.warn(\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [1,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [2,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [3,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [4,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [5,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [6,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [7,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [8,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [9,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [10,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [11,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [12,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [13,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [14,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [15,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [16,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [17,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [18,0,0] 
Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [19,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [20,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [21,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [22,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [23,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [24,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [25,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [26,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [27,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [28,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\r\n```\r\n\r\nRunning Arch\r\n```\r\nNVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 \r\n\r\nCuda compilation tools, release 12.1, V12.1.105\r\nBuild cuda_12.1.r12.1/compiler.32688072_0\r\n```\r\n\r\nIt runs fine on single GPU, but when i try to run on multiple it does not like it. I have tried with oobabooga and vllm and none of them make a difference, it always fails. 
I have tried with many models of many sizes and types 7b 8x7b 13b 33b 34b, awq doesnt work, gptq doesnt work either.\r\n\r\nmy motherboard is a MSi MEG x570 with a AMD APU in it, and it has no option to disable ACS in the bios \r\n\r\n\r\nGPU comms test came back good (i think ):\r\n```\r\nCUDA_VISIBLE_DEVICES=0,1 ./p2pBandwidthLatencyTest levi@deuxbeast\r\n[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]\r\nDevice: 0, NVIDIA GeForce RTX 3060, pciBusID: 10, pciDeviceID: 0, pciDomainID:0\r\nDevice: 1, NVIDIA GeForce RTX 3060, pciBusID: 2d, pciDeviceID: 0, pciDomainID:0\r\nDevice=0 CAN Access Peer Device=1\r\nDevice=1 CAN Access Peer Device=0\r\n\r\n***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.\r\nSo you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.\r\n\r\nP2P Connectivity Matrix\r\n D\\D 0 1\r\n 0 1 1\r\n 1 1 1\r\nUnidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 331.46 3.17 \r\n 1 3.17 331.67 \r\nUnidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 331.74 2.93 \r\n 1 2.93 331.81 \r\nBidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 318.71 4.72 \r\n 1 4.70 332.59 \r\nBidirectional P2P=Enabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 318.81 2.93 \r\n 1 2.93 332.59 \r\nP2P=Disabled Latency Matrix (us)\r\n GPU 0 1 \r\n 0 1.41 13.24 \r\n 1 13.11 1.42 \r\n\r\n CPU 0 1 \r\n 0 2.35 6.27 \r\n 1 6.78 2.28 \r\nP2P=Enabled Latency (P2P Writes) Matrix (us)\r\n GPU 0 1 \r\n 0 1.42 1.14 \r\n 1 1.19 1.41 \r\n\r\n CPU 0 1 \r\n 0 2.34 1.89 \r\n 1 1.94 2.48 \r\n```\r\nLet me know what else you need me to do/run/compile or download to help get this fixed." ]
1,686
1,703
1,687
NONE
null
### System Info `transformers version`: 4.30.0.dev0 `Platform:` Linux 6.3.5-zen2-1-zen-x86_64-with-glibc2.37.3 on Arch `Python version`: 3.10.9 `PyTorch version (GPU)`: 2.0.1+cu118 (True) `peft version`: 0.4.0.dev0 `accelerate version`: 0.20.0.dev0 `bitsandbytes version`: 0.39.0 `nvidia driver version`: nvidia-dkms-530.41.03-1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following code Code: ```python from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig, pipeline import torch import os # os.environ["CUDA_VISIBLE_DEVICES"] = "0,1" model_name = "/models/wizard-vicuna-13B-HF" tokenizer = LlamaTokenizer.from_pretrained(model_name) model = LlamaForCausalLM.from_pretrained(model_name, device_map='auto', torch_dtype=torch.float16, ) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_length=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) import os os.environ["CUDA_LAUNCH_BLOCKING"] = "1" prompt = 'What are the difference between Llamas, Alpacas and Vicunas?' raw_output = pipe(get_prompt(prompt)) parse_text(raw_output) ``` While this code works fine on a single 4090 GPU. Loading any model for inference with 2 or 3 RTX 4090 is resulting in the following error: ``` /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [66,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [67,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [68,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [69,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [70,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [71,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [72,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [73,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [74,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1682343995026/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [3,0,0], thread: [75,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. --------many such lines---------- File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:227, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 222 raise ValueError( 223 f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" 224 ) 225 attn_weights = attn_weights + attention_mask 226 attn_weights = torch.max( --> 227 attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device) 228 ) 230 # upcast attention to fp32 231 attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) RuntimeError: CUDA error: device-side assert triggered Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Expected behavior Code does inference successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24056/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24055
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24055/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24055/comments
https://api.github.com/repos/huggingface/transformers/issues/24055/events
https://github.com/huggingface/transformers/pull/24055
1,744,277,646
PR_kwDOCUB6oc5SU9rs
24,055
bring back `filtered_test_list_cross_tests.txt`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? As discussed in [this comment](https://github.com/huggingface/transformers/pull/23737#discussion_r1220002876)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24055/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24055", "html_url": "https://github.com/huggingface/transformers/pull/24055", "diff_url": "https://github.com/huggingface/transformers/pull/24055.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24055.patch", "merged_at": 1686072925000 }
https://api.github.com/repos/huggingface/transformers/issues/24054
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24054/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24054/comments
https://api.github.com/repos/huggingface/transformers/issues/24054/events
https://github.com/huggingface/transformers/pull/24054
1,744,268,805
PR_kwDOCUB6oc5SU7zM
24,054
Oops, missed one
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Tried to be careful, missed one 🙃 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24054/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24054", "html_url": "https://github.com/huggingface/transformers/pull/24054", "diff_url": "https://github.com/huggingface/transformers/pull/24054.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24054.patch", "merged_at": 1686072619000 }
https://api.github.com/repos/huggingface/transformers/issues/24053
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24053/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24053/comments
https://api.github.com/repos/huggingface/transformers/issues/24053/events
https://github.com/huggingface/transformers/pull/24053
1,744,198,707
PR_kwDOCUB6oc5SUsju
24,053
Act on deprecations in Accelerate no_trainer examples
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? As title states, acts on deprecation that will be going through in this PR https://github.com/huggingface/accelerate/pull/1537 to avoid nightly failures Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24053/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24053", "html_url": "https://github.com/huggingface/transformers/pull/24053", "diff_url": "https://github.com/huggingface/transformers/pull/24053.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24053.patch", "merged_at": 1686071079000 }
https://api.github.com/repos/huggingface/transformers/issues/24052
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24052/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24052/comments
https://api.github.com/repos/huggingface/transformers/issues/24052/events
https://github.com/huggingface/transformers/pull/24052
1,744,174,401
PR_kwDOCUB6oc5SUnUn
24,052
Tiny fix for `check_self_hosted_runner.py`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24052). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? See comment in the change.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24052/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24052", "html_url": "https://github.com/huggingface/transformers/pull/24052", "diff_url": "https://github.com/huggingface/transformers/pull/24052.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24052.patch", "merged_at": 1686068263000 }
https://api.github.com/repos/huggingface/transformers/issues/24051
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24051/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24051/comments
https://api.github.com/repos/huggingface/transformers/issues/24051/events
https://github.com/huggingface/transformers/pull/24051
1,744,133,020
PR_kwDOCUB6oc5SUed5
24,051
Modification of one text example file should trigger said test
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? I realized while reviewing PRs like #23912 that a modification of a given text example file won't trigger the run of said test. This PR fixes that bug in the test fetcher and add some tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24051/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24051", "html_url": "https://github.com/huggingface/transformers/pull/24051", "diff_url": "https://github.com/huggingface/transformers/pull/24051.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24051.patch", "merged_at": 1686067376000 }
https://api.github.com/repos/huggingface/transformers/issues/24050
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24050/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24050/comments
https://api.github.com/repos/huggingface/transformers/issues/24050/events
https://github.com/huggingface/transformers/issues/24050
1,744,123,335
I_kwDOCUB6oc5n9TXH
24,050
RuntimeError: unscale_() has already been called on this optimizer since the last update().
{ "login": "diaojunxian", "id": 19700467, "node_id": "MDQ6VXNlcjE5NzAwNDY3", "avatar_url": "https://avatars.githubusercontent.com/u/19700467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/diaojunxian", "html_url": "https://github.com/diaojunxian", "followers_url": "https://api.github.com/users/diaojunxian/followers", "following_url": "https://api.github.com/users/diaojunxian/following{/other_user}", "gists_url": "https://api.github.com/users/diaojunxian/gists{/gist_id}", "starred_url": "https://api.github.com/users/diaojunxian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diaojunxian/subscriptions", "organizations_url": "https://api.github.com/users/diaojunxian/orgs", "repos_url": "https://api.github.com/users/diaojunxian/repos", "events_url": "https://api.github.com/users/diaojunxian/events{/privacy}", "received_events_url": "https://api.github.com/users/diaojunxian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you try restarting your runtime after installing the new version to see if that fixes it? CC @pacman100 ", "I'm following up this notebook: https://huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb\r\n\r\nand getting this dump when training:\r\n\r\n`File [/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1873](https://vscode-remote+ssh-002dremote.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1873), in Accelerator.clip_grad_norm_(self, parameters, max_norm, norm_type)\r\n 1869 elif self.distributed_type == DistributedType.DEEPSPEED:\r\n 1870 # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed\r\n 1871 # We cannot return the gradient norm because DeepSpeed does it.\r\n 1872 return None\r\n-> 1873 self.unscale_gradients()\r\n 1874 return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type)\r\n\r\nFile [/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1836](https://vscode-remote+ssh-002dremote.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/accelerate/accelerator.py:1836), in Accelerator.unscale_gradients(self, optimizer)\r\n 1834 while isinstance(opt, AcceleratedOptimizer):\r\n 1835 opt = opt.optimizer\r\n-> 1836 self.scaler.unscale_(opt)\r\n\r\nFile [/workspace/generative_models/.venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275](https://vscode-remote+ssh-002dremote-.vscode-resource.vscode-cdn.net/workspace/generative_models/.venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275), in GradScaler.unscale_(self, optimizer)\r\n 272 optimizer_state = self._per_optimizer_states[id(optimizer)]\r\n 274 if optimizer_state[\"stage\"] is OptState.UNSCALED:\r\n--> 275 raise RuntimeError(\"unscale_() has already been called on this optimizer since the last update().\")\r\n 276 elif optimizer_state[\"stage\"] is OptState.STEPPED:\r\n 277 raise RuntimeError(\"unscale_() is being called after step().\")\r\n\r\nRuntimeError: unscale_() has already been called on this optimizer since the last update().`\r\n\r\nThese are the libraries versions I have:\r\ntransformers @ git+https://github.com/huggingface/transformers.git@f1660d7e23d4432513fe060bde4f9b7b29f05204\r\npeft @ git+https://github.com/huggingface/peft.git@7fb5f90a38cb39a31396de7e638ead9ecea692af\r\naccelerate @ git+https://github.com/huggingface/accelerate.git@62357f218f72cce88b8e086cc372b15c119b590b\r\n\r\nI have restarted and followed (to the best of my knowledge) the guidance to correct this. @pacman100 \r\n\r\nThank you!\r\n\r\n", "I am getting this as well.\r\nTried restarting the notebook but that doesn't fix it\r\n\r\nThis was working previously. Today ran a fresh install using \r\n\r\n`!pip install -q git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git`", "> Can you try restarting your runtime after installing the new version to see if that fixes it? CC @pacman100\r\n\r\n@muellerzr thanks a lot. I have restarted the kernel and tried repeatedly according to the operation, but the problem still exists.", "I am facing the same issue. Tried doing a fresh install still the issue persists.", "Hi all, \r\nI was able to rerun my workflow via:\r\n\r\n1. Deleting the current runtime\r\n2. Starting a new runtime\r\n3. 
Running using `pip install transformers`", "> 3\\. pip install transformers\r\n\r\nHi, @lfunderburk can you share the version of each library?thanks a lot.", "`transformers==4.29.2` and `tokenizers==0.13.3` on Python 3.10.11\r\n\r\nBelow is the rest of the dependencies\r\n\r\n```\r\nabsl-py==1.4.0\r\naccelerate==0.20.0.dev0\r\naiohttp==3.8.4\r\naiosignal==1.3.1\r\nalabaster==0.7.13\r\nalbumentations==1.2.1\r\naltair==4.2.2\r\nanyio==3.6.2\r\nappdirs==1.4.4\r\nargon2-cffi==21.3.0\r\nargon2-cffi-bindings==21.2.0\r\narray-record==0.2.0\r\narviz==0.15.1\r\nastropy==5.2.2\r\nastunparse==1.6.3\r\nasync-timeout==4.0.2\r\nattrs==23.1.0\r\naudioread==3.0.0\r\nautograd==1.5\r\nBabel==2.12.1\r\nbackcall==0.2.0\r\nbeautifulsoup4==4.11.2\r\nbitsandbytes==0.39.0\r\nbleach==6.0.0\r\nblis==0.7.9\r\nblosc2==2.0.0\r\nbokeh==2.4.3\r\nbranca==0.6.0\r\nbuild==0.10.0\r\nCacheControl==0.12.11\r\ncached-property==1.5.2\r\ncachetools==5.3.0\r\ncatalogue==2.0.8\r\ncertifi==2022.12.7\r\ncffi==1.15.1\r\nchardet==4.0.0\r\ncharset-normalizer==2.0.12\r\nchex==0.1.7\r\nclick==8.1.3\r\ncloudpickle==2.2.1\r\ncmake==3.25.2\r\ncmdstanpy==1.1.0\r\ncolorcet==3.0.1\r\ncolorlover==0.3.0\r\ncommunity==1.0.0b1\r\nconfection==0.0.4\r\ncons==0.4.5\r\ncontextlib2==0.6.0.post1\r\ncontourpy==1.0.7\r\nconvertdate==2.4.0\r\ncryptography==40.0.2\r\ncufflinks==0.17.3\r\ncupy-cuda11x==11.0.0\r\ncvxopt==1.3.0\r\ncvxpy==1.3.1\r\ncycler==0.11.0\r\ncymem==2.0.7\r\nCython==0.29.34\r\ndask==2022.12.1\r\ndatascience==0.17.6\r\ndatasets==2.12.0\r\ndb-dtypes==1.1.1\r\ndbus-python==1.2.16\r\ndebugpy==1.6.6\r\ndecorator==4.4.2\r\ndefusedxml==0.7.1\r\ndill==0.3.6\r\ndistributed==2022.12.1\r\ndlib==19.24.1\r\ndm-tree==0.1.8\r\ndocutils==0.16\r\ndopamine-rl==4.0.6\r\nduckdb==0.7.1\r\nearthengine-api==0.1.350\r\neasydict==1.10\r\necos==2.0.12\r\neditdistance==0.6.2\r\nen-core-web-sm==3.5.0\r\nentrypoints==0.4\r\nephem==4.1.4\r\net-xmlfile==1.1.0\r\netils==1.2.0\r\netuples==0.3.8\r\nexceptiongroup==1.1.1\r\nfastai==2.7.12\r\nfastcore==1.5.29\r\nfastdownload==0.0.7\r\nfastjsonschema==2.16.3\r\nfastprogress==1.0.3\r\nfastrlock==0.8.1\r\nfilelock==3.12.0\r\nfirebase-admin==5.3.0\r\nFlask==2.2.4\r\nflatbuffers==23.3.3\r\nflax==0.6.9\r\nfolium==0.14.0\r\nfonttools==4.39.3\r\nfrozendict==2.3.7\r\nfrozenlist==1.3.3\r\nfsspec==2023.4.0\r\nfuture==0.18.3\r\ngast==0.4.0\r\nGDAL==3.3.2\r\ngdown==4.6.6\r\ngensim==4.3.1\r\ngeographiclib==2.0\r\ngeopy==2.3.0\r\ngin-config==0.5.0\r\nglob2==0.7\r\ngoogle==2.0.3\r\ngoogle-api-core==2.11.0\r\ngoogle-api-python-client==2.84.0\r\ngoogle-auth==2.17.3\r\ngoogle-auth-httplib2==0.1.0\r\ngoogle-auth-oauthlib==1.0.0\r\ngoogle-cloud-bigquery==3.9.0\r\ngoogle-cloud-bigquery-storage==2.19.1\r\ngoogle-cloud-core==2.3.2\r\ngoogle-cloud-datastore==2.15.1\r\ngoogle-cloud-firestore==2.11.0\r\ngoogle-cloud-language==2.9.1\r\ngoogle-cloud-storage==2.8.0\r\ngoogle-cloud-translate==3.11.1\r\ngoogle-colab==1.0.0\r\ngoogle-crc32c==1.5.0\r\ngoogle-pasta==0.2.0\r\ngoogle-resumable-media==2.5.0\r\ngoogleapis-common-protos==1.59.0\r\ngoogledrivedownloader==0.4\r\ngraphviz==0.20.1\r\ngreenlet==2.0.2\r\ngrpcio==1.54.0\r\ngrpcio-status==1.48.2\r\ngspread==3.4.2\r\ngspread-dataframe==3.0.8\r\ngym==0.25.2\r\ngym-notices==0.0.8\r\nh5netcdf==1.1.0\r\nh5py==3.8.0\r\nholidays==0.25\r\nholoviews==1.15.4\r\nhtml5lib==1.1\r\nhttpimport==1.3.0\r\nhttplib2==0.21.0\r\nhuggingface-hub==0.15.1\r\nhumanize==4.6.0\r\nhyperopt==0.2.7\r\nidna==3.4\r\nimageio==2.25.1\r\nimageio-ffmpeg==0.4.8\r\nimagesize==1.4.1\r\nimbalanced-learn==0.10.1\r\nimgaug==0.4.0\r\nimportlib-res
ources==5.12.0\r\nimutils==0.5.4\r\ninflect==6.0.4\r\niniconfig==2.0.0\r\nintel-openmp==2023.1.0\r\nipykernel==5.5.6\r\nipython==7.34.0\r\nipython-genutils==0.2.0\r\nipython-sql==0.4.1\r\nipywidgets==7.7.1\r\nitsdangerous==2.1.2\r\njax==0.4.10\r\njaxlib==0.4.10+cuda11.cudnn86\r\njieba==0.42.1\r\nJinja2==3.1.2\r\njoblib==1.2.0\r\njsonpickle==3.0.1\r\njsonschema==4.3.3\r\njupyter-client==6.1.12\r\njupyter-console==6.1.0\r\njupyter_core==5.3.0\r\njupyter-server==1.24.0\r\njupyterlab-pygments==0.2.2\r\njupyterlab-widgets==3.0.7\r\nkaggle==1.5.13\r\nkeras==2.12.0\r\nkiwisolver==1.4.4\r\nkorean-lunar-calendar==0.3.1\r\nlangcodes==3.3.0\r\nlazy_loader==0.2\r\nlibclang==16.0.0\r\nlibrosa==0.10.0.post2\r\nlightgbm==3.3.5\r\nlit==16.0.5\r\nllvmlite==0.39.1\r\nlocket==1.0.0\r\nlogical-unification==0.4.5\r\nloralib==0.1.1\r\nLunarCalendar==0.0.9\r\nlxml==4.9.2\r\nMarkdown==3.4.3\r\nmarkdown-it-py==2.2.0\r\nMarkupSafe==2.1.2\r\nmatplotlib==3.7.1\r\nmatplotlib-inline==0.1.6\r\nmatplotlib-venn==0.11.9\r\nmdurl==0.1.2\r\nminiKanren==1.0.3\r\nmissingno==0.5.2\r\nmistune==0.8.4\r\nmizani==0.8.1\r\nmkl==2019.0\r\nml-dtypes==0.1.0\r\nmlxtend==0.14.0\r\nmore-itertools==9.1.0\r\nmoviepy==1.0.3\r\nmpmath==1.3.0\r\nmsgpack==1.0.5\r\nmultidict==6.0.4\r\nmultipledispatch==0.6.0\r\nmultiprocess==0.70.14\r\nmultitasking==0.0.11\r\nmurmurhash==1.0.9\r\nmusic21==8.1.0\r\nnatsort==8.3.1\r\nnbclient==0.7.4\r\nnbconvert==6.5.4\r\nnbformat==5.8.0\r\nnest-asyncio==1.5.6\r\nnetworkx==3.1\r\nnibabel==3.0.2\r\nnltk==3.8.1\r\nnotebook==6.4.8\r\nnumba==0.56.4\r\nnumexpr==2.8.4\r\nnumpy==1.22.4\r\noauth2client==4.1.3\r\noauthlib==3.2.2\r\nopencv-contrib-python==4.7.0.72\r\nopencv-python==4.7.0.72\r\nopencv-python-headless==4.7.0.72\r\nopenpyxl==3.0.10\r\nopt-einsum==3.3.0\r\noptax==0.1.5\r\norbax-checkpoint==0.2.1\r\nosqp==0.6.2.post8\r\npackaging==23.1\r\npalettable==3.3.3\r\npandas==1.5.3\r\npandas-datareader==0.10.0\r\npandas-gbq==0.17.9\r\npandocfilters==1.5.0\r\npanel==0.14.4\r\nparam==1.13.0\r\nparso==0.8.3\r\npartd==1.4.0\r\npathlib==1.0.1\r\npathy==0.10.1\r\npatsy==0.5.3\r\npeft==0.4.0.dev0\r\npexpect==4.8.0\r\npickleshare==0.7.5\r\nPillow==8.4.0\r\npip==23.1.2\r\npip-tools==6.13.0\r\nplatformdirs==3.3.0\r\nplotly==5.13.1\r\nplotnine==0.10.1\r\npluggy==1.0.0\r\npolars==0.17.3\r\npooch==1.6.0\r\nportpicker==1.3.9\r\nprefetch-generator==1.0.3\r\npreshed==3.0.8\r\nprettytable==0.7.2\r\nproglog==0.1.10\r\nprogressbar2==4.2.0\r\nprometheus-client==0.16.0\r\npromise==2.3\r\nprompt-toolkit==3.0.38\r\nprophet==1.1.3\r\nproto-plus==1.22.2\r\nprotobuf==3.20.3\r\npsutil==5.9.5\r\npsycopg2==2.9.6\r\nptyprocess==0.7.0\r\npy-cpuinfo==9.0.0\r\npy4j==0.10.9.7\r\npyarrow==9.0.0\r\npyasn1==0.5.0\r\npyasn1-modules==0.3.0\r\npycocotools==2.0.6\r\npycparser==2.21\r\npyct==0.5.0\r\npydantic==1.10.7\r\npydata-google-auth==1.7.0\r\npydot==1.4.2\r\npydot-ng==2.0.0\r\npydotplus==2.0.2\r\nPyDrive==1.3.1\r\npyerfa==2.0.0.3\r\npygame==2.3.0\r\nPygments==2.14.0\r\nPyGObject==3.36.0\r\npymc==5.1.2\r\nPyMeeus==0.5.12\r\npymystem3==0.2.0\r\nPyOpenGL==3.1.6\r\npyparsing==3.0.9\r\npyproject_hooks==1.0.0\r\npyrsistent==0.19.3\r\nPySocks==1.7.1\r\npytensor==2.10.1\r\npytest==7.2.2\r\npython-apt==0.0.0\r\npython-dateutil==2.8.2\r\npython-louvain==0.16\r\npython-slugify==8.0.1\r\npython-utils==3.5.2\r\npytz==2022.7.1\r\npytz-deprecation-shim==0.1.0.post0\r\npyviz-comms==2.2.1\r\nPyWavelets==1.4.1\r\nPyYAML==6.0\r\npyzmq==23.2.1\r\nqdldl==0.1.7\r\nqudida==0.0.4\r\nregex==2022.10.31\r\nrequests==2.27.1\r\nrequests-oauthlib==1.3.1\r\nrequests-unixsocket==0.2.0\r\n
requirements-parser==0.5.0\r\nresponses==0.18.0\r\nrich==13.3.4\r\nrpy2==3.5.5\r\nrsa==4.9\r\nscikit-image==0.19.3\r\nscikit-learn==1.2.2\r\nscipy==1.10.1\r\nscs==3.2.3\r\nseaborn==0.12.2\r\nSend2Trash==1.8.0\r\nsetuptools==67.7.2\r\nshapely==2.0.1\r\nsix==1.16.0\r\nsklearn-pandas==2.2.0\r\nsmart-open==6.3.0\r\nsniffio==1.3.0\r\nsnowballstemmer==2.2.0\r\nsortedcontainers==2.4.0\r\nsoundfile==0.12.1\r\nsoupsieve==2.4.1\r\nsoxr==0.3.5\r\nspacy==3.5.2\r\nspacy-legacy==3.0.12\r\nspacy-loggers==1.0.4\r\nSphinx==3.5.4\r\nsphinxcontrib-applehelp==1.0.4\r\nsphinxcontrib-devhelp==1.0.2\r\nsphinxcontrib-htmlhelp==2.0.1\r\nsphinxcontrib-jsmath==1.0.1\r\nsphinxcontrib-qthelp==1.0.3\r\nsphinxcontrib-serializinghtml==1.1.5\r\nSQLAlchemy==2.0.10\r\nsqlparse==0.4.4\r\nsrsly==2.4.6\r\nstatsmodels==0.13.5\r\nsympy==1.11.1\r\ntables==3.8.0\r\ntabulate==0.8.10\r\ntblib==1.7.0\r\ntenacity==8.2.2\r\ntensorboard==2.12.2\r\ntensorboard-data-server==0.7.0\r\ntensorboard-plugin-wit==1.8.1\r\ntensorflow==2.12.0\r\ntensorflow-datasets==4.9.2\r\ntensorflow-estimator==2.12.0\r\ntensorflow-gcs-config==2.12.0\r\ntensorflow-hub==0.13.0\r\ntensorflow-io-gcs-filesystem==0.32.0\r\ntensorflow-metadata==1.13.1\r\ntensorflow-probability==0.20.1\r\ntensorstore==0.1.36\r\ntermcolor==2.3.0\r\nterminado==0.17.1\r\ntext-unidecode==1.3\r\ntextblob==0.17.1\r\ntf-slim==1.1.0\r\nthinc==8.1.9\r\nthreadpoolctl==3.1.0\r\ntifffile==2023.4.12\r\ntinycss2==1.2.1\r\ntokenizers==0.13.3\r\ntoml==0.10.2\r\ntomli==2.0.1\r\ntoolz==0.12.0\r\ntorch==2.0.1+cu118\r\ntorchaudio==2.0.2+cu118\r\ntorchdata==0.6.1\r\ntorchsummary==1.5.1\r\ntorchtext==0.15.2\r\ntorchvision==0.15.2+cu118\r\ntornado==6.3.1\r\ntqdm==4.65.0\r\ntraitlets==5.7.1\r\ntransformers==4.29.2\r\ntriton==2.0.0\r\ntweepy==4.13.0\r\ntyper==0.7.0\r\ntypes-setuptools==67.8.0.0\r\ntyping_extensions==4.5.0\r\ntzdata==2023.3\r\ntzlocal==4.3\r\nuritemplate==4.1.1\r\nurllib3==1.26.15\r\nvega-datasets==0.9.0\r\nwasabi==1.1.1\r\nwcwidth==0.2.6\r\nwebcolors==1.13\r\nwebencodings==0.5.1\r\nwebsocket-client==1.5.1\r\nWerkzeug==2.3.0\r\nwheel==0.40.0\r\nwidgetsnbextension==3.6.4\r\nwordcloud==1.8.2.2\r\nwrapt==1.14.1\r\nxarray==2022.12.0\r\nxarray-einstats==0.5.1\r\nxgboost==1.7.5\r\nxlrd==2.0.1\r\nxxhash==3.2.0\r\nyarl==1.9.2\r\nyellowbrick==1.5\r\nyfinance==0.2.18\r\nzict==3.0.0\r\nzipp==3.15.0\r\n```", "Hello everyone, I found the cause to be `auto_find_batch_size=True`. In the meantime, please confirm disabling it and passing small `per_device_train_batch_size =4` works (I can confirm). I'm working on a PR to resolve this. \r\n\r\n![Screenshot 2023-06-07 at 12 37 13 PM](https://github.com/huggingface/transformers/assets/13534540/d0765b4c-77c8-4b38-bfb3-80fdfb09a9a1)\r\n\r\n", "> Hello everyone, I found the cause to be `auto_find_batch_size=True`. In the meantime, please confirm disabling it and passing small `per_device_train_batch_size =4` works (I can confirm). I'm working on a PR to resolve this.\r\n> \r\n> ![Screenshot 2023-06-07 at 12 37 13 PM](https://user-images.githubusercontent.com/13534540/243952303-d0765b4c-77c8-4b38-bfb3-80fdfb09a9a1.png)\r\n\r\nSeems it is working with those changes in parameters (20 min training so far... previously it cancelled at about 4). THANKS!", "@FedericoMontana this has also been fixed with the latest Accelerate release I believe, worst case you can use `pip install git+https://github.com/huggingface/accelerate` until we release the patch, and you can use `auto_find_batch_size=True`", "I am still facing this issue. 
\r\n```\r\n File \"/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py\", line 1843, in _inner_training_loop \r\n self.accelerator.clip_grad_norm_( \r\n File \"/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1913, in clip_grad_norm_ \r\n self.unscale_gradients() \r\n File \"/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1876, in unscale_gradients \r\n self.scaler.unscale_(opt) \r\n File \"/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py\", line 275, in unscale_ \r\n raise RuntimeError(\"unscale_() has already been called on this optimizer since the last update().\") \r\nRuntimeError: unscale_() has already been called on this optimizer since the last update().\r\n```\r\nThis issue doesnt exist in `transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`, but I cannot use that build due to `UnboundLocalError: local variable 'load_result' referenced before assignment` error.\r\nEnvironment:\r\n```\r\n- `transformers` version: 4.31.0.dev0\r\n- Platform: Linux-6.3.9-zen1-1-zen-x86_64-with-glibc2.37\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n```", "@kunaldeo, provide a minimal reproducible example post installing Transformers and Accelerate from Main branch. That would be helpful for us to deep dive.", "This is working now." ]
1,686
1,687
1,686
NONE
null
It follows this fine-tuning notebook: https://colab.research.google.com/#fileId=https%3A//huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb full stack: ``` Traceback (most recent call last): File "/home/llama/train_infer/finetune_falcon7b_oasst1_with_bnb_peft.py", line 204, in <module> trainer.train() File "/home/.conda/envs/3.9/lib/python3.9/site-packages/transformers/trainer.py", line 1638, in train return inner_training_loop( File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/utils/memory.py", line 132, in decorator return function(batch_size, *args, **kwargs) File "/home/.conda/envs/3.9/lib/python3.9/site-packages/transformers/trainer.py", line 1972, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/accelerator.py", line 1892, in clip_grad_norm_ self.unscale_gradients() File "/home/.conda/envs/3.9/lib/python3.9/site-packages/accelerate/accelerator.py", line 1855, in unscale_gradients self.scaler.unscale_(opt) File "/home/.conda/envs/3.9/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` Following https://github.com/huggingface/transformers/pull/23914, I had upgraded transformers to the latest commit. - `transformers` version: 4.30.0.dev0 - `Platform`: Linux-5.15.0-73-generic-x86_64-with-glibc2.31 - `Python version`: 3.9.16 - `Safetensors` version: 0.3.1 - `PyTorch` version (GPU): 2.0.1+cu117 (True) - `peft` version: 0.4.0.dev0 - `accelerate` version: 0.20.0.dev0 - `bitsandbytes` version: 0.39.0 How can this be solved?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24050/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
https://api.github.com/repos/huggingface/transformers/issues/24050/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24049
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24049/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24049/comments
https://api.github.com/repos/huggingface/transformers/issues/24049/events
https://github.com/huggingface/transformers/pull/24049
1,744,069,366
PR_kwDOCUB6oc5SURTU
24,049
Prevent ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
Closes #24048 Hello! ## Pull Request overview * Prevent ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny ## Details Please see #24048 for details. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Tests would be quite flaky for this. ## Who can review? @sgugger, @younesbelkada - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24049/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24049", "html_url": "https://github.com/huggingface/transformers/pull/24049", "diff_url": "https://github.com/huggingface/transformers/pull/24049.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24049.patch", "merged_at": 1686065465000 }
https://api.github.com/repos/huggingface/transformers/issues/24048
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24048/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24048/comments
https://api.github.com/repos/huggingface/transformers/issues/24048/events
https://github.com/huggingface/transformers/issues/24048
1,744,065,091
I_kwDOCUB6oc5n9FJD
24,048
ZeroDivisionError on `trainer.evaluate` if model and dataset are tiny
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,686
1,686
1,686
MEMBER
null
### System Info - `transformers` version: 4.29.2 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger cc: @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Consider the following snippet: ```python from torch import nn from transformers import Trainer from datasets import Dataset model = nn.Identity() eval_dataset = Dataset.from_dict({"tokens": [1]}) trainer = Trainer( model, eval_dataset=eval_dataset, ) metrics = trainer.evaluate() print(metrics) ``` (Sometimes) results in ``` Traceback (most recent call last): File "[sic]\demo.py", line 13, in <module> metrics = trainer.evaluate() File "[sic]\transformers\trainer.py", line 3043, in evaluate speed_metrics( File "[sic]\transformers\trainer_utils.py", line 354, in speed_metrics samples_per_second = num_samples / runtime ZeroDivisionError: float division by zero ``` This is rarely an issue - only when models and datasets are tiny. The reason I am invested in resolving this is testing purposes. See for example this [Action](https://github.com/lvwerra/trl/actions/runs/5179991753/jobs/9351434458) on TRL. To keep the testing efficient, the TRL maintainers chose a small model and dataset - which sometimes caused this flaky test. ### Expected behavior I would expect any of these: ``` 1. {'eval_runtime': 0.0, 'eval_samples_per_second': 0.0, 'eval_steps_per_second': 0.0} 2. {'eval_runtime': 0.0, 'eval_samples_per_second': None, 'eval_steps_per_second': None} 3. {'eval_runtime': 0.0, 'eval_samples_per_second': torch.inf, 'eval_steps_per_second': torch.inf} 4. {'eval_runtime': 0.0} ``` Note that these cases would essentially never occur other than during tests. With other words, I think all are fine as long as there's no exception. However, I prefer option 4 personally, but I am open to suggestions. For simplicity, I'll push a simple PR to implement 4. - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24048/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24047
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24047/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24047/comments
https://api.github.com/repos/huggingface/transformers/issues/24047/events
https://github.com/huggingface/transformers/issues/24047
1,743,951,040
I_kwDOCUB6oc5n8pTA
24,047
AttributeError: 'NoneType' object has no attribute 'flush'
{ "login": "akesh1235", "id": 125154243, "node_id": "U_kgDOB3Wzww", "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akesh1235", "html_url": "https://github.com/akesh1235", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "repos_url": "https://api.github.com/users/akesh1235/repos", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @LysandreJik maybe", "Hello, I have encountered the same problem as you, did you solve it?", "Hi! I also encountered this error. I'm building a package with `pyinstaller` which works on MacOS with M2 amd64. Running inside of a Windows VM running Windows 11, this fails with the same error. \r\n\r\n```\r\nFile \"<frozen importlib._bootstrap>\", line 1176, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1147, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 690, in _load_unlocked\r\n File \"PyInstaller\\loader\\pyimod02_importers.py\", line 385, in exec_module\r\n File \"transformers\\utils\\import_utils.py\", line 37, in <module>\r\n logger = logging.get_logger(__name__) # pylint: disable=invalid-name\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"transformers\\utils\\logging.py\", line 124, in get_logger\r\n _configure_library_root_logger()\r\n File \"transformers\\utils\\logging.py\", line 88, in _configure_library_root_logger\r\n _default_handler.flush = sys.stderr.flush\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'flush'\r\n```", "> 你好!我也遇到了这个错误。我正在构建一个在 MacOS 上使用 M2 amd64 的软件包。在运行 Windows 11 的 Windows VM 中运行,此操作失败并出现相同的错误。`pyinstaller`\r\n> \r\n> ```\r\n> File \"<frozen importlib._bootstrap>\", line 1176, in _find_and_load\r\n> File \"<frozen importlib._bootstrap>\", line 1147, in _find_and_load_unlocked\r\n> File \"<frozen importlib._bootstrap>\", line 690, in _load_unlocked\r\n> File \"PyInstaller\\loader\\pyimod02_importers.py\", line 385, in exec_module\r\n> File \"transformers\\utils\\import_utils.py\", line 37, in <module>\r\n> logger = logging.get_logger(__name__) # pylint: disable=invalid-name\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"transformers\\utils\\logging.py\", line 124, in get_logger\r\n> _configure_library_root_logger()\r\n> File \"transformers\\utils\\logging.py\", line 88, in _configure_library_root_logger\r\n> _default_handler.flush = sys.stderr.flush\r\n> ^^^^^^^^^^^^^^^^\r\n> AttributeError: 'NoneType' object has no attribute 'flush'\r\n> ```\r\nYou can add this code before your transformers import\r\nif sys.stdout is None:\r\n sys.stdout = open(os.devnull, \"w\")\r\nif sys.stderr is None:\r\n sys.stderr = open(os.devnull, \"w\")", "if by chance you have this error and you have a Virtualenv, remember to generate the pyinstaller exe from inside the virtual environment, I solved it like this" ]
1,686
1,706
1,690
NONE
null
### System Info **System info** - `transformers` version: 4.29.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: <fill in> **Issue** **After creating virtual environment and installing requirements.txt, carried out following steps to convert **`.py`** file into **`.exe`** ** using pyinstaller library **step 1 : `pip install pyinstaller`** **step 2 : `pyinstaller --name GrammarCorrector --onefile --windowed new_gram1_Tkinter.py --hidden-import cymem.cymem`** **Then i got this AttributeError:** Traceback (most recent call last): File "new_gram1_Tkinter.py", line 271, in <module> File "new_gram1_Tkinter.py", line 142, in __init__ File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\__init__.py", line 26, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\dependency_versions_check.py", line 17, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\__init__.py", line 30, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\generic.py", line 29, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\utils\import_utils.py", line 36, in <module> File "transformers\utils\logging.py", line 124, in get_logger File "transformers\utils\logging.py", line 88, in _configure_library_root_logger **AttributeError: 'NoneType' object has no attribute 'flush'** I raised issue in `pyinstaller `repository, and i got answer as followed below from @bwoodsend who is a maintainer **You should be able to get the same error without `PyInstaller `if you run your source code using `pythonw `instead of just `python`.** **Raise a bug to** **`transformers`** if they have their own **windowed-mode-naive logger**. https://github.com/orgs/pyinstaller/discussions/7689#discussion-5270292 ### Who can help? 
@sgugger @ArthurZucker @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I want to convert my **`.py`** file into an **`.exe`** file, and when I do that using `pyinstaller` it raises an AttributeError. When I asked the `pyinstaller` developers on their repository, they suggested I raise a bug report on `transformers` asking **whether it has its own windowed-mode-naive logger.** ### Expected behavior I want an **`.exe`** file from the **`.py`** file.
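Based on the suggestion in the comments above, a minimal workaround sketch — the assumption is that a windowed (`pythonw` / `--windowed`) build leaves `sys.stdout` / `sys.stderr` as `None`, so they need a placeholder before `transformers` is imported:

```python
import os
import sys

# In a windowed build there is no console, so the standard streams can be None;
# give the library logger something it can flush.
if sys.stdout is None:
    sys.stdout = open(os.devnull, "w")
if sys.stderr is None:
    sys.stderr = open(os.devnull, "w")

import transformers  # noqa: E402  (import only after the streams are patched)
```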
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24047/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24047/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24046
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24046/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24046/comments
https://api.github.com/repos/huggingface/transformers/issues/24046/events
https://github.com/huggingface/transformers/pull/24046
1,743,937,635
PR_kwDOCUB6oc5ST0tB
24,046
Reduce memory usage in TF building
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let me run it on CI and see.", "Sorry for the delay - there's an issue with Funnel that wasn't reproducing on my machine. I eventually figured out that the problem is the classic TF one: indices for `tf.gather` are not validated on GPU but are validated on CPU, and so the bug only becomes apparent on CPU. Will fix in just a sec!", "I also tried to run the change in this PR, and got \r\n\r\n\r\n```\r\nFAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf - tensorflow.python.framework.errors_impl.ResourceExhaustedError: {{function_node __wrapped__Transpose_device_/job:localhost/replica:0/task:0/device:GPU:0}} OOM when allocating tensor with shape[768,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Transpose]\r\nFAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf_table_qa - tensorflow.python.framework.errors_impl.ResourceExhaustedError: Exception encountered when calling layer 'tapas' (type TFTapasMainLayer).\r\n\r\n{{function_node __wrapped__StatelessTruncatedNormalV2_device_/job:localhost/replica:0/task:0/device:GPU:0}} OOM when allocating tensor with shape[30522,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:StatelessTruncatedNormalV2]\r\n\r\nCall arguments received by layer 'tapas' (type TFTapasMainLayer):\r\n • input_ids=tf.Tensor(shape=(2, 2), dtype=int32)\r\n • attention_mask=tf.Tensor(shape=(2, 2), dtype=float32)\r\n • token_type_ids=tf.Tensor(shape=(2, 2, 7), dtype=int32)\r\n • position_ids=None\r\n • head_mask=None\r\n • inputs_embeds=None\r\n • output_attentions=False\r\n • output_hidden_states=False\r\n • return_dict=True\r\n • training=False\r\n```\r\nand 5 other ones (probably due to the above one).\r\n\r\n@Rocketknight1 I think we will have to reiterate (change->run->change->run) a bit more before we merge.", "Yep, working on it now!", "The `tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf` run against a list of models, so it's kind normal it fails with other models even some fixes are done previously.\r\n\r\nI am OK to trigger the run (a subset) whenever you feel it's time. Otherwise I can show you a modified workflow file for you to trigger manually.", "@ydshieh the issues with Funnel have been resolved, so this should be ready for a CI run now!", "You can watch it live [here](https://github.com/huggingface/transformers/actions/runs/5191137996/jobs/9358557442). It will take 20-30 min to finish.", "Looks like they're still failing even with very small dummies. I'll investigate those models and try to figure out why - the new dummies should be smaller than the old ones! ", "Maybe this is a sign that we should transition the dummies to symbolic tensors for those models, even if it's probably too slow for our tests to do it across the whole codebase." ]
1,686
1,686
1,686
MEMBER
null
This PR reduces the default shape of dummy inputs from (3, 3) to (2, 2). This slightly reduces the memory usage when building TF models, which should hopefully fix some of our pipeline tests. We could replace the dummy inputs with symbolic tensors, which would mean we could build TF models with 0 memory usage, but this would make TF model building slower (~4X) because it would implicitly compile the model when building, which is probably not an acceptable tradeoff. cc @ydshieh and @amyeroberts as core maintainer
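As a rough illustration of the tradeoff described above (the names below are illustrative, not the actual `transformers` internals):

```python
import tensorflow as tf

# Eager-style dummies: tiny real tensors are allocated, so building uses a
# little memory but stays fast (this PR shrinks them from (3, 3) to (2, 2)).
eager_dummies = {"input_ids": tf.ones((2, 2), dtype=tf.int32)}

# Symbolic alternative: no data is materialised, but calling a model on
# tf.keras.Input traces/compiles it, which is what makes building ~4x slower.
symbolic_dummies = {"input_ids": tf.keras.Input(shape=(2,), dtype=tf.int32, name="input_ids")}
```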
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24046/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24046", "html_url": "https://github.com/huggingface/transformers/pull/24046", "diff_url": "https://github.com/huggingface/transformers/pull/24046.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24046.patch", "merged_at": 1686072597000 }
https://api.github.com/repos/huggingface/transformers/issues/24045
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24045/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24045/comments
https://api.github.com/repos/huggingface/transformers/issues/24045/events
https://github.com/huggingface/transformers/pull/24045
1,743,933,418
PR_kwDOCUB6oc5STzyD
24,045
Fix a tiny typo in `WhisperForConditionalGeneration::generate` docstring
{ "login": "sadra-barikbin", "id": 22097587, "node_id": "MDQ6VXNlcjIyMDk3NTg3", "avatar_url": "https://avatars.githubusercontent.com/u/22097587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadra-barikbin", "html_url": "https://github.com/sadra-barikbin", "followers_url": "https://api.github.com/users/sadra-barikbin/followers", "following_url": "https://api.github.com/users/sadra-barikbin/following{/other_user}", "gists_url": "https://api.github.com/users/sadra-barikbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadra-barikbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadra-barikbin/subscriptions", "organizations_url": "https://api.github.com/users/sadra-barikbin/orgs", "repos_url": "https://api.github.com/users/sadra-barikbin/repos", "events_url": "https://api.github.com/users/sadra-barikbin/events{/privacy}", "received_events_url": "https://api.github.com/users/sadra-barikbin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sadra-barikbin Thanks for fixing this! \r\n\r\nIt seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "CircleCI has banned my account. Feel free to make another PR.", "@sadra-barikbin OK - as the changes are small and don't affect code logic, the ci checks aren't critical, so I'm going to merge. " ]
1,686
1,686
1,686
CONTRIBUTOR
null
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24045/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24045", "html_url": "https://github.com/huggingface/transformers/pull/24045", "diff_url": "https://github.com/huggingface/transformers/pull/24045.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24045.patch", "merged_at": 1686228897000 }
https://api.github.com/repos/huggingface/transformers/issues/24044
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24044/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24044/comments
https://api.github.com/repos/huggingface/transformers/issues/24044/events
https://github.com/huggingface/transformers/issues/24044
1,743,726,217
I_kwDOCUB6oc5n7yaJ
24,044
Add keypoint-detection task
{ "login": "vincentmin", "id": 39170736, "node_id": "MDQ6VXNlcjM5MTcwNzM2", "avatar_url": "https://avatars.githubusercontent.com/u/39170736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vincentmin", "html_url": "https://github.com/vincentmin", "followers_url": "https://api.github.com/users/vincentmin/followers", "following_url": "https://api.github.com/users/vincentmin/following{/other_user}", "gists_url": "https://api.github.com/users/vincentmin/gists{/gist_id}", "starred_url": "https://api.github.com/users/vincentmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vincentmin/subscriptions", "organizations_url": "https://api.github.com/users/vincentmin/orgs", "repos_url": "https://api.github.com/users/vincentmin/repos", "events_url": "https://api.github.com/users/vincentmin/events{/privacy}", "received_events_url": "https://api.github.com/users/vincentmin/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,686
1,688
null
NONE
null
### Feature request Add support for keypoint detection. This includes a task, pipeline, dataset label and training pipeline. The task is to take an image and predict the x and y locations of a set of keypoints. Which keypoints are predicted should depend on the model trained for this task. The training pipeline for keypoint detection should allow to swap components. For example, one should be able to choose the backbone to be any suitable vision transformer model that is available on the huggingface hub. ### Motivation Keypoint detection is a use case that is prevalent in computer vision. The computer vision subset of the huggingface ecosystem would benefit from adding the popular keypoint detection task to the existing set of tasks. At the time of writing, existing repositories for keypoint detection often focus on a single particular model, e.g.: - yolov7: https://github.com/RizwanMunawar/yolov7-pose-estimation - yolov8: https://docs.ultralytics.com/tasks/pose/ - vitpose: https://github.com/ViTAE-Transformer/ViTPose The computer vision community could benefit greatly from a high quality community oriented open source hub for keypoint detection. ### Your contribution I am happy to be part of the discussion, but probably can do little in terms of PR's at this point in time.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24044/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24044/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24043
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24043/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24043/comments
https://api.github.com/repos/huggingface/transformers/issues/24043/events
https://github.com/huggingface/transformers/pull/24043
1,743,560,052
PR_kwDOCUB6oc5SSiCQ
24,043
[`bnb`] Fix bnb skip modules
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/24037 https://github.com/huggingface/transformers/pull/23479 removed by mistake the logic introduced in https://github.com/huggingface/transformers/pull/21579 to deal with modules that should not be converted. The PR also adds a nice test to make sure this will never happen again.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24043/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24043", "html_url": "https://github.com/huggingface/transformers/pull/24043", "diff_url": "https://github.com/huggingface/transformers/pull/24043.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24043.patch", "merged_at": 1686144467000 }
https://api.github.com/repos/huggingface/transformers/issues/24042
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24042/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24042/comments
https://api.github.com/repos/huggingface/transformers/issues/24042/events
https://github.com/huggingface/transformers/pull/24042
1,743,427,448
PR_kwDOCUB6oc5SSFBX
24,042
[Llama] Update tokenization code to ensure parsing of the special tokens [core]
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ok, narrowed it down to this line: \r\n```python \r\n # Check all our special tokens are registered as \"no split\" token (we don't cut them) and are in the vocab\r\n added_tokens = tokenizer.sanitize_special_tokens()\r\n```\r\nWhen converting the model from a slow one, the tokenizer correctly processes the inputs up until this point. Meaning that before, the special tokens where already registered as special tokens, but adding them once more most probably breaks the internal regex. Still checking but should be this. ", "_The documentation is not available anymore as the PR was closed or merged._", "After debugging with @Narsil it seems that the special tokens have to be not normalised, otherwise the normalizer prepends a space when adding it, which is why the token is not recognized. I suspect that there is another bug, as I tried with special tokens set to normalized = True (when calling `from_slow=True`+commenting `self._sanitize_special_tokens`) but the current should fix the conversion. \r\n\r\nA big discrepancy is that the default `AddedTokens` imported from `tokenizers` will set `normalized` to `!special`, so if you add tokens as special tokens, `normalized` will be False. But in `transformers` this is not the case, which explains why the call to sanitize is a source of problem.", "We have to update the online models to change the `tokenizer.json`, (people might be confused because the `normalized` param is also in the slow files but always ignored) " ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Addresses the issues with the fast tokenizer of Llama. Namely: - nit: making it return token type ids. - the added tokens are not correctly encoded. There seems to be an issue with the conversion: before the Python layer, just loading the tokenizer_config.json file with the Rust backend still produced: `tokenizer.encode("this is not<s>").tokens`, ` ['<s>', '▁This', '▁is', '▁not', '</', 's', '>']`
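A small illustration of the `normalized` behaviour discussed in the comments above — a sketch only, with `huggyllama/llama-7b` used as a stand-in checkpoint; the idea is that registering the special token with `normalized=False` keeps the normalizer from prepending a space, so the marker stays a single token:

```python
from tokenizers import AddedToken
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True)

# Registered as non-normalized, "</s>" should be kept whole instead of being
# split into "</", "s", ">" as shown in the PR description.
tok.add_special_tokens({"eos_token": AddedToken("</s>", normalized=False)})
print(tok.tokenize("this is not</s>"))
```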
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24042/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24042/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24042", "html_url": "https://github.com/huggingface/transformers/pull/24042", "diff_url": "https://github.com/huggingface/transformers/pull/24042.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24042.patch", "merged_at": 1686296179000 }
https://api.github.com/repos/huggingface/transformers/issues/24041
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24041/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24041/comments
https://api.github.com/repos/huggingface/transformers/issues/24041/events
https://github.com/huggingface/transformers/issues/24041
1,743,385,966
I_kwDOCUB6oc5n6fVu
24,041
Fix bug in using TPU
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you please give us a reproducer? Also cc @muellerzr and @pacman100 ", "As an idea workflow, this representation may be limited due to the inability to showcase the complete setting. Consequently, there could potentially be bugs in the way it is set up. The following points outline the reproducers:\r\n\r\n\r\n1. Download file [run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py) and [xla_spawn.py](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/xla_spawn.py)\r\n2. Adding script\r\n```\r\necho 'python xla_spawn.py \\\r\n --num_cores 8 \\\r\n run_speech_recognition_seq2seq_streaming.py \\\r\n --tpu_num_cores 8 \\\r\n\t--model_name_or_path=\"openai/whisper-tiny\" \\\r\n\t--dataset_name=\"mozilla-foundation/common_voice_11_0\" \\\r\n\t--dataset_config_name=\"vi\" \\\r\n\t--language=\"vi\" \\\r\n\t--train_split_name=\"train+validation\" \\\r\n\t--eval_split_name=\"test\" \\\r\n\t--model_index_name=\"Whisper Small Spanish\" \\\r\n\t--max_steps=\"5000\" \\\r\n\t--output_dir=\"./\" \\\r\n\t--per_device_train_batch_size=\"64\" \\\r\n\t--per_device_eval_batch_size=\"32\" \\\r\n\t--logging_steps=\"25\" \\\r\n\t--learning_rate=\"1e-5\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--eval_steps=\"1000\" \\\r\n\t--save_strategy=\"steps\" \\\r\n\t--save_steps=\"1000\" \\\r\n\t--generation_max_length=\"225\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--max_duration_in_seconds=\"30\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--freeze_feature_encoder=\"False\" \\\r\n\t--report_to=\"tensorboard\" \\\r\n\t--metric_for_best_model=\"loss\" \\\r\n\t--greater_is_better=\"False\" \\\r\n\t--load_best_model_at_end False \\\r\n\t--gradient_checkpointing \\\r\n\t--bf16 \\\r\n\t--overwrite_output_dir \\\r\n\t--do_train \\\r\n\t--do_eval \\\r\n\t--predict_with_generate False \\\r\n\t--do_normalize_eval \\\r\n\t--streaming \\\r\n\t--use_auth_token \\\r\n\t--push_to_hub \\\r\n --bf16_full_eval True' >> run.sh\r\n```\r\n3. run command line `bash run.sh`\r\n\r\n--- \r\nAfter the execution reaches a certain point, a bug occurs when calling `torch.cuda` on the TPU. To prevent this issue, one possible solution is to include the argument `--half_precision_backend \"cpu_amp\"` in the script. \r\n\r\nTo implement this fix, I suggest modifying the trainer.py file by adding a condition that checks if the code is running on a TPU (as I mentioned before). If it is, the half_precision_backend should be set to \"cpu_amp\".\r\n\r\nI would like to cc @muellerzr and @pacman100 ", "The flags `--bf16` and `--bf16_full_eval` are not supported on TPU. I'm not sure using the CPU autocast is a good idea since it will trigger copy of the data to the CPU.", "During my experiment, I observed that the removal of bf16 led to satisfactory results. However, the training speed was slower compared to using bf16 in transformers==4.27.0. Could you please suggest some methods to enhance the training speed using TPU?", "There is some information in the accelerate docs: https://huggingface.co/docs/accelerate/concept_guides/training_tpu\r\n\r\nProbably using JAX would be faster here on TPU. The script at #21764 works for non-streaming mode fine-tuning (the PR just needs tests + docs before merge), so you can use this already if you want", "thank you so much for your information <3" ]
1,686
1,686
1,686
CONTRIBUTOR
null
### System Info transformers==4.28 ### Who can help? @sgugger and @sanchit-gandhi ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` if (args.fp16 or args.bf16) and self.sharded_ddp is not None: if args.half_precision_backend == "auto": if args.device == torch.device("cpu"): if args.fp16: raise ValueError("Tried to use `fp16` but it is not supported on cpu") elif _is_native_cpu_amp_available: args.half_precision_backend = "cpu_amp" else: raise ValueError("Tried to use cpu amp but native cpu amp is not available") else: args.half_precision_backend = "cuda_amp" logger.info(f"Using {args.half_precision_backend} half precision backend") ``` In trainer.py, lines 615 to 627, I see that with this logic, when using a TPU, half_precision_backend is automatically set to "cuda_amp", which does not work on TPU because functions like torch.cuda cannot be called there (I am training Whisper on a TPU). So my suggestion is to add a condition: if is_torch_tpu_available(), then set args.half_precision_backend = "cpu_amp". The new code: ``` if (args.fp16 or args.bf16) and self.sharded_ddp is not None: if args.half_precision_backend == "auto": if args.device == torch.device("cpu"): if args.fp16: raise ValueError("Tried to use `fp16` but it is not supported on cpu") elif _is_native_cpu_amp_available: args.half_precision_backend = "cpu_amp" else: raise ValueError("Tried to use cpu amp but native cpu amp is not available") elif is_torch_tpu_available() : args.half_precision_backend = "cpu_amp" else: args.half_precision_backend = "cuda_amp" logger.info(f"Using {args.half_precision_backend} half precision backend") ``` However, I am not sure that my change is suitable, so I am posting this to find a better solution. Could you review whether this is a good solution, so that I can make a pull request to contribute? I would like to ask @sgugger and @sanchit-gandhi to review it. ### Expected behavior There is no bug when training using TPU
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24041/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24041/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24040
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24040/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24040/comments
https://api.github.com/repos/huggingface/transformers/issues/24040/events
https://github.com/huggingface/transformers/issues/24040
1,743,244,340
I_kwDOCUB6oc5n58w0
24,040
There is bug in trainer of Llama:indices should be either on cpu or on the same device as the indexed tensor (cpu)
{ "login": "2018211801", "id": 53637341, "node_id": "MDQ6VXNlcjUzNjM3MzQx", "avatar_url": "https://avatars.githubusercontent.com/u/53637341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2018211801", "html_url": "https://github.com/2018211801", "followers_url": "https://api.github.com/users/2018211801/followers", "following_url": "https://api.github.com/users/2018211801/following{/other_user}", "gists_url": "https://api.github.com/users/2018211801/gists{/gist_id}", "starred_url": "https://api.github.com/users/2018211801/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2018211801/subscriptions", "organizations_url": "https://api.github.com/users/2018211801/orgs", "repos_url": "https://api.github.com/users/2018211801/repos", "events_url": "https://api.github.com/users/2018211801/events{/privacy}", "received_events_url": "https://api.github.com/users/2018211801/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, what is the launch command?\r\nWhat version of accelerate is being used?\r\nAlso see this https://github.com/microsoft/DeepSpeed/issues/3678", "hi~ thanks for your help!\r\nCUDA_VISIBLE_DEVICES=\"1,2,4,5\" torchrun --nnodes=1 --nproc_per_node=4 start_ds_finetune.py --deepspeed deepspeed_config.json --learning_rate=2e-5 --per_device_train_batch_size=4 --gradient_accumulation_steps=1\r\n\r\naccelerate == 0.19.0\r\n\r\n> Hello, what is the launch command? What version of accelerate is being used? Also see this [microsoft/DeepSpeed#3678](https://github.com/microsoft/DeepSpeed/issues/3678)\r\n\r\n", "Can you update the accelerate from the main branch and use the PR of DeepSpeed LinkedIn the above DeepSpeed issue", "Sorry I'm a beginner, do you mean this? \r\npip install --upgrade accelerate \r\ngit clone -b olruwase/ds_3678 https://github.com/microsoft/DeepSpeed.git\r\ncd DeepSpeed\r\nDS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e .", "My problem is solved!!!Thanks!!!How excellent you are!!\r\n\r\n> Can you update the accelerate from the main branch and use the PR of DeepSpeed LinkedIn the above DeepSpeed issue\r\n\r\n", "could you make a brief about how to solve this bug? I also meet this. Should i change the version of deepspeed?", "@2018211801 ", "@2018211801 \r\n", "> My problem is solved!!!Thanks!!!How excellent you are!!\r\n> \r\n> > Can you update the accelerate from the main branch and use the PR of DeepSpeed LinkedIn the above DeepSpeed issue\r\n\r\nWhat is your solution to this problem? Could you please share it?", "update deepspeed to latest version solve it " ]
1,686
1,701
1,686
NONE
null
### System Info Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.27 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @sgugger @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from dataclasses import dataclass, field from typing import Optional, Tuple import transformers from transformers import Trainer from transformers.models.llama import LlamaForCausalLM # noqa from transformers import ( AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModel ) from nl2sql_dataset import Nl2SqlJsonlDataset import env import time @dataclass class ModelArguments: pretrain_path: str = field( default=f"{env.MODEL_ROOT.joinpath('llama-7b-hf')}" # llama-13b-hf ) @dataclass class DataArguments: train_file: str = field( default=f"{env.INPUT_ROOT.joinpath('trainset/config.json')}", metadata={"help": "A josnl file containing the training corpus"}, ) validation_file: str = field( default=f"{env.INPUT_ROOT.joinpath('devset/config.json')}", metadata={"help": "A jsonl file containing the validation corpus"}, ) max_seq_length: int = field( default=512, metadata={"help": "Max sequence length for training"} ) pad_to_max_length: bool = field(default=False) @dataclass class TrainingArguments(transformers.TrainingArguments): cache_dir: Optional[str] = field(default=None) optim: str = field(default="adamw_torch") model_max_length: int = field( default=512, metadata={ "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)." 
}, ) num_train_epochs: int = 5 evaluation_strategy: str = field(default="epoch") save_strategy: str = "epoch" fp16: bool = True save_total_limit: int = 5 load_best_model_at_end: bool = False warmup_steps: int = 0 logging_steps: int = 1 gradient_checkpointing: bool = True ddp_timeout: int = 3600 output_dir: str = field( default=f"{env.OUTPUT_ROOT.joinpath(time.strftime('%Y年%m月%d日%H时%M分%S秒'))}", ) def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str): """Collects the state dict and dump to disk.""" state_dict = trainer.model.state_dict() if trainer.args.should_save: cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()} del state_dict trainer._save(output_dir, state_dict=cpu_state_dict) # noqa def parse_args() -> Tuple[ModelArguments, DataArguments, TrainingArguments]: parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments) ) return parser.parse_args_into_dataclasses() def train(): model_args, data_args, training_args = parse_args() if "chatglm" in model_args.pretrain_path: print(model_args.pretrain_path) model = AutoModel.from_pretrained(model_args.pretrain_path, trust_remote_code=True, empty_init=False) else: model = AutoModelForCausalLM.from_pretrained(model_args.pretrain_path) print(model_args, data_args, training_args) dataset = Nl2SqlJsonlDataset( pretrain_path=model_args.pretrain_path, train_file_path=data_args.train_file, validation_file_path=data_args.validation_file, max_seg_length=data_args.max_seq_length, pad_to_max_length=data_args.pad_to_max_length, ) dataset.setup() # Tell Trainer not to attempt DataParallel model.is_parallelizable = True model.model_parallel = True trainer = Trainer( model=model, args=training_args, train_dataset=dataset.train_dataset, eval_dataset=dataset.val_dataset, data_collator=dataset.collate_fn, ) model.config.use_cache = False trainer.train() trainer.save_state() safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir) if __name__ == "__main__": train() ``` **error:** To simplify the output information, I ran only on one card Setting ds_accelerator to cuda (auto detect) [2023-06-06 12:16:17,633] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-06-06 12:16:17,633] [INFO] [comm.py:594:init_distributed] cdb=None [2023-06-06 12:16:17,633] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2023-06-06 12:17:21,017] [INFO] [partition_parameters.py:454:__exit__] finished initializing model with 6.74B parameters Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████| 3/3 [00:41<00:00, 13.86s/it]ModelArguments(pretrain_path='/mnt/data/wxc/workspace/pretrained_models/2/llama-7b') DataArguments(train_file='/mnt/data/wxc/workspace/release/data/trainset/config.json', validation_file='/mnt/data/wxc/workspace/release/data/devset/config.json', max_seq_length=512, pad_to_max_length=False) TrainingArguments( Parameter Offload: Total persistent parameters: 266240 in 65 params 0%| | 0/87850 [00:00<?, ?it/s]Traceback (most recent call last): File "/mnt/data/wxc/workspace/release/start_ds_finetune.py", line 118, in train() File "/mnt/data/wxc/workspace/release/start_ds_finetune.py", line 112, in train trainer.train() File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 1661, in train return inner_training_loop( File 
"/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 1946, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 2756, in training_step loss = self.compute_loss(model, inputs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/trainer.py", line 2781, in compute_loss outputs = model(**inputs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn ret_val = func(*args, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1733, in forward loss = self.module(*inputs, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 688, in forward outputs = self.model( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 570, in forward layer_outputs = torch.utils.checkpoint.checkpoint( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint return CheckpointFunction.apply(function, preserve, *args) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 107, in forward outputs = run_function(*args) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 566, in custom_forward return module(*inputs, output_attentions, None) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 292, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 202, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/mnt/data/wxc/workspace/Llama-X/src/transformers/src/transformers/models/llama/modeling_llama.py", line 134, in apply_rotary_pos_emb cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 12104) of binary: /home/wxc/miniconda3/envs/llamax/bin/python3.1 Traceback (most recent call last): File "/home/wxc/miniconda3/envs/llamax/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper 
return f(*args, **kwargs) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main run(args) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/wxc/miniconda3/envs/llamax/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ start_ds_finetune.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-06-06_12:20:18 host : omnisky rank : 0 (local_rank: 0) exitcode : 1 (pid: 12104) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ By addition,my deepspeed is the latest version,installed by git and compile. ### Expected behavior This is a finetune with nl2sql dataset. And my data format is {"source": "dusql_中国历史名城", "table": [{"table_name": "城市", "header": ["词条id", "名称", "所属省份", "常住人口", "城区面积", "建城年数"], "rows": []}, {"table_name": "都城", "header": ["朝代", "古称", "城市id", "建都起始时间", "建都结束时间", "建都年数"], "rows": []}], "sql": "select 名称 , 所属省份 from 城市 where 词条id not in ( select 城市id from 都城 )", "question": "哪些城市没有做过都城,给出这些城市名和其省份。"}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24040/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24039
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24039/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24039/comments
https://api.github.com/repos/huggingface/transformers/issues/24039/events
https://github.com/huggingface/transformers/pull/24039
1,743,071,251
PR_kwDOCUB6oc5SQ37z
24,039
Add support for non-rust implemented tokenization for `__getitem__` method.
{ "login": "jacklanda", "id": 54089835, "node_id": "MDQ6VXNlcjU0MDg5ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/54089835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacklanda", "html_url": "https://github.com/jacklanda", "followers_url": "https://api.github.com/users/jacklanda/followers", "following_url": "https://api.github.com/users/jacklanda/following{/other_user}", "gists_url": "https://api.github.com/users/jacklanda/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacklanda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacklanda/subscriptions", "organizations_url": "https://api.github.com/users/jacklanda/orgs", "repos_url": "https://api.github.com/users/jacklanda/repos", "events_url": "https://api.github.com/users/jacklanda/events{/privacy}", "received_events_url": "https://api.github.com/users/jacklanda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that failed in workflow due to `Read Timeout` on T5-relevant testing.\r\nHow can I rerun for this?\r\n\r\n![图片](https://github.com/huggingface/transformers/assets/54089835/f9922d94-440f-42d6-92b7-69f38e604d81)\r\n![图片](https://github.com/huggingface/transformers/assets/54089835/4a1eec18-f2cf-4509-93aa-1c254286fe2e)\r\n", "We can re-run that for you 😉 \r\n", "_The documentation is not available anymore as the PR was closed or merged._", "Request for review :)", "@jacklanda Could you update the error message as requested by @ArthurZucker? ", "> @jacklanda Could you update the error message as requested by @ArthurZucker?\r\n\r\n@amyeroberts Have updated the mentioned error messages by @ArthurZucker \r\nThanks.", "Ask for final review :)" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# Overview This PR adds support for the usage scenario of "getting a slice from the batch-tokenized sequences". Without this PR, it raises a KeyError with the following message: KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers' P.S. The above scenario can be reproduced with some newly uploaded models that do not support Rust-implemented tokenization, such as fnlp/moss-moon-003-sft. We can also run the following example script to reproduce this issue: ```python # test script `/home/workspace/test.py` for this PR. from transformers import AutoTokenizer tok = AutoTokenizer.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True) tok.add_special_tokens({"pad_token": "[PAD]"}) texts = ["Today is a good day!", "It's a good idea!", "How's going?"] batch_tok = tok(texts, padding=True) print(batch_tok[0:3]) # report `KeyError` here ``` # Error Message ```txt Traceback (most recent call last): File "/home/workspace/test.py", line 8, in <module> print(batch_tok[0:3]) # report `KeyError` here File "/home/app/anaconda3/envs/test/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 242, in __getitem__ raise KeyError( KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers' ``` All in all, I think it is useful to implement the __getitem__ method for this on the Python side :) Note that this PR is associated with the previously closed one. https://github.com/huggingface/transformers/pull/23645
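A usage sketch of what the added slice support enables — `gpt2` is used here only as a stand-in for any Python (slow) tokenizer, since the original report used fnlp/moss-moon-003-sft, which requires `trust_remote_code`:

```python
from transformers import AutoTokenizer

# Any slow (Python-based) tokenizer exhibits the original KeyError on slices.
tok = AutoTokenizer.from_pretrained("gpt2", use_fast=False)
batch = tok(["Today is a good day!", "It's a good idea!", "How's going?"])

# With the slice handling added by this PR, this returns a dict of per-key
# slices instead of raising the KeyError shown above.
print(batch[0:2])
```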
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24039/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24039", "html_url": "https://github.com/huggingface/transformers/pull/24039", "diff_url": "https://github.com/huggingface/transformers/pull/24039.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24039.patch", "merged_at": 1686137359000 }
https://api.github.com/repos/huggingface/transformers/issues/24038
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24038/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24038/comments
https://api.github.com/repos/huggingface/transformers/issues/24038/events
https://github.com/huggingface/transformers/issues/24038
1,743,070,764
I_kwDOCUB6oc5n5SYs
24,038
Add VGCN-BERT model
{ "login": "Louis-udm", "id": 25377679, "node_id": "MDQ6VXNlcjI1Mzc3Njc5", "avatar_url": "https://avatars.githubusercontent.com/u/25377679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Louis-udm", "html_url": "https://github.com/Louis-udm", "followers_url": "https://api.github.com/users/Louis-udm/followers", "following_url": "https://api.github.com/users/Louis-udm/following{/other_user}", "gists_url": "https://api.github.com/users/Louis-udm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Louis-udm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Louis-udm/subscriptions", "organizations_url": "https://api.github.com/users/Louis-udm/orgs", "repos_url": "https://api.github.com/users/Louis-udm/repos", "events_url": "https://api.github.com/users/Louis-udm/events{/privacy}", "received_events_url": "https://api.github.com/users/Louis-udm/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "The new implement is here:\r\nhttps://huggingface.co/zhibinlu/vgcn-bert-distilbert-base-uncased" ]
1,686
1,687
1,687
NONE
null
### Model description Hi, I am the author of the [VGCN-BERT paper](https://arxiv.org/abs/2004.05707); the original implementation is in my [vgcn-bert repo](https://github.com/Louis-udm/VGCN-BERT), but recently I updated the algorithm and implemented a new version for integration into Transformers. > Much progress has been made recently on text classification with methods based on neural networks. In particular, models using attention mechanism such as BERT have shown to have the capability of capturing the contextual information within a sentence or document. However, their ability of capturing the global information about the vocabulary of a language is more limited. This latter is the strength of Graph Convolutional Networks (GCN). In this paper, we propose VGCN-BERT model which combines the capability of BERT with a Vocabulary Graph Convolutional Network (VGCN). Local information and global information interact through different layers of BERT, allowing them to influence mutually and to build together a final representation for classification. In our experiments on several text classification datasets, our approach outperforms BERT and GCN alone, and achieve higher effectiveness than that reported in previous studies. I have already finished the integration of my new version and opened the PR. This new VGCN-BERT algorithm has the following improvements: - Greatly speeds up the computation of the vocabulary graph convolutional network embedding (or Word Graph embedding). Taking CoLA as an example, the new model only increases the training time by 11% compared with the base model. - Updated subgraph selection algorithm. - Currently uses DistilBERT as the base model, but it is easy to migrate to other models. - Provides two graph construction methods in vgcn_bert/modeling_graph.py (the same NPMI statistical method as the paper, and a predefined entity-relationship mapping method). I hope that after integrating into transformers, someone can discover more practical use cases. I am ashamed to say that I have not discovered many real use cases myself, mainly because the word-grounded graph obtained through statistical methods brings limited improvement to the LLM. I think its potential application is when there are specific/customized graphs that need to be integrated into an LLM. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://arxiv.org/abs/2004.05707 https://github.com/Louis-udm/VGCN-BERT
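The new implementation referenced in the comment above lives on the Hub; a loading sketch only — the checkpoint name comes from that comment, and it is assumed the modeling code ships with the repo (the forward pass may need extra graph inputs, so it is not shown here):

```python
from transformers import AutoModel, AutoTokenizer

# Checkpoint name taken from the author's comment; trust_remote_code is assumed
# to be required because VGCN-BERT is not a built-in architecture yet.
name = "zhibinlu/vgcn-bert-distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, trust_remote_code=True)
```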
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24038/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24037
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24037/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24037/comments
https://api.github.com/repos/huggingface/transformers/issues/24037/events
https://github.com/huggingface/transformers/issues/24037
1,742,872,400
I_kwDOCUB6oc5n4h9Q
24,037
BitsAndBytesConfig llm_int8_skip_modules does not work in the new version
{ "login": "AntoineBlanot", "id": 91732614, "node_id": "U_kgDOBXe6hg", "avatar_url": "https://avatars.githubusercontent.com/u/91732614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntoineBlanot", "html_url": "https://github.com/AntoineBlanot", "followers_url": "https://api.github.com/users/AntoineBlanot/followers", "following_url": "https://api.github.com/users/AntoineBlanot/following{/other_user}", "gists_url": "https://api.github.com/users/AntoineBlanot/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntoineBlanot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntoineBlanot/subscriptions", "organizations_url": "https://api.github.com/users/AntoineBlanot/orgs", "repos_url": "https://api.github.com/users/AntoineBlanot/repos", "events_url": "https://api.github.com/users/AntoineBlanot/events{/privacy}", "received_events_url": "https://api.github.com/users/AntoineBlanot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AntoineBlanot \r\nThanks for the issue and flagging! \r\nhttps://github.com/huggingface/transformers/pull/24043 should fix the issue!" ]
1,686
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import RobertaForSequenceClassification, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=['classifier']) model = RobertaForSequenceClassification.from_pretrained('roberta-large-mnli', quantization_config=quantization_config) ``` ### Expected behavior The 'classifier' layer should be in Float16 but is actually loaded in 8bit. This is problematic because it drastically lowers the performance of the model. It also makes it impossible to train (using the peft library, for example).
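A quick way to check whether the skip list was honoured — a sketch only, assuming a CUDA GPU with `bitsandbytes` and `accelerate` installed (module paths follow the `roberta-large-mnli` layout used above):

```python
import bitsandbytes as bnb
from transformers import BitsAndBytesConfig, RobertaForSequenceClassification

config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=["classifier"])
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-large-mnli", quantization_config=config, device_map="auto"
)

# If the skip list is respected, the classification head stays a plain nn.Linear
# while encoder projections become bitsandbytes Linear8bitLt modules.
print(type(model.classifier.dense))
print(isinstance(model.roberta.encoder.layer[0].attention.self.query, bnb.nn.Linear8bitLt))
```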
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24037/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24036
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24036/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24036/comments
https://api.github.com/repos/huggingface/transformers/issues/24036/events
https://github.com/huggingface/transformers/pull/24036
1,742,861,032
PR_kwDOCUB6oc5SQKRp
24,036
Use TruncatedNormal from Keras initializers
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? This PR updates the types of `get_initializer` to use `TruncatedNormal` from `tf.keras.initializers`. Before this change the type was set to `tf.initializers.TruncatedNormal`, while `tf.keras.initializers.TruncatedNormal` is what was returned from the function. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
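For reference, a simplified sketch of the function whose annotation this touches (the real implementation lives in `modeling_tf_utils.py`; this is not the exact source):

```python
import tensorflow as tf

def get_initializer(initializer_range: float = 0.02) -> tf.keras.initializers.TruncatedNormal:
    """Create a tf.keras.initializers.TruncatedNormal with the given stddev."""
    return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)
```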
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24036/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24036", "html_url": "https://github.com/huggingface/transformers/pull/24036", "diff_url": "https://github.com/huggingface/transformers/pull/24036.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24036.patch", "merged_at": 1686059505000 }
https://api.github.com/repos/huggingface/transformers/issues/24035
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24035/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24035/comments
https://api.github.com/repos/huggingface/transformers/issues/24035/events
https://github.com/huggingface/transformers/pull/24035
1,742,812,809
PR_kwDOCUB6oc5SP_pZ
24,035
Add overloads for PretrainedModel.from_pretrained
{ "login": "ringohoffman", "id": 27844407, "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ringohoffman", "html_url": "https://github.com/ringohoffman", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "repos_url": "https://api.github.com/users/ringohoffman/repos", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24035). All of your documentation changes will be reflected on that endpoint.", "> you can search the source code, it's not present anywhere\r\n\r\nSearching in the source code, you can see it here: https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+%40overload&type=code\r\n\r\nIs there a specific part of this `overload` that makes it difficult to merge in?\r\n\r\nThis overload makes it obvious to users that you only get the `LoadingInfo` when you pass in `output_loading_info=True` and that otherwise they will only get the model they expected.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23980 This PR fixes the type hints that users see when calling `PretrainedModel.from_pretrained` before: ```python import transformers bert_model = transformers.BertForSequenceClassification.from_pretrained("...") reveal_type(bert_model) # Type of "bert_model" is "tuple[Unknown | BertForSequenceClassification, dict[str, Unbound | Unknown] | dict[str, Unknown | list[Unknown]] | Unknown] | Unknown | BertForSequenceClassification" bert_model_and_loading_info = transformers.BertForSequenceClassification.from_pretrained("...", output_loading_info=True) reveal_type(bert_model_and_loading_info) # Type of "bert_model_and_loading_info" is "tuple[Unknown | BertForSequenceClassification, dict[str, Unbound | Unknown] | dict[str, Unknown | list[Unknown]] | Unknown] | Unknown | BertForSequenceClassification" ``` after: ```python import transformers bert_model = transformers.BertForSequenceClassification.from_pretrained("...") reveal_type(bert_model) # Type of "bert_model" is "BertForSequenceClassification" bert_model_and_loading_info = transformers.BertForSequenceClassification.from_pretrained("...", output_loading_info=True) reveal_type(bert_model_and_loading_info) # Type of "bert_model_and_loading_info" is "Tuple[BertForSequenceClassification, LoadingInfo]" ``` 1. move `output_loading_info` from variadic kwargs to be an explicit kwarg 2. create overloaded signature for `from_pretrained` based on its value 3. add `LoadingInfo` `TypedDict` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger, @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24035/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24035", "html_url": "https://github.com/huggingface/transformers/pull/24035", "diff_url": "https://github.com/huggingface/transformers/pull/24035.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24035.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24034
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24034/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24034/comments
https://api.github.com/repos/huggingface/transformers/issues/24034/events
https://github.com/huggingface/transformers/issues/24034
1,742,750,572
I_kwDOCUB6oc5n4ENs
24,034
AttributeError: ‘EvalPrediction’ object has no attribute ‘prediction’
{ "login": "Lionel98", "id": 92796786, "node_id": "U_kgDOBYf3cg", "avatar_url": "https://avatars.githubusercontent.com/u/92796786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lionel98", "html_url": "https://github.com/Lionel98", "followers_url": "https://api.github.com/users/Lionel98/followers", "following_url": "https://api.github.com/users/Lionel98/following{/other_user}", "gists_url": "https://api.github.com/users/Lionel98/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lionel98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lionel98/subscriptions", "organizations_url": "https://api.github.com/users/Lionel98/orgs", "repos_url": "https://api.github.com/users/Lionel98/repos", "events_url": "https://api.github.com/users/Lionel98/events{/privacy}", "received_events_url": "https://api.github.com/users/Lionel98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Looking at your code, your `compute_metric` function is calling `pred.prediction` while it should be `pred.predictions`. ", "Much appreciated tyvm for your help. " ]
1,686
1,707
1,686
NONE
null
I'm trying to fine-tune MiniLM via Hugging Face using the following code: ![image](https://github.com/huggingface/transformers/assets/92796786/945b681b-3eea-4b49-b710-8387f68227b1) Error Message: AttributeError: 'EvalPrediction' object has no attribute 'prediction' ### Who can help? @sgugger @ArthurZucker @gan ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction A custom dataset was used and mapping was done appropriately. ### Expected behavior A fine-tuned model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24034/timeline
completed
null
null
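For reference, the fix discussed in the issue above comes down to using the plural `predictions` attribute of `EvalPrediction`. A minimal sketch of a metrics callback the `Trainer` will accept — the accuracy metric and the assumption that `predictions` is a single logits array are illustrative, not taken from the original report:

```python
import numpy as np
from transformers import EvalPrediction

def compute_metrics(pred: EvalPrediction):
    # EvalPrediction exposes `predictions` and `label_ids`;
    # accessing `pred.prediction` raises the AttributeError reported above.
    logits = pred.predictions  # assumed here to be a single logits array
    labels = pred.label_ids
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```

Passed as `Trainer(..., compute_metrics=compute_metrics)`, this runs at every evaluation step.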
https://api.github.com/repos/huggingface/transformers/issues/24033
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24033/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24033/comments
https://api.github.com/repos/huggingface/transformers/issues/24033/events
https://github.com/huggingface/transformers/pull/24033
1,742,671,069
PR_kwDOCUB6oc5SPj1L
24,033
fix type annotation for debug arg
{ "login": "Bearnardd", "id": 43574448, "node_id": "MDQ6VXNlcjQzNTc0NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearnardd", "html_url": "https://github.com/Bearnardd", "followers_url": "https://api.github.com/users/Bearnardd/followers", "following_url": "https://api.github.com/users/Bearnardd/following{/other_user}", "gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions", "organizations_url": "https://api.github.com/users/Bearnardd/orgs", "repos_url": "https://api.github.com/users/Bearnardd/repos", "events_url": "https://api.github.com/users/Bearnardd/events{/privacy}", "received_events_url": "https://api.github.com/users/Bearnardd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @amyeroberts, I have made a fix to the code. I'm not certain if there is a more concise way to address this, but I added an additional check for `self.debug` being `None`. The reason for this is that when using `Union[str, List[DebugOption]]`, even if `default=\"\"` is specified, it is still evaluated as `None`.", "@Bearnardd Thanks for updating. Could you share a snippet to reproduce showing the evaluation of `debug` as `None` if left as default? \r\n\r\nIf I create the training arguments directly, when working from `main`, debug defaults to `[]` with the type changes e.g. \r\n\r\n```python\r\nIn [1]: from transformers import TrainingArguments\r\n\r\nIn [2]: args = TrainingArguments(\"dummy_dir\")\r\n\r\nIn [3]: args.debug\r\nOut[3]: []\r\n```\r\n\r\nSo it might be an environment or how it's being used in a script thing? \r\n\r\n\r\n ", "Hi @amyeroberts! It was one od the failing test cases. I will be back at home tomorrow so I will check that to confirm :)", "Hi @amyeroberts! I have done some quick debugging. I was able to obtain the same results as you while running your snippet. However the problem appears when you try to run things from CLI. One of the test cases that were failing is `test_run_seq2seq_no_dist` from `transformers/tests/extended/test_trainer_ext.py` which uses command line arguments. In a result of running this test case there is a chain of internal function calls as follows: \r\n\r\n```\r\nparser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))\r\nmodel_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n```\r\n\r\n`parse_args_into_dataclasses` underneath calls `self.parse_known_args(args=args)` which is a method derived from `ArgumentParser`.\r\n\r\n```\r\nnamespace, remaining_args = self.parse_known_args(args=args) # hf_argparser.py:339\r\n```\r\n In the `argparse` itself there is a default action `--debug` which is initialize as `None`. And here is a trick: if `debug` argument is of type str then argparse is able to internally cast it into empty string however it leaves if as `None` if it is of type `Union[str, List[DebugOption]`. Thats why this test fails if we change type annotation of `debug` argument.\r\n\r\n\r\nIs this explanation understandable for you or do you need some additional context/information :) ?", "@Bearnardd Thanks for such a detailed investigation and write up! In this case, resolving this with the `--debug` flag in `argparse` would be very involved and this `None` check works well :) " ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Fix type annotation for `debug` argument in `training_args.py` Fixes https://github.com/huggingface/transformers/issues/23958 ## Who can review?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24033/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24033", "html_url": "https://github.com/huggingface/transformers/pull/24033", "diff_url": "https://github.com/huggingface/transformers/pull/24033.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24033.patch", "merged_at": 1687344141000 }
https://api.github.com/repos/huggingface/transformers/issues/24032
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24032/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24032/comments
https://api.github.com/repos/huggingface/transformers/issues/24032/events
https://github.com/huggingface/transformers/pull/24032
1,742,650,324
PR_kwDOCUB6oc5SPfWL
24,032
Tool types
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
WIP PR to have specific types for tool outputs, which should clarify interaction to and from Agents. Left to do: - [x] Remove or complete the video integration - [x] Add support for remote tools - [x] Complete documentation - [x] Test it out with real world use-cases - [x] Add a test to ensure that the inputs are cast correctly (so far only the outputs are tested) - [x] Arrange the dependencies so that they don't make all the tests fail
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24032/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24032", "html_url": "https://github.com/huggingface/transformers/pull/24032", "diff_url": "https://github.com/huggingface/transformers/pull/24032.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24032.patch", "merged_at": 1686332048000 }
https://api.github.com/repos/huggingface/transformers/issues/24031
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24031/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24031/comments
https://api.github.com/repos/huggingface/transformers/issues/24031/events
https://github.com/huggingface/transformers/issues/24031
1,742,573,651
I_kwDOCUB6oc5n3ZBT
24,031
Add scGPT Model
{ "login": "jprivera44", "id": 9093934, "node_id": "MDQ6VXNlcjkwOTM5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9093934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jprivera44", "html_url": "https://github.com/jprivera44", "followers_url": "https://api.github.com/users/jprivera44/followers", "following_url": "https://api.github.com/users/jprivera44/following{/other_user}", "gists_url": "https://api.github.com/users/jprivera44/gists{/gist_id}", "starred_url": "https://api.github.com/users/jprivera44/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jprivera44/subscriptions", "organizations_url": "https://api.github.com/users/jprivera44/orgs", "repos_url": "https://api.github.com/users/jprivera44/repos", "events_url": "https://api.github.com/users/jprivera44/events{/privacy}", "received_events_url": "https://api.github.com/users/jprivera44/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @jprivera44, thanks for opening this issue! \r\n\r\nThe fastest and easiest way to add a model to be used in the transformers library, is to add the model code and its weights on the code. Here's a how-to guide: https://huggingface.co/docs/transformers/custom_models\r\n\r\ncc @Rocketknight1 \r\n\r\n", "Hello @amyeroberts , thank you for the comment! After reviewing the content, I plan to stick with the process outlined in this [link](https://huggingface.co/transformers/v4.8.0/add_new_model.html), which goes over how to add a model from scratch. Since the model is SOTA the involved process will make it easier for our community to leverage the model, and make the overall codebase more interpretable. If you have any questions please let me know!\r\n\r\nTo give you an update, I just ran the code to load the model weights and I'm now focusing on tracing the forward pass.", "Hi @jprivera44, \r\n\r\nGlad to hear you've already got the weight loading logic working! \r\n\r\nAnyone is welcome to open a model PR in the library. However, please be aware that it is no longer the preferred method. Any model code merged directly into the repo brings its own maintenance costs, and so the barrier to add is a lot higher. \r\n\r\nOur experience is that model PRs always take a lot longer that one expects, and is a large amount of work for both parties, particularly if the contributor hasn't added models previously. \r\n\r\nWith regards to your points: \r\n* SOTA or not, models are just as easily used if they're implemented on the hub. For example, the recent [falcon model](https://huggingface.co/tiiuae/falcon-40b/blob/main/modelling_RW.py) was first added there. \r\n\r\n* I'm not sure how you're defining interpretability, however model code should be equivalently understandable in either place (I'd argue it's easier on the hub without unrelated code & commits - but it's all subjective)\r\n\r\n* What will be a blocker is adding through a PR here. As mentioned above, it can be a long process and, as other community members haven't requested this model, it won't be a priority for us to review and merge in. \r\n", "Hi @amyeroberts, that all makes sense on my end!\r\n\r\nI'll go ahead and add the scGPT model via the custom models link you mentioned, as the initial version of this code.\r\n\r\n" ]
1,685
1,686
null
CONTRIBUTOR
null
### Model description scGPT is a single-cell foundation model based on the GPT architecture. The model is shown to have captured meaningful biological insights into cells and genes. The authors state the model can be fine-tuned for downstream tasks including cell-type annotation, genetic perturbation, etc. I'd like to add scGPT to HuggingFace Transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI](https://www.biorxiv.org/content/10.1101/2023.04.30.538439v1.full.pdf) by [Haotian Cui](https://www.researchgate.net/scientific-contributions/Haotian-Cui-2193100667), [Chloe Wang](https://www.linkedin.com/in/chloe-xueqi-wang-979712158/?originalSubdomain=ca), [Hassaan Maan](https://hsmaan.com/), [Bo Wang](https://bowang87.github.io/) Github link: [scGPT by subercui](https://github.com/bowang-lab/scGPT) Model Checkpoint: [Google Drive](https://drive.google.com/drive/folders/1kkug5C7NjvXIwQGGaGoqXTk_Lb_pDrBU) - From this checkpoint I can generate the model weights
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24031/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24031/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24030
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24030/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24030/comments
https://api.github.com/repos/huggingface/transformers/issues/24030/events
https://github.com/huggingface/transformers/pull/24030
1,742,482,785
PR_kwDOCUB6oc5SO5ux
24,030
Use new parametrization based weight norm if available
{ "login": "ezyang", "id": 13564, "node_id": "MDQ6VXNlcjEzNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/13564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ezyang", "html_url": "https://github.com/ezyang", "followers_url": "https://api.github.com/users/ezyang/followers", "following_url": "https://api.github.com/users/ezyang/following{/other_user}", "gists_url": "https://api.github.com/users/ezyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/ezyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ezyang/subscriptions", "organizations_url": "https://api.github.com/users/ezyang/orgs", "repos_url": "https://api.github.com/users/ezyang/repos", "events_url": "https://api.github.com/users/ezyang/events{/privacy}", "received_events_url": "https://api.github.com/users/ezyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "PyTorch side PR has landed!", "@ezyang \r\n\r\nWith nightly pytorch, we get \r\n\r\n> AttributeError: encoder.pos_conv_embed.conv.weight_v not found in PyTorch model\r\n\r\nwhen trying to load a pytorch model into a TF model.\r\n\r\nThe TF model is looking for `encoder.pos_conv_embed.conv.weight_v` but we now have `encoder.pos_conv_embed.conv.parametrizations.weight.original0`. (This is from our `(TF)Wav2Vec2Model` model class).\r\n\r\n**Question**: In your PR https://github.com/pytorch/pytorch/pull/103001, is this part\r\n\r\n`def _weight_norm_compat_hook()`\r\n\r\nthat deals with the backward compatibility? If so, we will copy it :-) ", "Yep. The change here is not FC so the ingester needs updating." ]
1,685
1,687
1,686
CONTRIBUTOR
null
# What does this PR do? In https://github.com/pytorch/pytorch/pull/103001 I introduce a new parametrization based version of `weight_norm`. One big benefit of the new API is that the resulting model is deepcopy'able; today, you can't deepcopy Wav2Vec2 models. Since the new API isn't even in PyTorch main yet, I'd like to feature gate it here, so that it gets used whenever PyTorch is recent enough to support it. It would be a big help for me if you could take this change earlier rather than later; otherwise I will have to patch transformers in our own CI to get our benchmark harness working on Wav2Vec2. Signed-off-by: Edward Z. Yang <[email protected]> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? cc @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24030/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24030", "html_url": "https://github.com/huggingface/transformers/pull/24030", "diff_url": "https://github.com/huggingface/transformers/pull/24030.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24030.patch", "merged_at": 1686072898000 }
https://api.github.com/repos/huggingface/transformers/issues/24029
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24029/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24029/comments
https://api.github.com/repos/huggingface/transformers/issues/24029/events
https://github.com/huggingface/transformers/pull/24029
1,742,468,844
PR_kwDOCUB6oc5SO2ky
24,029
Add check for tied parameters
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The test are failing because I used a function that i have added recently in accelerate.utils. Should we use the main for the tests @sgugger ? ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,686
1,686
MEMBER
null
# What does this PR do? This is the transformers-side counterpart of this [PR](https://github.com/huggingface/accelerate/pull/1529). It fixes the case where a user passes their own device map (in the `from_pretrained` method) but forgets that parameters that are tied together should be on the same device. We return an error showing which parameters should be on the same device.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24029/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24029", "html_url": "https://github.com/huggingface/transformers/pull/24029", "diff_url": "https://github.com/huggingface/transformers/pull/24029.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24029.patch", "merged_at": 1686057166000 }
https://api.github.com/repos/huggingface/transformers/issues/24028
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24028/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24028/comments
https://api.github.com/repos/huggingface/transformers/issues/24028/events
https://github.com/huggingface/transformers/pull/24028
1,742,288,219
PR_kwDOCUB6oc5SOOYa
24,028
🚨🚨🚨 Replace DataLoader logic for Accelerate in Trainer, remove unneeded tests 🚨🚨🚨
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1834083927, "node_id": "MDU6TGFiZWwxODM0MDgzOTI3", "url": "https://api.github.com/repos/huggingface/transformers/labels/External", "name": "External", "color": "fbca04", "default": false, "description": "Using the library with external tools (onnx, tflite, ...)" }, { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" }, { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" }, { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Great PR, currently this is breaking my custom collate_fn in the dataloader, still trying to understand why that is. First assumption might be due to multiprocessing?", "@franz101 please open an issue with a reproducer of what you are trying to do so we can help :)" ]
1,685
1,689
1,686
CONTRIBUTOR
null
# What does this PR do? This PR: - Guts the internals for the `DataLoader` in all basic distributed fashions (replacing `pl.Loader` for TPU coming in a follow-up PR) to replace it with `accelerator.prepare` - Removes **two** tests that were deemed unnecessary - Test 1 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_sampler_seed`, deemed to no longer be necessary to reset the seed, as Accelerate's dataloader setup doesn't need any extra help when iterating/loading back in the seed, regardless of the torch version - Test 2 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_training_finite_iterable_dataset`, as with Accelerate's new sampler for `IterableDataset` we'll actually catch if it's `None` and raise an error, a new test will be made + clear error message on the `Accelerate` side, with a test added to `Trainer` afterwards. - Modifies two tests to use the proper attribute: Accelerator's `DataLoaders` all have `total_batch_size` rather than `batch_size` - `test_train_and_eval_dataloaders` and `test_data_is_not_parallelized_when_model_is_parallel` Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24028/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24028", "html_url": "https://github.com/huggingface/transformers/pull/24028", "diff_url": "https://github.com/huggingface/transformers/pull/24028.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24028.patch", "merged_at": 1686583417000 }
https://api.github.com/repos/huggingface/transformers/issues/24027
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24027/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24027/comments
https://api.github.com/repos/huggingface/transformers/issues/24027/events
https://github.com/huggingface/transformers/pull/24027
1,742,247,518
PR_kwDOCUB6oc5SOFLE
24,027
Fixes all hidden states output in FlaxT5
{ "login": "ztjhz", "id": 59118459, "node_id": "MDQ6VXNlcjU5MTE4NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ztjhz", "html_url": "https://github.com/ztjhz", "followers_url": "https://api.github.com/users/ztjhz/followers", "following_url": "https://api.github.com/users/ztjhz/following{/other_user}", "gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions", "organizations_url": "https://api.github.com/users/ztjhz/orgs", "repos_url": "https://api.github.com/users/ztjhz/repos", "events_url": "https://api.github.com/users/ztjhz/events{/privacy}", "received_events_url": "https://api.github.com/users/ztjhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "need to fix the tests", "Hmm okay so the PyTorch model is also missing this, so we'd need to update it here. Continuing the discussion in the issue!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Based on the discussion in the issue: https://github.com/huggingface/transformers/issues/23960#issuecomment-1579140627, we should probably not update the Flax and PyTorch T5 models since this would be a surprise breaking change to what is one of the most popular models in the lib. Feel free to make these changes locally @ztjhz if you need all the hidden states! Otherwise we can close this one for now\r\n\r\ncc @amyeroberts " ]
1,685
1,688
1,688
CONTRIBUTOR
null
Fixes #23960 @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24027/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24027", "html_url": "https://github.com/huggingface/transformers/pull/24027", "diff_url": "https://github.com/huggingface/transformers/pull/24027.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24027.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24025
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24025/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24025/comments
https://api.github.com/repos/huggingface/transformers/issues/24025/events
https://github.com/huggingface/transformers/pull/24025
1,742,146,876
PR_kwDOCUB6oc5SNvFE
24,025
Fix device placement for model-parallelism in generate for encoder/de…
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,686
1,686
COLLABORATOR
null
…coders When using model parallelism with encoder/decoder models, there is an issue in `generate` using the encoder in isolation. That encoder won't output logits on the same device as the inputs like the whole model does, and we get mismatched device errors. Repro: ```py from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", device_map="auto") inputs = {"input_ids": torch.tensor([[256047, 94124, 248079, 15697, 248203, 2]], device=0), 'attention_mask': torch.tensor([[1, 1, 1, 1, 1, 1]], device=0), 'forced_bos_token_id': 256079} model.generate(**inputs, max_length=4000) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24025/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24025/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24025", "html_url": "https://github.com/huggingface/transformers/pull/24025", "diff_url": "https://github.com/huggingface/transformers/pull/24025.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24025.patch", "merged_at": 1686076260000 }
https://api.github.com/repos/huggingface/transformers/issues/24024
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24024/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24024/comments
https://api.github.com/repos/huggingface/transformers/issues/24024/events
https://github.com/huggingface/transformers/pull/24024
1,742,068,244
PR_kwDOCUB6oc5SNdq2
24,024
Pin `deepspeed` to `0.9.2` for now
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @pacman100 " ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? DeepSpeed 0.9.3 has some issues and introduced many more failures. See for example [here](https://github.com/huggingface/transformers/actions/runs/5166856163/jobs/9307371184) and [there](https://github.com/huggingface/transformers/actions/runs/5166856163/jobs/9307371242). Let's pin `deepspeed==0.9.2` for now 🙏
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24024/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24024", "html_url": "https://github.com/huggingface/transformers/pull/24024", "diff_url": "https://github.com/huggingface/transformers/pull/24024.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24024.patch", "merged_at": 1685988028000 }
https://api.github.com/repos/huggingface/transformers/issues/24023
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24023/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24023/comments
https://api.github.com/repos/huggingface/transformers/issues/24023/events
https://github.com/huggingface/transformers/pull/24023
1,742,032,248
PR_kwDOCUB6oc5SNVzg
24,023
Fixing single candidate_label return.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,704
1,686
CONTRIBUTOR
null
# What does this PR do? Fix #24008 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24023/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24023", "html_url": "https://github.com/huggingface/transformers/pull/24023", "diff_url": "https://github.com/huggingface/transformers/pull/24023.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24023.patch", "merged_at": 1686057970000 }
https://api.github.com/repos/huggingface/transformers/issues/24022
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24022/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24022/comments
https://api.github.com/repos/huggingface/transformers/issues/24022/events
https://github.com/huggingface/transformers/pull/24022
1,741,850,599
PR_kwDOCUB6oc5SMtzv
24,022
Update README.md
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Remove the mention of `prepare_for_doc_test.py`, as this file is no longer necessary and is removed in #23271 Thanks @NielsRogge for finding this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24022/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24022", "html_url": "https://github.com/huggingface/transformers/pull/24022", "diff_url": "https://github.com/huggingface/transformers/pull/24022.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24022.patch", "merged_at": 1685977695000 }
https://api.github.com/repos/huggingface/transformers/issues/24021
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24021/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24021/comments
https://api.github.com/repos/huggingface/transformers/issues/24021/events
https://github.com/huggingface/transformers/issues/24021
1,741,768,506
I_kwDOCUB6oc5n0Uc6
24,021
How to use LogitsWarper within .generate()?
{ "login": "JellePiepenbrock", "id": 14347764, "node_id": "MDQ6VXNlcjE0MzQ3NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/14347764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JellePiepenbrock", "html_url": "https://github.com/JellePiepenbrock", "followers_url": "https://api.github.com/users/JellePiepenbrock/followers", "following_url": "https://api.github.com/users/JellePiepenbrock/following{/other_user}", "gists_url": "https://api.github.com/users/JellePiepenbrock/gists{/gist_id}", "starred_url": "https://api.github.com/users/JellePiepenbrock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JellePiepenbrock/subscriptions", "organizations_url": "https://api.github.com/users/JellePiepenbrock/orgs", "repos_url": "https://api.github.com/users/JellePiepenbrock/repos", "events_url": "https://api.github.com/users/JellePiepenbrock/events{/privacy}", "received_events_url": "https://api.github.com/users/JellePiepenbrock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nSee this for a great blog post on that: https://towardsdatascience.com/the-power-of-constrained-language-models-cf63b65a035d.\r\n\r\nThe blog post includes a Colab notebook that showcases creating a custom `LogitsProcessor`", "cc @gante ", "@JellePiepenbrock 👋 \r\n\r\nTemperature only has an effect on sample-based text generation method. When no sampling is done (`do_sample=False`), the most likely token(s) will be selected at each token selection step, so temperature scaling doesn't change the output at all.\r\n\r\nHave a look at this blog post: https://huggingface.co/blog/how-to-generate", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.16.2 - Platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1+cu101 (True) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using a GPT2 Model, I want to affect the logits, as they are used in the generate function. To do this, I created a LogitsWarper, which is the only member of the LogitsProcessorList: ``` logits_processor_list = LogitsProcessorList([ TemperatureLogitsWarper(10.0) ]) ``` This is then given as an argument to the generate() function: ``` beam_output = model.generate( sample, ... logits_processor_list=logits_processor_list, early_stopping=True, num_return_sequences=k, ... do_sample=False, output_scores=True, return_dict_in_generate=True, length_penalty=0, ) ``` ### Expected behavior When changing the temperature, I expect the output sequence probabilities to be different, but they do not differ between a temperature of 1.0 and 10.0. My understanding is that logits_processor_list will be propagated to the specific search function that will be called (beam search, in this case). Should I do this differently, or is there an easier way to affect the temperature for all the search procedures? I know that .generate has a _temperature_ parameter, but this seems to only be used automatically when do_sample=True (https://github.com/huggingface/transformers/issues/22405) . How can I change the temperature when that is not the case?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24021/timeline
completed
null
null
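As the replies in the issue above point out, a `TemperatureLogitsWarper` (or the `temperature` argument) only changes anything once sampling is enabled. A minimal sketch of the sampling setup under which the temperature actually becomes visible — the `gpt2` checkpoint and prompt are just placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# With do_sample=False generation is greedy/beam search: the argmax token is taken at
# every step, so rescaling the logits by any temperature cannot change the output.
outputs = model.generate(
    **inputs,
    do_sample=True,       # required for temperature to have an effect
    temperature=10.0,     # >1 flattens the distribution, <1 sharpens it
    max_new_tokens=20,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```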
https://api.github.com/repos/huggingface/transformers/issues/24020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24020/comments
https://api.github.com/repos/huggingface/transformers/issues/24020/events
https://github.com/huggingface/transformers/pull/24020
1,741,760,511
PR_kwDOCUB6oc5SMZ_w
24,020
Fix typo in Llama docstrings
{ "login": "Kh4L", "id": 3193578, "node_id": "MDQ6VXNlcjMxOTM1Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3193578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kh4L", "html_url": "https://github.com/Kh4L", "followers_url": "https://api.github.com/users/Kh4L/followers", "following_url": "https://api.github.com/users/Kh4L/following{/other_user}", "gists_url": "https://api.github.com/users/Kh4L/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kh4L/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kh4L/subscriptions", "organizations_url": "https://api.github.com/users/Kh4L/orgs", "repos_url": "https://api.github.com/users/Kh4L/repos", "events_url": "https://api.github.com/users/Kh4L/events{/privacy}", "received_events_url": "https://api.github.com/users/Kh4L/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts yes, I ran it locally, got the error and fixed it,\r\n\r\nhere are the types in my code\r\n```\r\ntype(tokenizer)\r\n<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>\r\n\r\ntype(inputs)\r\n<class 'torch.Tensor'\r\n``", "@Kh4L Out of interest - could you share the checkpoint being used? Could you also run this snippet with the checkpoint and share the output? \r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_CHECKPOINT)\r\nprompt = \"Hey, are you conscious? Can you talk to me?\"\r\ninputs = tokenizer(prompt, return_tensors=\"pt\")\r\nprint(inputs)\r\nprint(type(inputs))\r\nprint(type(tokenizer))\r\n```\r\n\r\nThe current changes need to be checked with a standard checkpoint for all the models affected here. For instance, if I run the snippet with the OPT checkpoint in the example\r\n\r\n`MODEL_CHECKPOINT = \"facebook/opt-350m\"`\r\n\r\nI get the following output:\r\n```\r\n{'input_ids': tensor([[ 2, 13368, 6, 32, 47, 13316, 116, 2615, 47, 1067,\r\n 7, 162, 116]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\r\n<class 'transformers.tokenization_utils_base.BatchEncoding'>\r\n<class 'transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast'>\r\n```\r\n\r\n\r\n", "@amyeroberts Checkpoint is the 7B LLama converted to HF, I get the same output!\r\nSorry for the confusion, I was using `LlamaTokenizer` and not `AutoTokenizer` in my code", "Thanks for the detailed review!\r\nI am a bit confused as I can't see the latest commit https://github.com/Kh4L/pytorch-transformers/commit/62ea9f244b70ae190b20c69742027e277a241f2e in this PR, even though I pushed it on my branch https://github.com/Kh4L/pytorch-transformers/tree/fix_conscious_typo 🤔 ", "_The documentation is not available anymore as the PR was closed or merged._", "@Kh4L Github PRs were down for part of yesterday - I think it was just that. I can see the commit now and all tests are passing :) " ]
1,685
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Fix typos in docs. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24020/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24020", "html_url": "https://github.com/huggingface/transformers/pull/24020", "diff_url": "https://github.com/huggingface/transformers/pull/24020.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24020.patch", "merged_at": 1686241148000 }
https://api.github.com/repos/huggingface/transformers/issues/24019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24019/comments
https://api.github.com/repos/huggingface/transformers/issues/24019/events
https://github.com/huggingface/transformers/pull/24019
1,741,725,346
PR_kwDOCUB6oc5SMSQO
24,019
Add none check when instantiating tokenizer from auto
{ "login": "achsvg", "id": 3223219, "node_id": "MDQ6VXNlcjMyMjMyMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3223219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/achsvg", "html_url": "https://github.com/achsvg", "followers_url": "https://api.github.com/users/achsvg/followers", "following_url": "https://api.github.com/users/achsvg/following{/other_user}", "gists_url": "https://api.github.com/users/achsvg/gists{/gist_id}", "starred_url": "https://api.github.com/users/achsvg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/achsvg/subscriptions", "organizations_url": "https://api.github.com/users/achsvg/orgs", "repos_url": "https://api.github.com/users/achsvg/repos", "events_url": "https://api.github.com/users/achsvg/events{/privacy}", "received_events_url": "https://api.github.com/users/achsvg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for opening the PR and sorry for the delay 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,693
1,693
NONE
null
# What does this PR do? Many tokenizers require sentencepiece to be installed. When not installed the AutoTokenizer will fail with an unhelpful error: ``` File "/private/var/tmp/_bazel_anthony/26db13ca47961bc86c979e31d4f830d7/execroot/collimator/bazel-out/darwin_arm64-fastbuild/bin/ml_training/ai_assistant/scripts/prompts/collect_prompts_to_jsonl.runfiles/ml_training_transformers/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in tokenizer_class_from_name return getattr(module, class_name) TypeError: getattr(): attribute name must be string ``` This PR fixes that by checking that the tokenizer name is not None before trying to instantiate it and it give a better error message in case it is None. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - tokenizers: @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
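A self-contained sketch of the kind of guard this PR describes; it is illustrative only and does not reproduce the exact code in `tokenization_auto.py` (the helper name and error message below are hypothetical).

```python
import importlib


def load_class_by_name(module_name, class_name):
    # Fail with a readable message instead of "getattr(): attribute name must be string"
    # when the class name could not be resolved (e.g. sentencepiece is not installed).
    if class_name is None:
        raise ValueError(
            f"Could not resolve a tokenizer class for module '{module_name}'. "
            "Make sure the required optional dependencies (such as sentencepiece) are installed."
        )
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```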
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24019/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24019", "html_url": "https://github.com/huggingface/transformers/pull/24019", "diff_url": "https://github.com/huggingface/transformers/pull/24019.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24019.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24018/comments
https://api.github.com/repos/huggingface/transformers/issues/24018/events
https://github.com/huggingface/transformers/pull/24018
1,741,708,425
PR_kwDOCUB6oc5SMOkd
24,018
Fix `MobileViTV2` checkpoint name
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh Sorry, my bad, I'd assumed because the checkpoints in the PR were under the org they were already uploaded and didn't verify. \r\n\r\n> Question: Should we upload the model to apple/mobilevitv2-1.0-imagenet1k-256 and change all checkpoint names in the code to apple/mobilevitv2-1.0-imagenet1k-256?\r\n\r\nYep! ", "OK, so I should change everything to `apple` in this PR instead. I am not a member of `apple` on `Hub`. Maybe I can ask @hollance to help me on this?", "Also, I'd like to suggest moving the following checkpoints under `apple` org as well.\r\n\r\n```\r\nshehan97/mobilevitv2-1.0-voc-deeplabv3\r\n\r\nshehan97/mobilevitv2-2.0-imagenet1k-256\r\n\r\nshehan97/mobilevitv2-1.5-voc-deeplabv3\r\n```\r\n", "@amyeroberts checkpoint is uploaded to `apple`. All tests pass now 🙏 ", "Thanks. I missed the other 3 ones 😭 ", "> Also, I'd like to suggest moving the following checkpoints under `apple` org as well.\r\n> \r\n> ```\r\n> shehan97/mobilevitv2-1.0-voc-deeplabv3\r\n> \r\n> shehan97/mobilevitv2-2.0-imagenet1k-256\r\n> \r\n> shehan97/mobilevitv2-1.5-voc-deeplabv3\r\n> ```\r\n\r\nThank you @shehanmunasinghe for the heads up 🤗 " ]
1,685
1,686
1,685
COLLABORATOR
null
# What does this PR do? For `tests/models/mobilevitv2/test_modeling_mobilevitv2.py::MobileViTV2ModelTest::test_model_from_pretrained`, we get ```bash (line 433) OSError: apple/mobilevitv2-1.0-imagenet1k-256 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' ``` This PR changes the checkpoint name to avoid this failure. **Question**: Should we upload the model to `apple/mobilevitv2-1.0-imagenet1k-256` and change all checkpoint names in the code to `apple/mobilevitv2-1.0-imagenet1k-256`? cc @shehanmunasinghe
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24018/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24018", "html_url": "https://github.com/huggingface/transformers/pull/24018", "diff_url": "https://github.com/huggingface/transformers/pull/24018.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24018.patch", "merged_at": 1685981566000 }
https://api.github.com/repos/huggingface/transformers/issues/24017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24017/comments
https://api.github.com/repos/huggingface/transformers/issues/24017/events
https://github.com/huggingface/transformers/pull/24017
1,741,681,538
PR_kwDOCUB6oc5SMIim
24,017
Skip `test_multi_gpu_data_parallel_forward` for `MobileViTV2ModelTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, see #21991. That PR mentioned torch 2.0, but this test are skipped for some other modes even before that PR for other reasons." ]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Skip `test_multi_gpu_data_parallel_forward` for `MobileViTV2ModelTest`. This passes on CI, but the other 2x tests running after this one are all failing with `CUDA error: misaligned address`. (Similar to #21991)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24017/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24017", "html_url": "https://github.com/huggingface/transformers/pull/24017", "diff_url": "https://github.com/huggingface/transformers/pull/24017.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24017.patch", "merged_at": 1685975373000 }
https://api.github.com/repos/huggingface/transformers/issues/24016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24016/comments
https://api.github.com/repos/huggingface/transformers/issues/24016/events
https://github.com/huggingface/transformers/pull/24016
1,741,675,874
PR_kwDOCUB6oc5SMHTy
24,016
Add DINOv2
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This PR adds DINOv2. Fixes #23739 #23773 To do: - [x] transfer checkpoints to the facebook org (when are we going to have a meta org on the hub?)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24016/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24016", "html_url": "https://github.com/huggingface/transformers/pull/24016", "diff_url": "https://github.com/huggingface/transformers/pull/24016.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24016.patch", "merged_at": 1689690847000 }
https://api.github.com/repos/huggingface/transformers/issues/24015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24015/comments
https://api.github.com/repos/huggingface/transformers/issues/24015/events
https://github.com/huggingface/transformers/pull/24015
1,741,463,588
PR_kwDOCUB6oc5SLYnn
24,015
change to suitable with half precision in tpu
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger " ]
1,685
1,685
1,685
CONTRIBUTOR
null
Description: I am utilizing TPU for training and have observed that when half_precision_backend is set to 'auto', it is automatically resolved to 'cuda_amp'. However, this causes a bug, since torch.cuda is not available on TPUs. To resolve this issue, I have implemented a conditional check: if the device is XLA (TPU), the backend switches to 'cpu_amp' instead, which avoids calling torch.cuda when using TPU. However, it appears that this change has led to a decrease in TPU speed. I would appreciate it if you could review my modification and provide suggestions for improvements.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24015/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24015", "html_url": "https://github.com/huggingface/transformers/pull/24015", "diff_url": "https://github.com/huggingface/transformers/pull/24015.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24015.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24014
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24014/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24014/comments
https://api.github.com/repos/huggingface/transformers/issues/24014/events
https://github.com/huggingface/transformers/pull/24014
1,741,442,677
PR_kwDOCUB6oc5SLUCe
24,014
fix accelerator prepare during eval only mode
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The thing is that mixed precision application for eval only mode won't work unless we prepare model" ]
1,685
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? 1. As mentioned in https://github.com/huggingface/transformers/pull/23957#issuecomment-1573810900, currently the accelerate `prepare` method is happening only during training loop. If the user is directly doing `eval`/`predict` without the training loop, the model isn't prepared leading to wrong behaviour. This PR is aimed at fixing it. 2. Should be merged after https://github.com/huggingface/accelerate/pull/1540
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24014/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24014", "html_url": "https://github.com/huggingface/transformers/pull/24014", "diff_url": "https://github.com/huggingface/transformers/pull/24014.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24014.patch", "merged_at": 1686166394000 }
https://api.github.com/repos/huggingface/transformers/issues/24013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24013/comments
https://api.github.com/repos/huggingface/transformers/issues/24013/events
https://github.com/huggingface/transformers/pull/24013
1,741,414,597
PR_kwDOCUB6oc5SLNuk
24,013
[doc-build] Use new github workflows
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Supercded by https://github.com/huggingface/transformers/pull/24079" ]
1,685
1,686
1,686
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24013/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24013", "html_url": "https://github.com/huggingface/transformers/pull/24013", "diff_url": "https://github.com/huggingface/transformers/pull/24013.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24013.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24012/comments
https://api.github.com/repos/huggingface/transformers/issues/24012/events
https://github.com/huggingface/transformers/pull/24012
1,741,331,665
PR_kwDOCUB6oc5SK7ou
24,012
[No merge] Just a Test
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,685
1,685
1,685
COLLABORATOR
null
# What does this PR do? Just a Test
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24012/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24012", "html_url": "https://github.com/huggingface/transformers/pull/24012", "diff_url": "https://github.com/huggingface/transformers/pull/24012.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24012.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24011/comments
https://api.github.com/repos/huggingface/transformers/issues/24011/events
https://github.com/huggingface/transformers/pull/24011
1,741,253,486
PR_kwDOCUB6oc5SKqh0
24,011
fix trainer slow tests related to hyperparam search
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,685
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? 1. With the Accelerate integration in Trainer, Hyperparam Search tests were failing. This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24011/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24011", "html_url": "https://github.com/huggingface/transformers/pull/24011", "diff_url": "https://github.com/huggingface/transformers/pull/24011.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24011.patch", "merged_at": 1685968090000 }
https://api.github.com/repos/huggingface/transformers/issues/24008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24008/comments
https://api.github.com/repos/huggingface/transformers/issues/24008/events
https://github.com/huggingface/transformers/issues/24008
1,741,136,842
I_kwDOCUB6oc5nx6PK
24,008
Zero-shot image classification with single-label results in 'float is not iterable' error
{ "login": "josephrocca", "id": 1167575, "node_id": "MDQ6VXNlcjExNjc1NzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1167575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josephrocca", "html_url": "https://github.com/josephrocca", "followers_url": "https://api.github.com/users/josephrocca/followers", "following_url": "https://api.github.com/users/josephrocca/following{/other_user}", "gists_url": "https://api.github.com/users/josephrocca/gists{/gist_id}", "starred_url": "https://api.github.com/users/josephrocca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josephrocca/subscriptions", "organizations_url": "https://api.github.com/users/josephrocca/orgs", "repos_url": "https://api.github.com/users/josephrocca/repos", "events_url": "https://api.github.com/users/josephrocca/events{/privacy}", "received_events_url": "https://api.github.com/users/josephrocca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It might make sense to have some sort of explicit option for opting into the \"similarity score\" mode - that may be less surprising e.g. for people that are using the pipeline for a variable number of labels, and expect the scores to mean the same thing regardless of how many labels are passed in.\r\n\r\nBut if that approach is taken, then it seems like it would require a breaking change, since zero-shot text classification doesn't return a score of 1 if you pass a single label.", "Yes, The single label odd behavior, is legacy, I don't think it's something we want to support in the same way for a new pipeline.\r\n\r\nThat being said, having a parameter to deactivate normalization, and at the very least *not crashing* is desirable.\r\n", "Created a PR to just fix the failure (will just return 1.0 all the time)" ]
1,685
1,704
1,686
CONTRIBUTOR
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Colab notebook**: https://colab.research.google.com/gist/josephrocca/fdf9537e06fd15d2b5a81727fd70d56b/zero-shot-classification-with-single-label.ipynb ```py !pip install transformers !wget https://i.imgur.com/RKsLoNB.png from transformers import pipeline image_classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14-336") text_classifier = pipeline(model="facebook/bart-large-mnli") # multi-label text - works ✅ text_classifier("houston, we have a problem with the thruster", candidate_labels=["astronaut", "forest cabin", "rabbit and lion"]) # single-label text - works ✅ text_classifier("houston, we have a problem with the thruster", candidate_labels=["astronaut"]) # multi-label image - works ✅ image_classifier("RKsLoNB.png", candidate_labels = ["astronaut", "forest cabin", "rabbit and lion"]) # single-label image - doesn't work ❌ image_classifier("RKsLoNB.png", candidate_labels = ["astronaut"]) ``` ### Expected behavior Like in the text case, I'd expect it to not give an error, and importantly, I'd expect it to give a value between 0 and 1, rather than giving a value of exactly 1 (again, like in the text classification case). In other words, if you provide only 1 label, the scores change from being *relative to other label scores* to being a simple **similarity score** between the text and the image - i.e. a binary classification (with the underlying score exposed so you can choose your own threshold).
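One possible workaround, not an official pipeline feature, is to call the CLIP model directly to obtain an unnormalized image-text similarity for a single label; a sketch assuming the same checkpoint and image as in the reproduction above:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

image = Image.open("RKsLoNB.png")
inputs = processor(text=["astronaut"], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled image-text similarity for each candidate label;
# with a single label this is a raw score rather than a softmax that is always 1.0.
print(outputs.logits_per_image)
```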
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24008/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24007/comments
https://api.github.com/repos/huggingface/transformers/issues/24007/events
https://github.com/huggingface/transformers/pull/24007
1,740,996,272
PR_kwDOCUB6oc5SJytF
24,007
[WIP] Hivemind Integration
{ "login": "chavinlo", "id": 85657083, "node_id": "MDQ6VXNlcjg1NjU3MDgz", "avatar_url": "https://avatars.githubusercontent.com/u/85657083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chavinlo", "html_url": "https://github.com/chavinlo", "followers_url": "https://api.github.com/users/chavinlo/followers", "following_url": "https://api.github.com/users/chavinlo/following{/other_user}", "gists_url": "https://api.github.com/users/chavinlo/gists{/gist_id}", "starred_url": "https://api.github.com/users/chavinlo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chavinlo/subscriptions", "organizations_url": "https://api.github.com/users/chavinlo/orgs", "repos_url": "https://api.github.com/users/chavinlo/repos", "events_url": "https://api.github.com/users/chavinlo/events{/privacy}", "received_events_url": "https://api.github.com/users/chavinlo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The idea is to allow for an \"infinite run\" with hivemind where users can join using the hivemind DHT.\r\nThe DHT will have to be initialized before initiating the trainingarguments so that the user can mess with it more freely, then the optimizer kwargs will just be passed on internally.\r\n\r\nMost of the changes are related to `max_steps`. Although hivemind should be able to work with it, I think it would be better to not use it to let it scale freely. (in other words, the user will not be able to predict the ammount of peers that will join to scale the max_steps well enough)\r\n\r\nRight now I am having some issues with `steps_trained_in_current_epoch`, the tqdm progress bar, and wandb being enabled out of nowhere. However, it seems it can run some steps, it just doesn't reports it (wandb or progress bar)\r\n\r\nSome feedback would be nice.", "> Thanks for your PR! This rewrites a lot of the `Trainer` internals, so maybe it should be better to define a `Trainer` subclass hosted on the `hivemind` side?\r\n\r\nI was thinking of that too but then what would happen with the sub trainers? (Seq2Seq for example) I have no problem with that though.", "> Right now I am having some issues with steps_trained_in_current_epoch, the tqdm progress bar, and wandb being enabled out of nowhere. However, it seems it can run some steps, it just doesn't reports it (wandb or progress bar)\r\n\r\n@sgugger \r\nEdit: solved by `self.total_batched_samples = self.total_batched_samples + 1`", "Will reopen this once I believe it's good enough and has a clean integration.\r\nFor now I will just use it internally" ]
1,685
1,686
1,686
NONE
null
# What does this PR do? This PR (will) add integration for hivemind (https://github.com/learning-at-home/hivemind), a PyTorch library for decentralized deep learning across the Internet. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier --!> Library: <!-- - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker --!> - trainer: @sgugger <!-- Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24007/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24007", "html_url": "https://github.com/huggingface/transformers/pull/24007", "diff_url": "https://github.com/huggingface/transformers/pull/24007.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24007.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24006/comments
https://api.github.com/repos/huggingface/transformers/issues/24006/events
https://github.com/huggingface/transformers/issues/24006
1,740,986,516
I_kwDOCUB6oc5nxViU
24,006
Unexpected behaviour
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada and @pacman100 \r\nBut they won't be able to help you without seeing a reproducer of the issue.", "this is very weired, I wish I could post a reproducible code here but this could only be done by opensource all my code which is limited due to policy. However, I think merged lora and the basemodel should be likely the same, why they could even get OOM, could be any possible reason?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
I have a base model and a trained LoRA model, and a new model saved with merge_and_unload(). But loading the saved model always results in OOM; is that normal? This shouldn't happen in my opinion. The saved model is the same size as the 7B model ![image](https://github.com/huggingface/transformers/assets/21303438/a588f3fb-00fc-4d2c-8dd6-8f7d138d42ca) and the script I use to load the model is the same.
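For reference, a minimal merge-and-reload sketch under the assumption that peft is used (all paths and the dtype are placeholders; the issue does not share the actual loading script). Loading the merged checkpoint should need roughly the same memory as the base 7B model, so an OOM at this point often comes down to the dtype or device placement chosen at load time.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Merge the LoRA adapter into the base model and save the result.
base = AutoModelForCausalLM.from_pretrained("path/to/base-7b", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-7b")

# Reload the merged model; without torch_dtype, from_pretrained loads weights in float32,
# which roughly doubles the memory footprint compared to float16 and can cause OOM.
reloaded = AutoModelForCausalLM.from_pretrained("path/to/merged-7b", torch_dtype=torch.float16)
```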
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24006/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24005/comments
https://api.github.com/repos/huggingface/transformers/issues/24005/events
https://github.com/huggingface/transformers/issues/24005
1,740,941,311
I_kwDOCUB6oc5nxKf_
24,005
Zero-shot classification pipeline does not appear to batch examples
{ "login": "lsimoneau", "id": 31768, "node_id": "MDQ6VXNlcjMxNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/31768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lsimoneau", "html_url": "https://github.com/lsimoneau", "followers_url": "https://api.github.com/users/lsimoneau/followers", "following_url": "https://api.github.com/users/lsimoneau/following{/other_user}", "gists_url": "https://api.github.com/users/lsimoneau/gists{/gist_id}", "starred_url": "https://api.github.com/users/lsimoneau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsimoneau/subscriptions", "organizations_url": "https://api.github.com/users/lsimoneau/orgs", "repos_url": "https://api.github.com/users/lsimoneau/repos", "events_url": "https://api.github.com/users/lsimoneau/events{/privacy}", "received_events_url": "https://api.github.com/users/lsimoneau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "> Assuming you’re using the same model, the pipeline is likely faster because it batches the inputs. If you pass a single sequence with 4 labels, you have an effective batch size of 4, and the pipeline will pass these through the model in a single pass.\r\n\r\nThis comment could be old, this has been changed 1yr+ ago.\r\n\r\nNow the batching happens when using `pipeline(..., batch_size=4)` (should you want this batch_size).\r\n\r\nThe batching happens regardless on the number of candidate_labels, and this was changed exactly for this reason. If we batched on candidate_labels automatically, users couldn't control the memory requirements nicely, so it would OOM easily with large number of labels, and couldn't batch more than number of labels if the GPU allowed for it.\r\n\r\nSo now `batch_size=n` will batch `n` samples for all forward passes, in could be `< len(candidate_labels)` or `> len(candidate_labels)`. Meaning you have a much finer control over the batching mecanism.\r\n\r\nThis is documented at the `pipeline` level since this behavior is orthogonal to each specific pipeline's behavior.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Narsil I am running into a similar problem that batch_size can only make a difference for up to the number of labels. i.e. It's not batching across sentences if we pass in a list of strings. In our test, we have 5 candidate_labels, and setting `batch_size=5` is almost the same as `batch_size=32`.\r\n\r\nLooking into the implementation, it seems we will set is_last for each sentence's last label: https://github.com/Narsil/transformers/blob/main/src/transformers/pipelines/zero_shot_classification.py#L190-L195\r\nand then the is_last will stop the accumulator when the batch is not full:https://github.com/Narsil/transformers/blob/main/src/transformers/pipelines/pt_utils.py#L270-L275\r\n Not sure does it mean that `batch_size` only takes effect on zero-shot-classifier when `batch_size < len(candidate_labels)`?\r\n" ]
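A small usage sketch of the `batch_size` argument described in the maintainer comment above; the model matches the reproduction later in this record, but the labels, texts, batch size, and `device=0` (a GPU) are illustrative assumptions.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
    device=0,
    batch_size=8,  # per the comment above, controls how many premise/hypothesis pairs go through the model per forward pass
)

texts = ["top 10 destinations to visit this summer"] * 32
labels = ["travel", "lifestyle", "technology", "sports", "finance"]

results = classifier(texts, candidate_labels=labels)
```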
1,685
1,703
1,689
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 ### Who can help? @narsi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Execute the zero-shot classification pipeline and vary the number of labels: ``` hf_pipeline = pipeline('zero-shot-classification', model="facebook/bart-large-mnli", device=device) hf_pipeline("top 10 destinations to visit this summer", candidate_labels=["travel", "lifestyle", "technology"]) ``` ### Expected behavior If we increase the number of labels, when running inference on a GPU we would expect latency to remain relatively constant as the inputs to the underlying entailment model should be batched together. This is not explicitly documented, but the forum post announcing the zero-shot pipeline does say this: >Assuming you’re using the same model, the pipeline is likely faster because it batches the inputs. If you pass a single sequence with 4 labels, you have an effective batch size of 4, and the pipeline will pass these through the model in a single pass. In practice though we see latency increase more or less linearly with label count (compared against a naive implementation batching up inferences to bart-mnli for the same inputs): ![image](https://github.com/huggingface/transformers/assets/31768/ed1f8ee3-c44c-405e-b247-90704c628459) Here is the colab used to make the graph above: https://colab.research.google.com/drive/19YiQFDcJUm8iz0azYWX35I6-vQ-bTheC?usp=sharing that demonstrates the issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24005/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24003/comments
https://api.github.com/repos/huggingface/transformers/issues/24003/events
https://github.com/huggingface/transformers/issues/24003
1,740,820,187
I_kwDOCUB6oc5nws7b
24,003
No inf checks were recorded for this optimizer
{ "login": "lanqianlong", "id": 6349557, "node_id": "MDQ6VXNlcjYzNDk1NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6349557?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lanqianlong", "html_url": "https://github.com/lanqianlong", "followers_url": "https://api.github.com/users/lanqianlong/followers", "following_url": "https://api.github.com/users/lanqianlong/following{/other_user}", "gists_url": "https://api.github.com/users/lanqianlong/gists{/gist_id}", "starred_url": "https://api.github.com/users/lanqianlong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lanqianlong/subscriptions", "organizations_url": "https://api.github.com/users/lanqianlong/orgs", "repos_url": "https://api.github.com/users/lanqianlong/repos", "events_url": "https://api.github.com/users/lanqianlong/events{/privacy}", "received_events_url": "https://api.github.com/users/lanqianlong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Miss configuration LoraConfig caused the issue", "@lanqianlong what was the misconfiguration?" ]
1,685
1,692
1,685
NONE
null
### System Info transformers 4.30.0.dev0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `trainer = transformers.Trainer( model = model, train_dataset=data, args =training_args, data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) trainer.train() ` ### Expected behavior ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) /tmp/ipykernel_1639/4032920361.py in <cell line: 1>() ----> 1 trainer.train() ~/.local/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1659 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1660 ) -> 1661 return inner_training_loop( 1662 args=args, 1663 resume_from_checkpoint=resume_from_checkpoint, ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 2013 optimizer_was_run = scale_before <= scale_after 2014 else: -> 2015 self.optimizer.step() 2016 2017 if optimizer_was_run: ~/.local/lib/python3.8/site-packages/accelerate/optimizer.py in step(self, closure) 132 elif self.scaler is not None: 133 scale_before = self.scaler.get_scale() --> 134 self.scaler.step(self.optimizer, closure) 135 self.scaler.update() 136 scale_after = self.scaler.get_scale() ~/.local/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py in step(self, optimizer, *args, **kwargs) 370 self.unscale_(optimizer) 371 --> 372 assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer." 373 374 retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs) AssertionError: No inf checks were recorded for this optimizer. ```
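The follow-up comment traces the failure to a misconfigured LoraConfig rather than the Trainer itself. As a point of comparison only, a typical LoRA setup looks roughly like the sketch below; the base model, hyperparameters, and target_modules are illustrative assumptions (they depend on the architecture) and are not the configuration used in this issue.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # module names differ per architecture
)
model = get_peft_model(model, lora_config)

# If no parameters end up trainable (e.g. wrong target_modules or a missing task_type),
# the fp16 grad scaler sees no gradients and can raise "No inf checks were recorded".
model.print_trainable_parameters()
```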
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24003/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24002/comments
https://api.github.com/repos/huggingface/transformers/issues/24002/events
https://github.com/huggingface/transformers/pull/24002
1,740,794,161
PR_kwDOCUB6oc5SJE4I
24,002
Addition of test code for GPTNeoX Flax support
{ "login": "gojiteji", "id": 38291975, "node_id": "MDQ6VXNlcjM4MjkxOTc1", "avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gojiteji", "html_url": "https://github.com/gojiteji", "followers_url": "https://api.github.com/users/gojiteji/followers", "following_url": "https://api.github.com/users/gojiteji/following{/other_user}", "gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}", "starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions", "organizations_url": "https://api.github.com/users/gojiteji/orgs", "repos_url": "https://api.github.com/users/gojiteji/repos", "events_url": "https://api.github.com/users/gojiteji/events{/privacy}", "received_events_url": "https://api.github.com/users/gojiteji/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24002). All of your documentation changes will be reflected on that endpoint.", "Hey @gojiteji! Thanks for picking-up the Flax GPT Neo PR! Would you mind rebasing onto main:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream main\r\n```\r\n\r\nAnd then force pushing the changes:\r\n```\r\ngit push -f origin fix_flax_gpt_neox\r\n```\r\n\r\nThis will then isolate the changes from your PR amongst the other ones", "Hey @gojiteji - not sure if you pushed or force pushed? See previous comment: https://github.com/huggingface/transformers/pull/24002#issuecomment-1577141369\r\n\r\nLet's see if we can revive the commit history here. In the case that we can't, we probably need to open a new PR for this", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @gojiteji - feel free to open a new PR for this if you still want to continue the integration. Currently not sure which bits are new since the commit history is broken, but am more than happy to help with any questions / queries on a fresh PR!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,691
1,691
CONTRIBUTOR
null
@sanchit-gandhi I have added test code for the GPTNeoX Flax support #22950. I implemented it based on the fork at https://github.com/OhadRubin/transformers from the above PR and on the [Flax GPT-Neo](https://github.com/gojiteji/transformers/blob/fix_flax_gpt_neox/tests/models/gpt_neo/test_modeling_flax_gpt_neo.py) test code. When I ran the tests following [the doc](https://huggingface.co/docs/transformers/add_new_model#2-next-prepare-your-environment), the log showed the following output: ``` platform linux -- Python 3.9.16, pytest-7.3.1, pluggy-1.0.0 rootdir: /myhomedir/transformers configfile: setup.cfg plugins: anyio-3.6.2 collected 43 items tests/models/gpt_neox/test_modeling_flax_gpt_neox.py sssssssssssssssssssssssssssssssssssssssssss [100%] =================================================== 43 skipped in 1.97s =================================================== ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24002/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24002", "html_url": "https://github.com/huggingface/transformers/pull/24002", "diff_url": "https://github.com/huggingface/transformers/pull/24002.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24002.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24000/comments
https://api.github.com/repos/huggingface/transformers/issues/24000/events
https://github.com/huggingface/transformers/pull/24000
1,740,598,848
PR_kwDOCUB6oc5SIbEC
24,000
changed unused args from error to warning
{ "login": "Daryl149", "id": 6736668, "node_id": "MDQ6VXNlcjY3MzY2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6736668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daryl149", "html_url": "https://github.com/Daryl149", "followers_url": "https://api.github.com/users/Daryl149/followers", "following_url": "https://api.github.com/users/Daryl149/following{/other_user}", "gists_url": "https://api.github.com/users/Daryl149/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daryl149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daryl149/subscriptions", "organizations_url": "https://api.github.com/users/Daryl149/orgs", "repos_url": "https://api.github.com/users/Daryl149/repos", "events_url": "https://api.github.com/users/Daryl149/events{/privacy}", "received_events_url": "https://api.github.com/users/Daryl149/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This feels like a limitation of being able to discern between a typo (error) and an actual unused argument (warning), but I understand :(\r\n" ]
1,685
1,686
1,686
NONE
null
the value error now breaks the code, while it can run perfectly without the unused arguments. This happens for example in https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226/discussions/2 # What does this PR do? Fixes issue referenced here: https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226/discussions/2 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Library: - generate: @gante
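A hedged sketch of the behaviour change this PR asks for — not the actual diff; the helper name and the example kwargs below are hypothetical, only the error-to-warning pattern is taken from the PR description:

```python
import logging

logger = logging.getLogger(__name__)


def report_unused_kwargs(unused_kwargs: dict) -> None:
    """Hypothetical helper illustrating the proposed change."""
    if not unused_kwargs:
        return
    # Previous behaviour: abort loading even though the model runs fine.
    # raise ValueError(f"Some kwargs are not used by the model: {unused_kwargs}")
    # Proposed behaviour: surface the problem without breaking the run.
    logger.warning("Some kwargs are not used by the model: %s", unused_kwargs)


report_unused_kwargs({"do_sample": True})  # logs a warning instead of raising
```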
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24000/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24000", "html_url": "https://github.com/huggingface/transformers/pull/24000", "diff_url": "https://github.com/huggingface/transformers/pull/24000.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24000.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23999/comments
https://api.github.com/repos/huggingface/transformers/issues/23999/events
https://github.com/huggingface/transformers/pull/23999
1,740,574,079
PR_kwDOCUB6oc5SIXGV
23,999
TensorBoard callback no longer adds hparams
{ "login": "bri25yu", "id": 46059916, "node_id": "MDQ6VXNlcjQ2MDU5OTE2", "avatar_url": "https://avatars.githubusercontent.com/u/46059916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bri25yu", "html_url": "https://github.com/bri25yu", "followers_url": "https://api.github.com/users/bri25yu/followers", "following_url": "https://api.github.com/users/bri25yu/following{/other_user}", "gists_url": "https://api.github.com/users/bri25yu/gists{/gist_id}", "starred_url": "https://api.github.com/users/bri25yu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bri25yu/subscriptions", "organizations_url": "https://api.github.com/users/bri25yu/orgs", "repos_url": "https://api.github.com/users/bri25yu/repos", "events_url": "https://api.github.com/users/bri25yu/events{/privacy}", "received_events_url": "https://api.github.com/users/bri25yu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23999). All of your documentation changes will be reflected on that endpoint." ]
1,685
1,687
1,685
CONTRIBUTOR
null
# What does this PR do? The `TensorBoardCallback.on_train_begin` function calls `add_hparams` with an empty `metric_dict` parameter, meaning that the only information that is logged is from `args.sanitized_dict()`. This is duplicated information from the previous `self.tb_writer.add_text("args", args.to_json_string())`. As a result, the `add_hparams` call is unnecessary and this PR removes it. Adjacently fixes https://github.com/huggingface/transformers/issues/21821. I'm aware of the following TensorBoard related documentation: - https://huggingface.co/docs/hub/tensorboard - https://huggingface.co/docs/transformers/main/en/main_classes/callback#transformers.integrations.TensorBoardCallback None of these docs need to be updated in this PR. A sanity check test: ```python """ Minimal replication of https://github.com/huggingface/transformers/issues/21821 """ from os import listdir from shutil import rmtree from transformers import TrainingArguments from transformers.integrations import TensorBoardCallback output_dir = "output_dir" args = TrainingArguments(output_dir=output_dir, logging_dir=output_dir) def has_extra_file(): return len(listdir(output_dir)) > 1 class DummyControl: should_training_stop = None class DummyState: is_world_process_zero = True is_hyper_param_search = False class NoHParamsTensorBoardCallback(TensorBoardCallback): # This is a copy of `TensorBoardCallback.on_train_begin` unless specified otherwise def on_train_begin(self, args, state, control, **kwargs): if not state.is_world_process_zero: return log_dir = None if state.is_hyper_param_search: trial_name = state.trial_name if trial_name is not None: log_dir = os.path.join(args.logging_dir, trial_name) if self.tb_writer is None: self._init_summary_writer(args, log_dir) if self.tb_writer is not None: self.tb_writer.add_text("args", args.to_json_string()) if "model" in kwargs: model = kwargs["model"] if hasattr(model, "config") and model.config is not None: model_config_json = model.config.to_json_string() self.tb_writer.add_text("model_config", model_config_json) ########################### # START no hparams call ########################### # Original code: # # Version of TensorBoard coming from tensorboardX does not have this method. # if hasattr(self.tb_writer, "add_hparams"): # self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict={}) ########################### # END no hparams call ########################### rmtree(output_dir, ignore_errors=True) TensorBoardCallback().on_train_begin(args, DummyState(), DummyControl()) print(f"With the call to `add_hparams`, has extra file is {has_extra_file()}") rmtree(output_dir, ignore_errors=True) NoHParamsTensorBoardCallback().on_train_begin(args, DummyState(), DummyControl()) print(f"WithOUT the call to `add_hparams`, has extra file is {has_extra_file()}") rmtree(output_dir, ignore_errors=True) # Cleanup ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23999", "html_url": "https://github.com/huggingface/transformers/pull/23999", "diff_url": "https://github.com/huggingface/transformers/pull/23999.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23999.patch", "merged_at": 1685980425000 }
https://api.github.com/repos/huggingface/transformers/issues/23991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23991/comments
https://api.github.com/repos/huggingface/transformers/issues/23991/events
https://github.com/huggingface/transformers/issues/23991
1,740,464,732
I_kwDOCUB6oc5nvWJc
23,991
When using TimeSeriesTransformerForPrediction, able to train model but not predict on test set
{ "login": "brett1099", "id": 32299264, "node_id": "MDQ6VXNlcjMyMjk5MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/32299264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brett1099", "html_url": "https://github.com/brett1099", "followers_url": "https://api.github.com/users/brett1099/followers", "following_url": "https://api.github.com/users/brett1099/following{/other_user}", "gists_url": "https://api.github.com/users/brett1099/gists{/gist_id}", "starred_url": "https://api.github.com/users/brett1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brett1099/subscriptions", "organizations_url": "https://api.github.com/users/brett1099/orgs", "repos_url": "https://api.github.com/users/brett1099/repos", "events_url": "https://api.github.com/users/brett1099/events{/privacy}", "received_events_url": "https://api.github.com/users/brett1099/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @kashif ", "thanks @brett1099 for the issue... just having a look at the error to see if i can figure out the issue...", "@brett1099 can you kindly confirm that the length of the target time series in the test set are bigger that the corresponding training set target arrays by exactly the `prediction_length`?", "Hi @kashif , yes, I can confirm that is the case. Here are the details below from my code:\r\n\r\n`train_dataset`\r\nDataset({\r\n features: ['start', 'target', 'item_id', 'feat_static_cat'],\r\n num_rows: 366\r\n})\r\n\r\n`test_dataset`\r\nDataset({\r\n features: ['start', 'target', 'item_id', 'feat_static_cat'],\r\n num_rows: 366\r\n})\r\n\r\n(limited from over 2k to 366 rows to match the article in case there was a memory issue)\r\n\r\n`train_sample = train_dataset[0]`\r\n`test_sample = test_dataset[0]`\r\n\r\n`print(len(test_sample[\"target\"]))`\r\n811\r\n`print(len(train_sample[\"target\"]))`\r\n755\r\n\r\n`prediction_length = 56`\r\n\r\n`len(train_df[\"target\"])`\r\n274546\r\n\r\n`len(test_df[\"target\"])`\r\n295042\r\n\r\nI checked that (295042 - 274546) / 366 = 56, the prediction length.", "@brett1099 can you kindly for the purpose of debugging train and do inference on the CPU device and see what the error is?", "@kashif , seems I am not able to successfully separate onto only the CPU device. Here is the code below attempting to run, specified device as cpu.\r\n\r\n`from accelerate import Accelerator\r\nfrom torch.optim import AdamW\r\n\r\naccelerator = Accelerator()\r\n#device = accelerator.device\r\ndevice = \"cpu\"\r\n\r\nmodel.to(device)\r\noptimizer = AdamW(model.parameters(), lr=2e-4, betas=(0.9, 0.995), weight_decay=1e-2)\r\n\r\nmodel, optimizer, train_dataloader = accelerator.prepare(\r\n model,\r\n optimizer,\r\n train_dataloader,\r\n)\r\n\r\nmodel.train()\r\nfor epoch in range(3):\r\n for idx, batch in enumerate(train_dataloader):\r\n optimizer.zero_grad()\r\n outputs = model(\r\n static_categorical_features=batch[\"static_categorical_features\"].to(device)\r\n if config.num_static_categorical_features > 0\r\n else None,\r\n static_real_features=batch[\"static_real_features\"].to(device)\r\n if config.num_static_real_features > 0\r\n else None,\r\n past_time_features=batch[\"past_time_features\"].to(device),\r\n past_values=batch[\"past_values\"].to(device),\r\n future_time_features=batch[\"future_time_features\"].to(device),\r\n future_values=batch[\"future_values\"].to(device),\r\n past_observed_mask=batch[\"past_observed_mask\"].to(device),\r\n future_observed_mask=batch[\"future_observed_mask\"].to(device),\r\n )\r\n loss = outputs.loss\r\n\r\n # Backpropagation\r\n accelerator.backward(loss)\r\n optimizer.step()\r\n\r\n if idx % 100 == 0:\r\n print(loss.item())\r\n\r\nError message: \r\n--------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nInput In [64], in <cell line: 18>()\r\n 19 for idx, batch in enumerate(train_dataloader):\r\n 20 optimizer.zero_grad()\r\n---> 21 outputs = model(\r\n 22 static_categorical_features=batch[\"static_categorical_features\"]\r\n 23 if config.num_static_categorical_features > 0\r\n 24 else None,\r\n 25 static_real_features=batch[\"static_real_features\"]\r\n 26 if config.num_static_real_features > 0\r\n 27 else None,\r\n 28 past_time_features=batch[\"past_time_features\"],\r\n 29 past_values=batch[\"past_values\"],\r\n 30 future_time_features=batch[\"future_time_features\"],\r\n 31 future_values=batch[\"future_values\"],\r\n 32 
past_observed_mask=batch[\"past_observed_mask\"],\r\n 33 future_observed_mask=batch[\"future_observed_mask\"],\r\n 34 )\r\n 35 loss = outputs.loss\r\n 37 # Backpropagation\r\n\r\nFile ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1603, in TimeSeriesTransformerForPrediction.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)\r\n 1600 if future_values is not None:\r\n 1601 use_cache = False\r\n-> 1603 outputs = self.model(\r\n 1604 past_values=past_values,\r\n 1605 past_time_features=past_time_features,\r\n 1606 past_observed_mask=past_observed_mask,\r\n 1607 static_categorical_features=static_categorical_features,\r\n 1608 static_real_features=static_real_features,\r\n 1609 future_values=future_values,\r\n 1610 future_time_features=future_time_features,\r\n 1611 decoder_attention_mask=decoder_attention_mask,\r\n 1612 head_mask=head_mask,\r\n 1613 decoder_head_mask=decoder_head_mask,\r\n 1614 cross_attn_head_mask=cross_attn_head_mask,\r\n 1615 encoder_outputs=encoder_outputs,\r\n 1616 past_key_values=past_key_values,\r\n 1617 output_hidden_states=output_hidden_states,\r\n 1618 output_attentions=output_attentions,\r\n 1619 use_cache=use_cache,\r\n 1620 return_dict=return_dict,\r\n 1621 )\r\n 1623 prediction_loss = None\r\n 1624 params = None\r\n\r\nFile ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1424, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)\r\n 1421 use_cache = use_cache if use_cache is not None else self.config.use_cache\r\n 1422 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n-> 1424 transformer_inputs, loc, scale, static_feat = 
self.create_network_inputs(\r\n 1425 past_values=past_values,\r\n 1426 past_time_features=past_time_features,\r\n 1427 past_observed_mask=past_observed_mask,\r\n 1428 static_categorical_features=static_categorical_features,\r\n 1429 static_real_features=static_real_features,\r\n 1430 future_values=future_values,\r\n 1431 future_time_features=future_time_features,\r\n 1432 )\r\n 1434 if encoder_outputs is None:\r\n 1435 enc_input = transformer_inputs[:, : self.config.context_length, ...]\r\n\r\nFile ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1331, in TimeSeriesTransformerModel.create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)\r\n 1329 static_feat = torch.cat((static_real_features, static_feat), dim=1)\r\n 1330 if static_categorical_features is not None:\r\n-> 1331 embedded_cat = self.embedder(static_categorical_features)\r\n 1332 static_feat = torch.cat((embedded_cat, static_feat), dim=1)\r\n 1333 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)\r\n\r\nFile ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:76, in TimeSeriesFeatureEmbedder.forward(self, features)\r\n 72 else:\r\n 73 cat_feature_slices = [features]\r\n 75 return torch.cat(\r\n---> 76 [\r\n 77 embed(cat_feature_slice.squeeze(-1))\r\n 78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)\r\n 79 ],\r\n 80 dim=-1,\r\n 81 )\r\n\r\nFile ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:77, in <listcomp>(.0)\r\n 72 else:\r\n 73 cat_feature_slices = [features]\r\n 75 return torch.cat(\r\n 76 [\r\n---> 77 embed(cat_feature_slice.squeeze(-1))\r\n 78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)\r\n 79 ],\r\n 80 dim=-1,\r\n 81 )\r\n\r\nFile ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py:160, in Embedding.forward(self, input)\r\n 159 def forward(self, input: Tensor) -> Tensor:\r\n--> 160 return F.embedding(\r\n 161 input, self.weight, self.padding_idx, self.max_norm,\r\n 162 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n\r\nFile 
~/.local/lib/python3.8/site-packages/torch/nn/functional.py:2210, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 2204 # Note [embedding_renorm set_grad_enabled]\r\n 2205 # XXX: equivalent to\r\n 2206 # with torch.no_grad():\r\n 2207 # torch.embedding_renorm_\r\n 2208 # remove once script supports set_grad_enabled\r\n 2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)\r\n", "hmm.. no I guess now things are on different devices... ok perhaps try to train with the number of categoricals set to 0 (i.e. no categorical features...)\r\n", "When changing num_static_categorical_features from 1 to 0 in the config code, the model was able to train and test successfully. Have not checked quality of run as it was a brief train for testing purposes.\r\n\r\nconfig = TimeSeriesTransformerConfig(\r\n prediction_length=prediction_length,\r\n # context length:\r\n context_length=prediction_length * 2,\r\n # lags coming from helper given the freq:\r\n lags_sequence=lags_sequence,\r\n # we'll add 2 time features (\"month of year\" and \"age\", see further):\r\n num_time_features=len(time_features) + 1,\r\n # we have a single static categorical feature, namely time series ID:\r\n **num_static_categorical_features=0,**\r\n # it has 366 possible values:\r\n cardinality=[len(train_dataset)],\r\n # the model will learn an embedding of size 2 for each of the 366 possible values:\r\n embedding_dimension=[2],\r\n \r\n # transformer params:\r\n encoder_layers=4,\r\n decoder_layers=4,\r\n d_model=32,\r\n)\r\n\r\nmodel = TimeSeriesTransformerForPrediction(config)", "ok cool then my suspicion was correct... the categorical ids in the train and test set need to have some cardinality... and it is this cardinality which is in the config... it seems there is some categorical id which is larger than the cardinality you gave the model when you configured it", "Hmm so I guess I am a bit confused on that conclusion. 
The train and test sets have the same IDs and match 1-1.\r\n\r\n`train_df.item_id.unique()`\r\narray(['UPC1', 'UPC10', 'UPC1060910111', 'UPC1061962711', 'UPC1060180111',\r\n 'UPC1090018031', ...\r\n\r\n`test_df.item_id.unique()`\r\narray(['UPC1', 'UPC10', 'UPC1060910111', 'UPC1061962711', 'UPC1060180111',\r\n 'UPC1090018031', ...\r\n\r\nI ensured this as I have the following code as well\r\n`test_df = test_df[test_df.item_id.isin(train_df.item_id.unique())]`\r\n\r\nIs it that they are not able to be strings?", "Still after running the following code to reassign the strings to numbers, I still get the original error.\r\n```\r\ntrain_classes = np.array(train_df[\"item_id\"])\r\ntrain_classnames, train_indices = np.unique(train_classes, return_inverse=True)\r\n\r\ntest_classes = np.array(test_df[\"item_id\"])\r\ntest_classnames, test_indices = np.unique(test_classes, return_inverse=True)\r\n\r\ntrain_df[\"item_id\"] = train_indices\r\ntest_df[\"item_id\"] = test_indices\r\n```\r\n\r\n`train_df.item_id.unique()`\r\narray([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,\r\n 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,\r\n 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,\r\n 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,\r\n 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,\r\n 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,\r\n 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,\r\n 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,\r\n 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,\r\n 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,\r\n 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,\r\n 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,\r\n 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,\r\n 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,\r\n 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,\r\n 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,\r\n 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,\r\n 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,\r\n 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,\r\n 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259,\r\n 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272,\r\n 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285,\r\n 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298,\r\n 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311,\r\n 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324,\r\n 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337,\r\n 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350,\r\n 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363,\r\n 364, 365])\r\n\r\n`test_df.item_id.unique()`\r\narray([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,\r\n 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,\r\n 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,\r\n 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,\r\n 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,\r\n 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,\r\n 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,\r\n 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,\r\n 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,\r\n 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,\r\n 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 
142,\r\n 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,\r\n 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,\r\n 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,\r\n 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,\r\n 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,\r\n 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,\r\n 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,\r\n 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,\r\n 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259,\r\n 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272,\r\n 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285,\r\n 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298,\r\n 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311,\r\n 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324,\r\n 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337,\r\n 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350,\r\n 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363,\r\n 364, 365])", "ok sorry for the confusion! `item_id` can be a string however the `static_categorical_features` for each of the time series can potentially contain the corresponding integer id, e.g. for the very first one `static_categorical_features = [0]` etc. and thus the `cardinality = [366]` and you can specify the embedding vector dimension... can you confirm that the `cardinality` is correct? ", "Ahh you solved it! I made a mistake thinking the item_id was utilized as the static_categorical_features. You are absolultey correct and found the issue. Here was the code causing the issue:\r\n \r\n```\r\nclass ProcessStartField():\r\n ts_id = 0\r\n \r\n def __call__(self, data):\r\n data[\"start\"] = data[\"start\"].to_timestamp()\r\n data[\"feat_static_cat\"] = [self.ts_id]\r\n self.ts_id += 1\r\n \r\n return data\r\n```\r\n\r\n```\r\nfrom gluonts.itertools import Map\r\n\r\nprocess_start = ProcessStartField()\r\n\r\nlist_ds_train = list(Map(process_start, ds_train))\r\nlist_ds_test = list(Map(process_start, ds_test))\r\n```\r\n\r\n```\r\nfrom datasets import Dataset, Features, Value, Sequence\r\n\r\nfeatures = Features(\r\n { \r\n \"start\": Value(\"timestamp[s]\"),\r\n \"target\": Sequence(Value(\"float32\")),\r\n \"feat_static_cat\": Sequence(Value(\"uint64\")),\r\n # \"feat_static_real\": Sequence(Value(\"float32\")),\r\n # \"feat_dynamic_real\": Sequence(Sequence(Value(\"uint64\"))),\r\n # \"feat_dynamic_cat\": Sequence(Sequence(Value(\"uint64\"))),\r\n \"item_id\": Value(\"string\"),\r\n }\r\n)\r\n```\r\n\r\n```\r\ntrain_dataset = Dataset.from_list(list_ds_train, features=features)\r\ntest_dataset = Dataset.from_list(list_ds_test, features=features)\r\n```\r\n\r\nThis was causing the train_dataset to get static_categorical_features values from 0-365, and the test_dataset was then getting values from 366-731. I corrected the code to now perform the mapping process individually for each dataset with the function so that it would not begin from the last number. Thanks so much for all your help!" ]
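For readers following the thread above, a minimal sketch of the fix described in the last comment: `ProcessStartField`, `Map`, `ds_train` and `ds_test` come from the snippets quoted in the thread; the per-dataset counter reset is my reading of "perform the mapping process individually for each dataset", not the author's exact code.

```python
from gluonts.itertools import Map


class ProcessStartField:
    """Attach an integer series id; a fresh instance restarts the counter at 0."""

    def __init__(self):
        self.ts_id = 0  # instance attribute instead of a shared class attribute

    def __call__(self, data):
        data["start"] = data["start"].to_timestamp()
        data["feat_static_cat"] = [self.ts_id]
        self.ts_id += 1
        return data


# One mapper per split, so train and test both get ids 0..N-1 and stay within
# the cardinality=[366] passed to TimeSeriesTransformerConfig.
list_ds_train = list(Map(ProcessStartField(), ds_train))
list_ds_test = list(Map(ProcessStartField(), ds_test))
```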
1,685
1,686
1,686
NONE
null
Using the code from this article: https://huggingface.co/blog/time-series-transformers, I was able to successfully run the data they utilized. However when attempting with my own data in the same format, I am able to get to training the model successfully but get an error when creating predictions. I have a dataset with three columns, "start" as the index, "item_id", and "target", as matching the code from the article. `test_sample = test_dataset[0] test_sample.keys()` _OUT `dict_keys(['start', 'target', 'item_id', 'feat_static_cat'])`_ Here is the full error message: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [59], in <cell line: 5>() 3 forecasts = [] 5 for batch in test_dataloader: ----> 6 outputs = model.generate( 7 static_categorical_features=batch["static_categorical_features"].to(device) 8 if config.num_static_categorical_features > 0 9 else None, 10 static_real_features=batch["static_real_features"].to(device) 11 if config.num_static_real_features > 0 12 else None, 13 past_time_features=batch["past_time_features"].to(device), 14 past_values=batch["past_values"].to(device), 15 future_time_features=batch["future_time_features"].to(device), 16 past_observed_mask=batch["past_observed_mask"].to(device), 17 ) 18 forecasts.append(outputs.sequences.cpu().numpy()) File ~/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1760, in TimeSeriesTransformerForPrediction.generate(self, past_values, past_time_features, future_time_features, past_observed_mask, static_categorical_features, static_real_features, output_attentions, output_hidden_states) 1661 @torch.no_grad() 1662 def generate( 1663 self, (...) 1671 output_hidden_states: Optional[bool] = None, 1672 ) -> SampleTSPredictionOutput: 1673 r""" 1674 Greedily generate sequences of sample predictions from a model with a probability distribution head. 1675 (...) 1758 multivariate predictions. 1759 """ -> 1760 outputs = self( 1761 static_categorical_features=static_categorical_features, 1762 static_real_features=static_real_features, 1763 past_time_features=past_time_features, 1764 past_values=past_values, 1765 past_observed_mask=past_observed_mask, 1766 future_time_features=future_time_features, 1767 future_values=None, 1768 output_attentions=output_attentions, 1769 output_hidden_states=output_hidden_states, 1770 return_dict=True, 1771 use_cache=True, 1772 ) 1774 decoder = self.model.get_decoder() 1775 enc_last_hidden = outputs.encoder_last_hidden_state File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1603, in TimeSeriesTransformerForPrediction.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1600 if future_values is not None: 1601 use_cache = False -> 1603 outputs = self.model( 1604 past_values=past_values, 1605 past_time_features=past_time_features, 1606 past_observed_mask=past_observed_mask, 1607 static_categorical_features=static_categorical_features, 1608 static_real_features=static_real_features, 1609 future_values=future_values, 1610 future_time_features=future_time_features, 1611 decoder_attention_mask=decoder_attention_mask, 1612 head_mask=head_mask, 1613 decoder_head_mask=decoder_head_mask, 1614 cross_attn_head_mask=cross_attn_head_mask, 1615 encoder_outputs=encoder_outputs, 1616 past_key_values=past_key_values, 1617 output_hidden_states=output_hidden_states, 1618 output_attentions=output_attentions, 1619 use_cache=use_cache, 1620 return_dict=return_dict, 1621 ) 1623 prediction_loss = None 1624 params = None File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1452, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict) 1445 encoder_outputs = BaseModelOutput( 1446 last_hidden_state=encoder_outputs[0], 1447 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, 1448 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, 1449 ) 1451 dec_input = transformer_inputs[:, self.config.context_length :, ...] 
-> 1452 decoder_outputs = self.decoder( 1453 inputs_embeds=dec_input, 1454 attention_mask=decoder_attention_mask, 1455 encoder_hidden_states=encoder_outputs[0], 1456 head_mask=decoder_head_mask, 1457 cross_attn_head_mask=cross_attn_head_mask, 1458 past_key_values=past_key_values, 1459 use_cache=use_cache, 1460 output_attentions=output_attentions, 1461 output_hidden_states=output_hidden_states, 1462 return_dict=return_dict, 1463 ) 1465 if not return_dict: 1466 return decoder_outputs + encoder_outputs + (loc, scale, static_feat) File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1178, in TimeSeriesTransformerDecoder.forward(self, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1167 layer_outputs = torch.utils.checkpoint.checkpoint( 1168 create_custom_forward(decoder_layer), 1169 hidden_states, (...) 1175 None, 1176 ) 1177 else: -> 1178 layer_outputs = decoder_layer( 1179 hidden_states, 1180 attention_mask=attention_mask, 1181 encoder_hidden_states=encoder_hidden_states, 1182 encoder_attention_mask=encoder_attention_mask, 1183 layer_head_mask=(head_mask[idx] if head_mask is not None else None), 1184 cross_attn_layer_head_mask=( 1185 cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None 1186 ), 1187 past_key_value=past_key_value, 1188 output_attentions=output_attentions, 1189 use_cache=use_cache, 1190 ) 1191 hidden_states = layer_outputs[0] 1193 if use_cache: File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:611, in TimeSeriesTransformerDecoderLayer.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, past_key_value, output_attentions, use_cache) 609 # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple 610 cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None --> 611 hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( 612 hidden_states=hidden_states, 613 key_value_states=encoder_hidden_states, 614 attention_mask=encoder_attention_mask, 615 layer_head_mask=cross_attn_layer_head_mask, 616 past_key_value=cross_attn_past_key_value, 617 output_attentions=output_attentions, 618 ) 619 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) 620 hidden_states = residual + hidden_states File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:371, in TimeSeriesTransformerAttention.forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions) 368 value_states = past_key_value[1] 369 elif is_cross_attention: 370 # cross_attentions --> 371 key_states = self._shape(self.k_proj(key_value_states), -1, bsz) 372 value_states = self._shape(self.v_proj(key_value_states), -1, bsz) 373 elif past_key_value is not None: 374 # reuse k, v, self_attention File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input) 113 def forward(self, input: Tensor) -> Tensor: --> 114 return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasLtMatmul( ltHandle, computeDesc.descriptor(), &alpha_val, mat1_ptr, Adesc.descriptor(), mat2_ptr, Bdesc.descriptor(), &beta_val, result_ptr, Cdesc.descriptor(), result_ptr, Cdesc.descriptor(), &heuristicResult.algo, workspace.data_ptr(), workspaceSize, at::cuda::getCurrentCUDAStream())`
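Given where the thread above ends up (a categorical id outside the configured cardinality feeding `nn.Embedding`), here is a hedged pre-flight check one could run before calling `generate`. It is not from the original thread and assumes the `test_dataloader` and `config` objects defined in the snippets quoted in this issue:

```python
# Hypothetical sanity check: an id >= the configured cardinality indexes past
# the embedding table, which on GPU tends to surface as an opaque device-side
# CUBLAS/CUDA error like the one in the traceback above.
max_id = max(
    int(batch["static_categorical_features"].max()) for batch in test_dataloader
)
if max_id >= config.cardinality[0]:
    raise ValueError(
        f"static_categorical_features contains id {max_id}, "
        f"but the model was configured with cardinality {config.cardinality[0]}"
    )
```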
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23991/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23990/comments
https://api.github.com/repos/huggingface/transformers/issues/23990/events
https://github.com/huggingface/transformers/pull/23990
1,740,429,349
PR_kwDOCUB6oc5SH4wP
23,990
add gradient checkpointing for LLaMA's final layernorm module
{ "login": "zhaoqf123", "id": 9318331, "node_id": "MDQ6VXNlcjkzMTgzMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/9318331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaoqf123", "html_url": "https://github.com/zhaoqf123", "followers_url": "https://api.github.com/users/zhaoqf123/followers", "following_url": "https://api.github.com/users/zhaoqf123/following{/other_user}", "gists_url": "https://api.github.com/users/zhaoqf123/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaoqf123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaoqf123/subscriptions", "organizations_url": "https://api.github.com/users/zhaoqf123/orgs", "repos_url": "https://api.github.com/users/zhaoqf123/repos", "events_url": "https://api.github.com/users/zhaoqf123/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaoqf123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "_The documentation is not available anymore as the PR was closed or merged._", "> \r\n\r\n\r\n\r\n> Hi @zhaoqf123 Thanks for bringing this up! Sadly I couldn't reproduce the issue, here is the snippet I used:\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForCausalLM\r\n> from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training\r\n> \r\n> model_id = \"huggyllama/llama-7b\"\r\n> \r\n> config = LoraConfig(\r\n> r=16, \r\n> lora_alpha=32, \r\n> lora_dropout=0.05, \r\n> bias=\"none\", \r\n> task_type=\"CAUSAL_LM\"\r\n> )\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"auto\", load_in_8bit=True)\r\n> \r\n> # this should activate gradient checkpointing\r\n> model = prepare_model_for_int8_training(model)\r\n> \r\n> optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\r\n> \r\n> model = get_peft_model(model, config)\r\n> \r\n> assert model.training and model.is_gradient_checkpointing\r\n> \r\n> dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)\r\n> logits = model(dummy_input).logits\r\n> loss = logits.sum()\r\n> loss.backward()\r\n> optimizer.step()\r\n> \r\n> for n, param in model.named_parameters():\r\n> if \"lora\" in n:\r\n> assert param.grad is not None\r\n> ```\r\n> \r\n> And as you can see the gradients are always non-`None`. Per my understanding as long as the weight have an associated gradient its value will be updated.\r\n\r\n@younesbelkada Thank you for your reply. I modify your script based on my training setup with V100 GPU as follows, and it can be reproduced.\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_int8_training\r\n\r\n# 1. load pretrained model\r\n# model_id = \"huggyllama/llama-7b\"\r\nmodel_id = \"decapoda-research/llama-7b-hf\"\r\ncache_dir = \"/mnt/workspace/kgg/hf_models\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, cache_dir=cache_dir, device_map=\"auto\", load_in_8bit=True)\r\n\r\n# this should activate gradient checkpointing\r\nmodel = prepare_model_for_int8_training(model)\r\n\r\n# 2. config peft model\r\nconfig = LoraConfig(\r\n r=16, \r\n lora_alpha=32, \r\n lora_dropout=0.05, \r\n bias=\"none\", \r\n task_type=\"CAUSAL_LM\",\r\n # target_modules=[\"layers.31.self_attn.q_proj\"]\r\n)\r\nmodel = get_peft_model(model, config)\r\n\r\nassert model.training and model.is_gradient_checkpointing\r\n\r\n# 3. set up optimizer\r\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\r\n\r\n# 4. train\r\nwith torch.autocast(\"cuda\"):\r\n dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)\r\n model.train()\r\n logits = model(dummy_input).logits\r\n loss = logits.sum()\r\n\r\n loss.backward()\r\n optimizer.step()\r\n\r\n for n, param in model.named_parameters():\r\n if \"lora\" in n:\r\n print(n)\r\n assert param.grad is not None\r\n```\r\n\r\nYou can see that the params of the last-layer (layer31) has None grad.\r\n\r\nThe main differences of the codes from yours is 3 parts:\r\n1. The optimizer setup is after `get_peft_model`\r\n2. `with torch.autocast(\"cuda\")`\r\n3. 
`model.train()` as in the `trainsformers/trainer.py` script\r\n\r\nBy the way, my torch version is 2.1.0a0+fe05266\r\n", "Indeed I also managed to reproduce, this time with the latest stable version of torch, also note that this bug also occurs with any other model, for instance OPT.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_int8_training\r\n\r\n# 1. load pretrained model\r\n# model_id = \"huggyllama/llama-7b\"\r\nmodel_id = \"facebook/opt-350m\"\r\n# model_id = \"decapoda-research/llama-7b-hf\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"auto\", load_in_8bit=True)\r\n\r\n# this should activate gradient checkpointing\r\nmodel = prepare_model_for_int8_training(model)\r\n\r\n# 2. config peft model\r\nconfig = LoraConfig(\r\n r=16, \r\n lora_alpha=32, \r\n lora_dropout=0.05, \r\n bias=\"none\", \r\n task_type=\"CAUSAL_LM\",\r\n # target_modules=[\"layers.31.self_attn.q_proj\"]\r\n)\r\nmodel = get_peft_model(model, config)\r\n\r\nassert model.training and model.is_gradient_checkpointing\r\n\r\n# 3. set up optimizer\r\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\r\n\r\n# 4. train\r\nwith torch.autocast(\"cuda\"):\r\n dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)\r\n model.train()\r\n logits = model(dummy_input).logits\r\n loss = logits.sum()\r\n\r\n loss.backward()\r\n optimizer.step()\r\n\r\n for n, param in model.named_parameters():\r\n if \"lora\" in n:\r\n print(n)\r\n assert param.grad is not None\r\n```\r\nHowever, it seems that the bug disappears when the `torch.autocast(\"cuda\")` context manager is removed. \r\nIt appears the issue can be reproduced even without PEFT:\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_id = \"facebook/opt-350m\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id).to(0)\r\nmodel.gradient_checkpointing_enable()\r\nmodel.train()\r\n\r\nassert model.training and model.is_gradient_checkpointing\r\n\r\n# 3. set up optimizer\r\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\r\n\r\n# 4. train\r\nwith torch.cuda.amp.autocast(True, dtype=torch.float16):\r\n dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)\r\n model.train()\r\n logits = model(dummy_input).logits\r\n loss = logits.sum()\r\n\r\n loss.backward()\r\n optimizer.step()\r\n\r\n for n, param in model.named_parameters():\r\n if param.grad is None:\r\n print(n)\r\n```\r\nAnd this gives:\r\n```bash\r\nmodel.decoder.layers.23.self_attn.k_proj.weight\r\nmodel.decoder.layers.23.self_attn.k_proj.bias\r\nmodel.decoder.layers.23.self_attn.v_proj.weight\r\nmodel.decoder.layers.23.self_attn.v_proj.bias\r\nmodel.decoder.layers.23.self_attn.q_proj.weight\r\nmodel.decoder.layers.23.self_attn.q_proj.bias\r\nmodel.decoder.layers.23.self_attn.out_proj.weight\r\nmodel.decoder.layers.23.self_attn.out_proj.bias\r\nmodel.decoder.layers.23.fc1.weight\r\nmodel.decoder.layers.23.fc1.bias\r\nmodel.decoder.layers.23.fc2.weight\r\nmodel.decoder.layers.23.fc2.bias\r\n```\r\nMeaning the entire last layer doesn't get updated.\r\n\r\nFrom what I can see in the trainer, currently we support mixed precision autocast (`torch.xxx.amp`) context managers: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2688-L2705 - and replacing the context manager you have put to `torch.cuda.amp.autocast(True, dtype=torch.float16)` reproduces also the bug. 
\r\nI am not sure if this is a bug on transformers side or torch but I would say OK to merge this and apply this patch to other common architectures (by opening a good first issue maybe?).\r\n\r\nWdyt @sgugger @amyeroberts @ArthurZucker ", "In line with what @sgugger said, also not sure it even makes a lot of sense to checkpoint something as small as the layer norm grads? Thanks for flagging the issue and proposing a fix! ", "> Hi @zhaoqf123 Thanks a lot for helping us finding this important issue. After some digging and internal discussion, we have found a broader fix that includes most models that supports gradient checkpointing: #24247 . To credit you from your help, I have added you as a co-author in that PR and we will close this PR once #24247 will get merged Thanks a lot !\r\n\r\n@younesbelkada Thank you for your acknowledgement. Although I have several years of experiences in machine learning (with tf), I just start using `transformers` and `pytorch` for couple of months. It really took me 4 days and nights to figure out where the bug occurs and a workaround solution.\r\n\r\nThank you very much for the `transformers` and `peft` project. They are really very helpful.", "Closing the PR as https://github.com/huggingface/transformers/pull/24247 being merged\r\nAgain thanks so much @zhaoqf123 for all your help on this and your great investigation! ", "hi @zhaoqf123 \r\nSome training setups that were running fine in a single T4 (with 7GB peak memory) now OOM with that PR, I wanted to double check if you observe the same behaviour in your case as well?\r\n\r\nFor reference, check: https://github.com/huggingface/transformers/pull/24420#issuecomment-1602345683", "Hi @zhaoqf123 \r\n@pacman100 has found the rootcause of your original issue and we found out that the recent accelerate integration of Trainer silently fixed your bug. I can confirm I don't get any None grad with llama models using Trainer + autocast: https://github.com/huggingface/transformers/pull/24420#issuecomment-1602680953 | I believe 3 weeks ago the Trainer + accelerate integration was not released yet that could explain why you had the bug\r\nCan you try out your script after we revert the PR and let us know? \r\nThanks !", "> hi @zhaoqf123 Some training setups that were running fine in a single T4 (with 7GB peak memory) now OOM with that PR, I wanted to double check if you observe the same behaviour in your case as well?\r\n> \r\n> For reference, check: [#24420 (comment)](https://github.com/huggingface/transformers/pull/24420#issuecomment-1602345683)\r\n\r\n@younesbelkada Sorry for the late reply. Just got vocation last 3 days.\r\n\r\nYes, I also noticed that the memory consumption increased a lot when making the last layer updatable. For llama 7B, when using V100-32GB, the VRAM increases from 34% to 42%, which is not proportional to the increase of updatable params.", "> Hi @zhaoqf123 @pacman100 has found the rootcause of your original issue and we found out that the recent accelerate integration of Trainer silently fixed your bug. I can confirm I don't get any None grad with llama models using Trainer + autocast: [#24420 (comment)](https://github.com/huggingface/transformers/pull/24420#issuecomment-1602680953) | I believe 3 weeks ago the Trainer + accelerate integration was not released yet that could explain why you had the bug Can you try out your script after we revert the PR and let us know? Thanks !\r\n\r\n@younesbelkada May I know how should I try out? 
For example, re-install transformer: `pip install --upgrade git+https://github.com/huggingface/transformers.git`, and then run my script without `with torch.autocast(\"cuda\"):`?", "@zhaoqf123 thanks for the reply!\r\nYes you can try out that way, uninstall your current transformers lib, reinstall it from source and see if the original bug still persists", "> @zhaoqf123 thanks for the reply! Yes you can try out that way, uninstall your current transformers lib, reinstall it from source and see if the original bug still persists\r\n\r\n@younesbelkada After re-install transformers from the source, in my V100, if I remove `with torch.autocast(\"cuda\")`, I encounter [this issue](https://github.com/tloen/alpaca-lora/issues/203). If I don't remove `with torch.autocast(\"cuda\")`, the last layer still not updatable.\r\n\r\nIn my 3090 GPU, it works by removing `with torch.autocast(\"cuda\")`. It could be due to the implementation of bitsandbytes for GPU computability < 7.5. Because GPU<7.5 does not have int8 core production, so bitsandbytes do int8 mutliplication using fp16. \r\n\r\nCheck also this [issue](https://github.com/TimDettmers/bitsandbytes/issues/240) and this [issue](https://github.com/TimDettmers/bitsandbytes/issues/165#issuecomment-1518711138)" ]
1,685
1,687
1,687
CONTRIBUTOR
null
Without this, when tuning with LoRA + gradient checkpointing, the last transformer layer's LoRA weights won't be updated! For example, if we use this callback to log the weight change of LoRA weights in each layer, we will find that no weight update for the last layer in TensorBoard. ```python class ParamsTensorBoardCallback(TensorBoardCallback): def __init__(self, tb_writer=None, params=None, process_name=lambda x:x): super().__init__(tb_writer) self.params = params self._process_name = process_name def on_step_end(self, args, state, control, **kwargs): if state.global_step % args.logging_steps == 0: dict_ = {} model = kwargs["model"] for name in self.params: param = model.get_parameter(name) param = param.flatten() name_p = self._process_name(name) dict_tmp = { f"{name_p}_mean": param.mean().item(), f"{name_p}_max": param.max().item(), f"{name_p}_q75": param.quantile(0.75).item(), f"{name_p}_q25": param.quantile(0.25).item(), f"{name_p}_min": param.min().item(), f"{name_p}_median": param.median().item(), f"{name_p}_std": param.std().item(), } dict_.update(dict_tmp) self.on_log(args, state, control, logs=dict_, **kwargs) def get_params_for_logging(model): ls_params = [] for name, param in model.named_parameters(): if param.requires_grad: ls_params.append(name) return ls_params ls_params = get_params_for_logging(model) tb_cb = ParamsTensorBoardCallback( None, ls_params, process_name=lambda x: x[30:] ) trainer = Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=args, data_collator=data_collator, callbacks=[tb_cb] ) ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
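A quicker way to see the symptom described above, without wiring up TensorBoard, is to run a single forward/backward pass and inspect the gradients of the LoRA parameters directly. This is only a diagnostic sketch, not part of the PR: `model` is assumed to be a LoRA-wrapped causal LM with gradient checkpointing enabled, and `batch` a tokenized training batch already on the model's device.

```python
# Diagnostic sketch (assumes `model` and `batch` as described above):
# check whether gradients actually reach every LoRA parameter after one step.
model.train()
outputs = model(**batch)
outputs.loss.backward()

missing = []
for name, param in model.named_parameters():
    if param.requires_grad and "lora" in name.lower() and param.grad is None:
        missing.append(name)

print("LoRA params with no gradient:", missing or "none")
model.zero_grad()
```

With the bug this PR targets, the LoRA weights of the last transformer block show up in `missing`; once the fix is applied the list should be empty.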
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23990/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23990", "html_url": "https://github.com/huggingface/transformers/pull/23990", "diff_url": "https://github.com/huggingface/transformers/pull/23990.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23990.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/23989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23989/comments
https://api.github.com/repos/huggingface/transformers/issues/23989/events
https://github.com/huggingface/transformers/issues/23989
1,740,411,664
I_kwDOCUB6oc5nvJMQ
23,989
load_in_8bit=True returns gibberish when running inference on multiple GPUs
{ "login": "Daryl149", "id": 6736668, "node_id": "MDQ6VXNlcjY3MzY2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6736668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daryl149", "html_url": "https://github.com/Daryl149", "followers_url": "https://api.github.com/users/Daryl149/followers", "following_url": "https://api.github.com/users/Daryl149/following{/other_user}", "gists_url": "https://api.github.com/users/Daryl149/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daryl149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daryl149/subscriptions", "organizations_url": "https://api.github.com/users/Daryl149/orgs", "repos_url": "https://api.github.com/users/Daryl149/repos", "events_url": "https://api.github.com/users/Daryl149/events{/privacy}", "received_events_url": "https://api.github.com/users/Daryl149/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not OP's issue but for others finding this issue like I did: be aware that if you're using 2x RTX 4090, there's a [driver bug](https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box/233202/51) in Linux causing corrupt results. For me switching from the 530 to the 525 drivers fixed a multi-gpu gibberish issue when using `load_in_4bit` from the latest bitsnbytes.", "cc @younesbelkada ", "thanks for the issue, I see you are using a V100, can you try to upgrade `bitsandbytes` to the version `0.39.0`?\r\nAlso I think that the kernels for 4bit inference are much more robust in my experience (tried 4bit inference in a V100 and it seemed to work fine) - can you try them and let us know how it goes?", "Thanks, I will check it out! If it runs in 4 bit, then the multi-gpu issue is immediately solved, since the model would fit on a single card. Slightly worried about the inference speed from what I read. It did 1 token/s in 16-bit, spread across 4 V100Ss. But that'd be a new issue :)\r\n", "**Sort of solved**\r\nThanks @younesbelkada \r\nby updating to 4bit requirements as mentioned in https://huggingface.co/blog/4bit-transformers-bitsandbytes\r\n```\r\naccelerate 0.20.0.dev0\r\nbitsandbytes 0.39.0\r\npeft 0.4.0.dev0\r\ntransformers 4.30.0.dev0\r\n```\r\n\r\n`load_in_4bit=True` produces comprehensible text on multi-gpu!!! (Even though it now only takes 28GB of VRAM, and thus would also fit on a single V100S GPU.)\r\n```\r\npython\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer\r\ntokenizer = AutoTokenizer.from_pretrained(\"falcon-40b-sft-mix-1226\", trust_remote_code=True)\r\nmodel = AutoModelForCausalLM.from_pretrained(\"falcon-40b-sft-mix-1226\", device_map=\"auto\", offload_folder=\"offload\", trust_remote_code=True, load_in_4bit=True) \r\nstreamer = TextStreamer(tokenizer, skip_prompt=True)\r\nmessage = \"<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|endoftext|><|assistant|>\"\r\ninputs = tokenizer(message, return_tensors=\"pt\").to(model.device)\r\n\r\ntokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) \r\n\r\n```\r\n\r\nyields a very cool:\r\n```\r\n/generation/utils.py:1140: UserWarning: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)\r\n warnings.warn(\r\nSetting `pad_token_id` to `eos_token_id`:11 for open-end generation.\r\nDucks have waterproof feathers which are excellent at repelling water.<|endoftext|>\r\n```\r\n\r\nremarks/questions:\r\n- Yes it no longer blocks me, but the original issue remains for 8 bit. So shall I keep this bug open, or is the solution for everyone to move to `load_in_4bit`, instead of `load_in_8bit`?\r\n- 4 bit inference is not noticeably slower (or faster) than 16 bit, great!\r\n- using `load_in 4bit` also solves the inf/nan bug that `load_in_8bit` has.", "Awesome!\r\nGreat also to hear that load_in_4bit is as fast (maybe faster) than 16bit in V100s, this is very interesting! \r\nWe can keep this issue open for community members to chime in and comment on your observations. I think that the potential 8bit incompatibility issue on V100s might need to be reported on `bitsanbytes` library. Again thanks @Daryl149 !", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### System Info ```Shell - `Accelerate` version: 0.18.0 - Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Numpy version: 1.24.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - `Accelerate` default config: Not found - using transformers from here, as recommended by openassistant: https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor git clone https://github.com/huggingface/transformers.git cd transformers git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c other info: - ubuntu 22.04 - bitsandbytes = 0.38.1 - CUDA 118 detected by bitsandbytes +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100S-PCI... On | 00000000:00:06.0 Off | 0 | | N/A 31C P0 41W / 250W | 32198MiB / 32768MiB | 37% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100S-PCI... On | 00000000:00:07.0 Off | 0 | | N/A 31C P0 36W / 250W | 31784MiB / 32768MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 Tesla V100S-PCI... On | 00000000:00:08.0 Off | 0 | | N/A 33C P0 36W / 250W | 31784MiB / 32768MiB | 23% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 Tesla V100S-PCI... On | 00000000:00:09.0 Off | 0 | | N/A 33C P0 36W / 250W | 31784MiB / 32768MiB | 16% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` ### Who can help? Big Model Inference: @sgugger @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below): https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor ### Reproduction create a fresh venv and run this: ``` python3.10 -m venv dev_1 source dev_1/bin/activate pip install --upgrade pip git clone https://github.com/huggingface/transformers.git cd transformers git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c pip install . pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1 pip install scipy pip install bitsandbytes==0.38.1 export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH python -m bitsandbytes ``` Then open python, load weights and infer with `load_in8bit=True`. The `model.generate` arguments differ, due to the inf/nan bug with `CUDA 11.8` and bitsandbytes `0.38.1` see https://github.com/tloen/alpaca-lora/issues/408 **Update: see section expected behaviour where I run the exact same `model.generate` call with the same parameters.** ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. 
What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True) #if I don't use remove invalid, I get the inf/nan bug, see https://github.com/tloen/alpaca-lora/issues/408 ⁇ <|prompter|> This is a demo of a text streamer. What's a cool fact about ducks? <|assistant|> enaracht blood searches anomдів kun Nap wherever learned Laufcalendar ^C #manu ``` Here's what happens when I load the model without the `load_in_8bit=True` flag (good!): ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto") streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) Response: The Duck: Small yet Mighty Did you know that, while ducks are relatively ``` I also tried running it without `do_sample=True`: ``` >>> tokens = model.generate(**inputs, max_new_tokens=25, temperature=0.9, streamer=streamer) ⁇ <|prompter|> This is a demo of a text streamer. What's a cool fact about ducks? <|assistant|> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ``` ### Expected behavior I would expect the following: ``` python from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) streamer = TextStreamer(tokenizer, skip_prompt=True) message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" inputs = tokenizer(message, return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True) Response: The Duck: Small yet Mighty Did you know that, while ducks are relatively ``` edit (additional testing): - I also tried setting `use_cache=False` in `model.generate()`, as hinted in https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560/discussions/1, but still gibberish output. - I also tried running this with `torch==2.0.1`, but same error behavior. - I tried downgrading from `CUDA 11.8` to `CUDA 11.6` and `bitsandbytes` from `0.38.1` to `0.31.8`, which solves the inf/nan problem (see https://github.com/tloen/alpaca-lora/issues/408). So now I can run the exact same `model.generate()` code with the only difference between the two being `load_in_8bit=True` in the model loading step: ``` python Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> from transformers import LlamaTokenizer, LlamaForCausalLM, TextStreamer >>> >>> tokenizer = LlamaTokenizer.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b") >>> model = LlamaForCausalLM.from_pretrained("/mnt/models/oasst-rlhf-2-llama-30b", device_map="auto", load_in_8bit=True) #returns gibberish Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please use this form: ...... ================================================================================ dev_1/lib/python3.10/site-packages/bitsandbytes/cuda_setup/paths.py:110: UserWarning: /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu: did not contain libcudart.so as expected! Searching further paths... warn( CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64... CUDA SETUP: CUDA path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 7.0 CUDA_SETUP: Detected CUDA version 116 CUDA_SETUP: Loading binary dev_1/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda116_nocublaslt.so... Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s] dev_1/lib/python3.10/site-packages/bitsandbytes/functional.py:227: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() return ct.c_void_p(A.data.storage().data_ptr()) Loading checkpoint shards: 100%|███████████████████████| 7/7 [00:54<00:00, 7.82s/it] >>> streamer = TextStreamer(tokenizer, skip_prompt=True) >>> message = "<|prompter|>This is a demo of a text streamer. What's a cool fact about ducks?<|assistant|>" >>> inputs = tokenizer(message, return_tensors="pt").to(model.device) >>> tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer) xeabaselogiccmtnzak accomplish MeyifullylandaMP Marshallcitaemann beskre Gil zoomaki Bon companion Vert Mindsetti ``` The issue persists, so it's independent from the inf/nan bug and 100% confirmed caused by a combination of using both `load_in_8bit=True` and multi gpu. This code returns comprehensible language when: - it fits on a single GPU's VRAM and use `load_in_8bit=True`, - or when you load on multi GPU, but without the argument `load_in_8bit=True`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23989/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23988/comments
https://api.github.com/repos/huggingface/transformers/issues/23988/events
https://github.com/huggingface/transformers/issues/23988
1,740,374,241
I_kwDOCUB6oc5nvADh
23,988
Bug Report - Tokenizer Issue with Tensor Device Assignment in transformers/pipelines/text_generation.py
{ "login": "ericzhou571", "id": 57415741, "node_id": "MDQ6VXNlcjU3NDE1NzQx", "avatar_url": "https://avatars.githubusercontent.com/u/57415741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericzhou571", "html_url": "https://github.com/ericzhou571", "followers_url": "https://api.github.com/users/ericzhou571/followers", "following_url": "https://api.github.com/users/ericzhou571/following{/other_user}", "gists_url": "https://api.github.com/users/ericzhou571/gists{/gist_id}", "starred_url": "https://api.github.com/users/ericzhou571/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericzhou571/subscriptions", "organizations_url": "https://api.github.com/users/ericzhou571/orgs", "repos_url": "https://api.github.com/users/ericzhou571/repos", "events_url": "https://api.github.com/users/ericzhou571/events{/privacy}", "received_events_url": "https://api.github.com/users/ericzhou571/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `pipeline` will handle the device if you pass it to it with the `device` kwarg.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
### System Info ``` - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.9.3 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @ArthurZucker @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code example of https://huggingface.co/tiiuae/falcon-7b from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b" ```python tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device=torch.device(0), ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ### Expected behavior every thing should work well on gpu:0. ### More Detail Dear Transformers GitHub team, I hope this message finds you well. I would like to report a bug that I have identified in the code block provided below: https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/pipelines/text_generation.py#L203-L263 ```python def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **generate_kwargs): inputs = self.tokenizer( prefix + prompt_text, padding=False, add_special_tokens=False, return_tensors=self.framework ) inputs["prompt_text"] = prompt_text # Rest of the code... ``` Problem description: The bug occurs in the `preprocess` method of the code block above. It seems that after tokenizing the `prompt_text`, the resulting tensor is not automatically moved to the same device where the model is located. This behavior causes an error when attempting to use the GPU for computation, specifically resulting in the following error message: ``` RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select) ``` Expected behavior: Ideally, the `preprocess` method should ensure that the tensor generated by the tokenizer is moved to the same device as the model before further processing or generation takes place. This would prevent any device mismatch errors when using the GPU for computations. Possible solution: To resolve this issue, I suggest modifying the `preprocess` method to include a device assignment step for the tokenized tensor. By using the `to` method, the tensor can be explicitly moved to the device where the model is located. 
Here's an example of how this could be implemented: ```python inputs = self.tokenizer( prefix + prompt_text, padding=False, add_special_tokens=False, return_tensors=self.framework ) inputs["input_ids"] = inputs["input_ids"].to(self.model.device) if "attention_mask" in inputs: inputs["attention_mask"] = inputs["attention_mask"].to(self.model.device) ``` By adding these lines of code, the tensor and its attention mask (if applicable) will be correctly assigned to the same device as the model. I hope this information helps in resolving the issue. Please let me know if you need any further clarification or assistance. Thank you for your attention to this matter. Best regards, Wenrui
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23988/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23987/comments
https://api.github.com/repos/huggingface/transformers/issues/23987/events
https://github.com/huggingface/transformers/issues/23987
1,740,326,597
I_kwDOCUB6oc5nu0bF
23,987
How can I use DeepSpeed along with the LION optimizer?
{ "login": "luohao123", "id": 49749220, "node_id": "MDQ6VXNlcjQ5NzQ5MjIw", "avatar_url": "https://avatars.githubusercontent.com/u/49749220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luohao123", "html_url": "https://github.com/luohao123", "followers_url": "https://api.github.com/users/luohao123/followers", "following_url": "https://api.github.com/users/luohao123/following{/other_user}", "gists_url": "https://api.github.com/users/luohao123/gists{/gist_id}", "starred_url": "https://api.github.com/users/luohao123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luohao123/subscriptions", "organizations_url": "https://api.github.com/users/luohao123/orgs", "repos_url": "https://api.github.com/users/luohao123/repos", "events_url": "https://api.github.com/users/luohao123/events{/privacy}", "received_events_url": "https://api.github.com/users/luohao123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions.", "@sgugger I have post one: https://discuss.huggingface.co/t/how-to-using-lion-optimizer/42270 Hope you guys could give any help.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,685
1,689
1,689
NONE
null
How can I use DeepSpeed along with the LION optimizer?
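No official recipe is given in this thread, but one commonly suggested pattern (a hedged sketch, not a supported API) is to subclass `Trainer` and override `create_optimizer`, while keeping the `optimizer` block out of the DeepSpeed JSON so the two do not conflict. `lion_pytorch` is a third-party package and is assumed to be installed; note that DeepSpeed CPU/NVMe offload generally still requires DeepSpeed's own optimizers.

```python
from lion_pytorch import Lion  # third-party package, assumed installed
from transformers import Trainer

class LionTrainer(Trainer):
    def create_optimizer(self):
        # Build Lion once; the DeepSpeed integration wraps whatever this returns,
        # provided the ds_config does not define its own "optimizer" section.
        if self.optimizer is None:
            params = [p for p in self.model.parameters() if p.requires_grad]
            self.optimizer = Lion(
                params,
                lr=self.args.learning_rate,
                weight_decay=self.args.weight_decay,
            )
        return self.optimizer
```

`LionTrainer` would then be used in place of `Trainer`, with `--deepspeed ds_config.json` passed as usual.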
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23987/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23986/comments
https://api.github.com/repos/huggingface/transformers/issues/23986/events
https://github.com/huggingface/transformers/issues/23986
1,740,252,965
I_kwDOCUB6oc5nuicl
23,986
learning_rate behavior is not as expected when using transformers.TrainingArguments
{ "login": "dkqkxx", "id": 32215330, "node_id": "MDQ6VXNlcjMyMjE1MzMw", "avatar_url": "https://avatars.githubusercontent.com/u/32215330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkqkxx", "html_url": "https://github.com/dkqkxx", "followers_url": "https://api.github.com/users/dkqkxx/followers", "following_url": "https://api.github.com/users/dkqkxx/following{/other_user}", "gists_url": "https://api.github.com/users/dkqkxx/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkqkxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkqkxx/subscriptions", "organizations_url": "https://api.github.com/users/dkqkxx/orgs", "repos_url": "https://api.github.com/users/dkqkxx/repos", "events_url": "https://api.github.com/users/dkqkxx/events{/privacy}", "received_events_url": "https://api.github.com/users/dkqkxx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you explain what the bug is here?", "> Could you explain what the bug is here?\r\n\r\nI'm sorry I didn't explain clearly, I will attach a simple script.\r\nIn the previous version of transformer, this code behaves normally, and the learning rate decreases linearly like the green line.\r\n![image](https://github.com/huggingface/transformers/assets/32215330/f73fe2d0-54b9-4c6d-bc2d-6e63ee1c2bac)\r\nBut in the latest transformers, the learning rate decreases like the red line.", "> > Could you explain what the bug is here?\r\n> \r\n> I'm sorry I didn't explain clearly, I will attach a simple script. In the previous version of transformer, this code behaves normally, and the learning rate decreases linearly like the green line. ![image](https://user-images.githubusercontent.com/32215330/244130497-f73fe2d0-54b9-4c6d-bc2d-6e63ee1c2bac.png) But in the latest transformers, the learning rate decreases like the red line.\r\n\r\n\r\nuse this command:\r\n`torchrun --nnodes 1 --nproc_per_node 4 run.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --max_seq_length 128 --per_device_train_batch_size 8 --gradient_accumulation_steps 2 --learning_rate 3e-5 --num_train_epochs 3 --output_dir ./mrpc --overwrite_output_dir --warmup_steps 1 --logging_steps 5`\r\n\r\n``` python\r\n\r\n#!/usr/bin/env python\r\n# coding=utf-8\r\n# Copyright 2020 The HuggingFace Inc. team. All rights reserved.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\" Finetuning the library models for sequence classification on GLUE.\"\"\"\r\n# You can also adapt this script on your own text classification task. Pointers for this are left as comments.\r\n\r\nimport logging\r\nimport os\r\nimport random\r\nimport sys\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Optional\r\n\r\nimport datasets\r\nimport evaluate\r\nimport numpy as np\r\nfrom datasets import load_dataset\r\n\r\nimport transformers\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForSequenceClassification,\r\n AutoTokenizer,\r\n DataCollatorWithPadding,\r\n EvalPrediction,\r\n HfArgumentParser,\r\n PretrainedConfig,\r\n Trainer,\r\n TrainingArguments,\r\n default_data_collator,\r\n set_seed,\r\n)\r\nfrom transformers.trainer_utils import get_last_checkpoint\r\nfrom transformers.utils import check_min_version, send_example_telemetry\r\nfrom transformers.utils.versions import require_version\r\n\r\n\r\n# Will error if the minimal version of Transformers is not installed. 
Remove at your own risks.\r\n\r\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\r\n\r\ntask_to_keys = {\r\n \"cola\": (\"sentence\", None),\r\n \"mnli\": (\"premise\", \"hypothesis\"),\r\n \"mrpc\": (\"sentence1\", \"sentence2\"),\r\n \"qnli\": (\"question\", \"sentence\"),\r\n \"qqp\": (\"question1\", \"question2\"),\r\n \"rte\": (\"sentence1\", \"sentence2\"),\r\n \"sst2\": (\"sentence\", None),\r\n \"stsb\": (\"sentence1\", \"sentence2\"),\r\n \"wnli\": (\"sentence1\", \"sentence2\"),\r\n}\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\n@dataclass\r\nclass DataTrainingArguments:\r\n \"\"\"\r\n Arguments pertaining to what data we are going to input our model for training and eval.\r\n\r\n Using `HfArgumentParser` we can turn this class\r\n into argparse arguments to be able to specify them on\r\n the command line.\r\n \"\"\"\r\n\r\n task_name: Optional[str] = field(\r\n default=None,\r\n metadata={\"help\": \"The name of the task to train on: \" + \", \".join(task_to_keys.keys())},\r\n )\r\n dataset_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"The name of the dataset to use (via the datasets library).\"}\r\n )\r\n dataset_config_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"The configuration name of the dataset to use (via the datasets library).\"}\r\n )\r\n max_seq_length: int = field(\r\n default=128,\r\n metadata={\r\n \"help\": (\r\n \"The maximum total input sequence length after tokenization. Sequences longer \"\r\n \"than this will be truncated, sequences shorter will be padded.\"\r\n )\r\n },\r\n )\r\n overwrite_cache: bool = field(\r\n default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\r\n )\r\n pad_to_max_length: bool = field(\r\n default=True,\r\n metadata={\r\n \"help\": (\r\n \"Whether to pad all samples to `max_seq_length`. 
\"\r\n \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\r\n )\r\n },\r\n )\r\n max_train_samples: Optional[int] = field(\r\n default=None,\r\n metadata={\r\n \"help\": (\r\n \"For debugging purposes or quicker training, truncate the number of training examples to this \"\r\n \"value if set.\"\r\n )\r\n },\r\n )\r\n max_eval_samples: Optional[int] = field(\r\n default=None,\r\n metadata={\r\n \"help\": (\r\n \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\r\n \"value if set.\"\r\n )\r\n },\r\n )\r\n max_predict_samples: Optional[int] = field(\r\n default=None,\r\n metadata={\r\n \"help\": (\r\n \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\r\n \"value if set.\"\r\n )\r\n },\r\n )\r\n train_file: Optional[str] = field(\r\n default=None, metadata={\"help\": \"A csv or a json file containing the training data.\"}\r\n )\r\n validation_file: Optional[str] = field(\r\n default=None, metadata={\"help\": \"A csv or a json file containing the validation data.\"}\r\n )\r\n test_file: Optional[str] = field(default=None, metadata={\"help\": \"A csv or a json file containing the test data.\"})\r\n\r\n def __post_init__(self):\r\n if self.task_name is not None:\r\n self.task_name = self.task_name.lower()\r\n if self.task_name not in task_to_keys.keys():\r\n raise ValueError(\"Unknown task, you should pick one in \" + \",\".join(task_to_keys.keys()))\r\n elif self.dataset_name is not None:\r\n pass\r\n elif self.train_file is None or self.validation_file is None:\r\n raise ValueError(\"Need either a GLUE task, a training/validation file or a dataset name.\")\r\n else:\r\n train_extension = self.train_file.split(\".\")[-1]\r\n assert train_extension in [\"csv\", \"json\"], \"`train_file` should be a csv or a json file.\"\r\n validation_extension = self.validation_file.split(\".\")[-1]\r\n assert (\r\n validation_extension == train_extension\r\n ), \"`validation_file` should have the same extension (csv or json) as `train_file`.\"\r\n\r\n\r\n@dataclass\r\nclass ModelArguments:\r\n \"\"\"\r\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\r\n \"\"\"\r\n\r\n model_name_or_path: str = field(\r\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\r\n )\r\n config_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\r\n )\r\n tokenizer_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\r\n )\r\n cache_dir: Optional[str] = field(\r\n default=None,\r\n metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\r\n )\r\n use_fast_tokenizer: bool = field(\r\n default=True,\r\n metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\r\n )\r\n model_revision: str = field(\r\n default=\"main\",\r\n metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\r\n )\r\n use_auth_token: bool = field(\r\n default=False,\r\n metadata={\r\n \"help\": (\r\n \"Will use the token generated when running `huggingface-cli login` (necessary to use this script \"\r\n \"with private models).\"\r\n )\r\n },\r\n )\r\n ignore_mismatched_sizes: bool = field(\r\n default=False,\r\n 
metadata={\"help\": \"Will enable to load a pretrained model whose head dimensions are different.\"},\r\n )\r\n\r\n\r\ndef main():\r\n # See all possible arguments in src/transformers/training_args.py\r\n # or by passing the --help flag to this script.\r\n # We now keep distinct sets of args, for a cleaner separation of concerns.\r\n\r\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\r\n if len(sys.argv) == 2 and sys.argv[1].endswith(\".json\"):\r\n # If we pass only one argument to the script and it's the path to a json file,\r\n # let's parse it to get our arguments.\r\n model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))\r\n else:\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n\r\n # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The\r\n # information sent is the one passed as arguments along with your Python/PyTorch versions.\r\n send_example_telemetry(\"run_glue\", model_args, data_args)\r\n\r\n # Setup logging\r\n logging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[logging.StreamHandler(sys.stdout)],\r\n )\r\n\r\n if training_args.should_log:\r\n # The default of training_args.log_level is passive, so we set log level at info here to have that default.\r\n transformers.utils.logging.set_verbosity_info()\r\n\r\n log_level = training_args.get_process_log_level()\r\n logger.setLevel(log_level)\r\n datasets.utils.logging.set_verbosity(log_level)\r\n transformers.utils.logging.set_verbosity(log_level)\r\n transformers.utils.logging.enable_default_handler()\r\n transformers.utils.logging.enable_explicit_format()\r\n\r\n # Log on each process the small summary:\r\n logger.warning(\r\n f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\r\n + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\r\n )\r\n logger.info(f\"Training/evaluation parameters {training_args}\")\r\n\r\n # Detecting last checkpoint.\r\n last_checkpoint = None\r\n if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\r\n last_checkpoint = get_last_checkpoint(training_args.output_dir)\r\n if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\r\n raise ValueError(\r\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\r\n \"Use --overwrite_output_dir to overcome.\"\r\n )\r\n elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:\r\n logger.info(\r\n f\"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change \"\r\n \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\r\n )\r\n\r\n # Set seed before initializing model.\r\n set_seed(training_args.seed)\r\n\r\n # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below)\r\n # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub).\r\n #\r\n # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the\r\n # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named\r\n # label if at least two columns are provided.\r\n #\r\n # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this\r\n # single column. You can easily tweak this behavior (see below)\r\n #\r\n # In distributed training, the load_dataset function guarantee that only one local process can concurrently\r\n # download the dataset.\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n raw_datasets = load_dataset(\r\n \"glue\",\r\n data_args.task_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n elif data_args.dataset_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n raw_datasets = load_dataset(\r\n data_args.dataset_name,\r\n data_args.dataset_config_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n else:\r\n # Loading a dataset from your local files.\r\n # CSV/JSON training and evaluation files are needed.\r\n data_files = {\"train\": data_args.train_file, \"validation\": data_args.validation_file}\r\n\r\n # Get the test dataset: you can provide your own CSV/JSON test file (see below)\r\n # when you use `do_predict` without specifying a GLUE benchmark task.\r\n if training_args.do_predict:\r\n if data_args.test_file is not None:\r\n train_extension = data_args.train_file.split(\".\")[-1]\r\n test_extension = data_args.test_file.split(\".\")[-1]\r\n assert (\r\n test_extension == train_extension\r\n ), \"`test_file` should have the same extension (csv or json) as `train_file`.\"\r\n data_files[\"test\"] = data_args.test_file\r\n else:\r\n raise ValueError(\"Need either a GLUE task or a test file for `do_predict`.\")\r\n\r\n for key in data_files.keys():\r\n logger.info(f\"load a local file for {key}: {data_files[key]}\")\r\n\r\n if data_args.train_file.endswith(\".csv\"):\r\n # Loading a dataset from local csv files\r\n raw_datasets = load_dataset(\r\n \"csv\",\r\n data_files=data_files,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n else:\r\n # Loading a dataset from local json files\r\n raw_datasets = load_dataset(\r\n \"json\",\r\n data_files=data_files,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n # See more about loading any type of standard or custom dataset at\r\n # https://huggingface.co/docs/datasets/loading_datasets.html.\r\n\r\n # Labels\r\n if data_args.task_name is not None:\r\n is_regression = data_args.task_name == \"stsb\"\r\n if not is_regression:\r\n label_list = raw_datasets[\"train\"].features[\"label\"].names\r\n num_labels = len(label_list)\r\n else:\r\n num_labels = 1\r\n else:\r\n # Trying to have good defaults here, don't hesitate to tweak to your 
needs.\r\n is_regression = raw_datasets[\"train\"].features[\"label\"].dtype in [\"float32\", \"float64\"]\r\n if is_regression:\r\n num_labels = 1\r\n else:\r\n # A useful fast method:\r\n # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique\r\n label_list = raw_datasets[\"train\"].unique(\"label\")\r\n label_list.sort() # Let's sort it for determinism\r\n num_labels = len(label_list)\r\n\r\n # Load pretrained model and tokenizer\r\n #\r\n # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\r\n # download model & vocab.\r\n config = AutoConfig.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n num_labels=num_labels,\r\n finetuning_task=data_args.task_name,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n use_fast=model_args.use_fast_tokenizer,\r\n revision=model_args.model_revision,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,\r\n )\r\n\r\n # Preprocessing the raw_datasets\r\n if data_args.task_name is not None:\r\n sentence1_key, sentence2_key = task_to_keys[data_args.task_name]\r\n else:\r\n # Again, we try to have some nice defaults but don't hesitate to tweak to your use case.\r\n non_label_column_names = [name for name in raw_datasets[\"train\"].column_names if name != \"label\"]\r\n if \"sentence1\" in non_label_column_names and \"sentence2\" in non_label_column_names:\r\n sentence1_key, sentence2_key = \"sentence1\", \"sentence2\"\r\n else:\r\n if len(non_label_column_names) >= 2:\r\n sentence1_key, sentence2_key = non_label_column_names[:2]\r\n else:\r\n sentence1_key, sentence2_key = non_label_column_names[0], None\r\n\r\n # Padding strategy\r\n if data_args.pad_to_max_length:\r\n padding = \"max_length\"\r\n else:\r\n # We will pad later, dynamically at batch creation, to the max sequence length in each batch\r\n padding = False\r\n\r\n # Some models have set the order of the labels to use, so let's make sure we do use it.\r\n label_to_id = None\r\n if (\r\n model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id\r\n and data_args.task_name is not None\r\n and not is_regression\r\n ):\r\n # Some have all caps in their config, some don't.\r\n label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}\r\n if sorted(label_name_to_id.keys()) == sorted(label_list):\r\n label_to_id = {i: int(label_name_to_id[label_list[i]]) for i in range(num_labels)}\r\n else:\r\n logger.warning(\r\n \"Your model seems to have been trained with labels, but they don't match the dataset: \",\r\n f\"model labels: {sorted(label_name_to_id.keys())}, dataset labels: {sorted(label_list)}.\"\r\n \"\\nIgnoring the model labels as a result.\",\r\n )\r\n elif data_args.task_name is None and not is_regression:\r\n label_to_id = {v: i for i, v 
in enumerate(label_list)}\r\n\r\n if label_to_id is not None:\r\n model.config.label2id = label_to_id\r\n model.config.id2label = {id: label for label, id in config.label2id.items()}\r\n elif data_args.task_name is not None and not is_regression:\r\n model.config.label2id = {l: i for i, l in enumerate(label_list)}\r\n model.config.id2label = {id: label for label, id in config.label2id.items()}\r\n\r\n if data_args.max_seq_length > tokenizer.model_max_length:\r\n logger.warning(\r\n f\"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the\"\r\n f\"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}.\"\r\n )\r\n max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)\r\n\r\n def preprocess_function(examples):\r\n # Tokenize the texts\r\n args = (\r\n (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])\r\n )\r\n result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)\r\n\r\n # Map labels to IDs (not necessary for GLUE tasks)\r\n if label_to_id is not None and \"label\" in examples:\r\n result[\"label\"] = [(label_to_id[l] if l != -1 else -1) for l in examples[\"label\"]]\r\n return result\r\n\r\n with training_args.main_process_first(desc=\"dataset map pre-processing\"):\r\n raw_datasets = raw_datasets.map(\r\n preprocess_function,\r\n batched=True,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n )\r\n if training_args.do_train:\r\n if \"train\" not in raw_datasets:\r\n raise ValueError(\"--do_train requires a train dataset\")\r\n train_dataset = raw_datasets[\"train\"]\r\n if data_args.max_train_samples is not None:\r\n max_train_samples = min(len(train_dataset), data_args.max_train_samples)\r\n train_dataset = train_dataset.select(range(max_train_samples))\r\n\r\n if training_args.do_eval:\r\n if \"validation\" not in raw_datasets and \"validation_matched\" not in raw_datasets:\r\n raise ValueError(\"--do_eval requires a validation dataset\")\r\n eval_dataset = raw_datasets[\"validation_matched\" if data_args.task_name == \"mnli\" else \"validation\"]\r\n if data_args.max_eval_samples is not None:\r\n max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)\r\n eval_dataset = eval_dataset.select(range(max_eval_samples))\r\n\r\n if training_args.do_predict or data_args.task_name is not None or data_args.test_file is not None:\r\n if \"test\" not in raw_datasets and \"test_matched\" not in raw_datasets:\r\n raise ValueError(\"--do_predict requires a test dataset\")\r\n predict_dataset = raw_datasets[\"test_matched\" if data_args.task_name == \"mnli\" else \"test\"]\r\n if data_args.max_predict_samples is not None:\r\n max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)\r\n predict_dataset = predict_dataset.select(range(max_predict_samples))\r\n\r\n # Log a few random samples from the training set:\r\n if training_args.do_train:\r\n for index in random.sample(range(len(train_dataset)), 3):\r\n logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\r\n\r\n # Get the metric function\r\n if data_args.task_name is not None:\r\n metric = evaluate.load(\"glue\", data_args.task_name)\r\n elif is_regression:\r\n metric = evaluate.load(\"mse\")\r\n else:\r\n metric = evaluate.load(\"accuracy\")\r\n\r\n # You can define your custom compute_metrics function. 
It takes an `EvalPrediction` object (a namedtuple with a\r\n # predictions and label_ids field) and has to return a dictionary string to float.\r\n def compute_metrics(p: EvalPrediction):\r\n preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\r\n preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)\r\n result = metric.compute(predictions=preds, references=p.label_ids)\r\n if len(result) > 1:\r\n result[\"combined_score\"] = np.mean(list(result.values())).item()\r\n return result\r\n\r\n # Data collator will default to DataCollatorWithPadding when the tokenizer is passed to Trainer, so we change it if\r\n # we already did the padding.\r\n if data_args.pad_to_max_length:\r\n data_collator = default_data_collator\r\n elif training_args.fp16:\r\n data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\r\n else:\r\n data_collator = None\r\n\r\n # Initialize our Trainer\r\n # warmup_steps=warmup_steps,\r\n # num_train_epochs=args.epochs,\r\n # learning_rate=args.learning_rate,\r\n # fp16=True,\r\n # logging_steps=20,\r\n # evaluation_strategy=\"steps\" if args.val_set_size > 0 else \"no\",\r\n # save_strategy=\"steps\",\r\n # eval_steps=saving_step\r\n print(training_args)\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset if training_args.do_train else None,\r\n eval_dataset=eval_dataset if training_args.do_eval else None,\r\n compute_metrics=compute_metrics,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n )\r\n\r\n # Training\r\n if training_args.do_train:\r\n checkpoint = None\r\n if training_args.resume_from_checkpoint is not None:\r\n checkpoint = training_args.resume_from_checkpoint\r\n elif last_checkpoint is not None:\r\n checkpoint = last_checkpoint\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n metrics = train_result.metrics\r\n max_train_samples = (\r\n data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\r\n )\r\n metrics[\"train_samples\"] = min(max_train_samples, len(train_dataset))\r\n\r\n trainer.save_model() # Saves the tokenizer too for easy upload\r\n\r\n trainer.log_metrics(\"train\", metrics)\r\n trainer.save_metrics(\"train\", metrics)\r\n trainer.save_state()\r\n\r\n # Evaluation\r\n if training_args.do_eval:\r\n logger.info(\"*** Evaluate ***\")\r\n\r\n # Loop to handle MNLI double evaluation (matched, mis-matched)\r\n tasks = [data_args.task_name]\r\n eval_datasets = [eval_dataset]\r\n if data_args.task_name == \"mnli\":\r\n tasks.append(\"mnli-mm\")\r\n valid_mm_dataset = raw_datasets[\"validation_mismatched\"]\r\n if data_args.max_eval_samples is not None:\r\n max_eval_samples = min(len(valid_mm_dataset), data_args.max_eval_samples)\r\n valid_mm_dataset = valid_mm_dataset.select(range(max_eval_samples))\r\n eval_datasets.append(valid_mm_dataset)\r\n combined = {}\r\n\r\n for eval_dataset, task in zip(eval_datasets, tasks):\r\n metrics = trainer.evaluate(eval_dataset=eval_dataset)\r\n\r\n max_eval_samples = (\r\n data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\r\n )\r\n metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\r\n\r\n if task == \"mnli-mm\":\r\n metrics = {k + \"_mm\": v for k, v in metrics.items()}\r\n if task is not None and \"mnli\" in task:\r\n combined.update(metrics)\r\n\r\n trainer.log_metrics(\"eval\", metrics)\r\n trainer.save_metrics(\"eval\", combined if task is not None and \"mnli\" in task else 
metrics)\r\n\r\n if training_args.do_predict:\r\n logger.info(\"*** Predict ***\")\r\n\r\n # Loop to handle MNLI double evaluation (matched, mis-matched)\r\n tasks = [data_args.task_name]\r\n predict_datasets = [predict_dataset]\r\n if data_args.task_name == \"mnli\":\r\n tasks.append(\"mnli-mm\")\r\n predict_datasets.append(raw_datasets[\"test_mismatched\"])\r\n\r\n for predict_dataset, task in zip(predict_datasets, tasks):\r\n # Removing the `label` columns because it contains -1 and Trainer won't like that.\r\n predict_dataset = predict_dataset.remove_columns(\"label\")\r\n predictions = trainer.predict(predict_dataset, metric_key_prefix=\"predict\").predictions\r\n predictions = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1)\r\n\r\n output_predict_file = os.path.join(training_args.output_dir, f\"predict_results_{task}.txt\")\r\n if trainer.is_world_process_zero():\r\n with open(output_predict_file, \"w\") as writer:\r\n logger.info(f\"***** Predict results {task} *****\")\r\n writer.write(\"index\\tprediction\\n\")\r\n for index, item in enumerate(predictions):\r\n if is_regression:\r\n writer.write(f\"{index}\\t{item:3.3f}\\n\")\r\n else:\r\n item = label_list[item]\r\n writer.write(f\"{index}\\t{item}\\n\")\r\n\r\n kwargs = {\"finetuned_from\": model_args.model_name_or_path, \"tasks\": \"text-classification\"}\r\n if data_args.task_name is not None:\r\n kwargs[\"language\"] = \"en\"\r\n kwargs[\"dataset_tags\"] = \"glue\"\r\n kwargs[\"dataset_args\"] = data_args.task_name\r\n kwargs[\"dataset\"] = f\"GLUE {data_args.task_name.upper()}\"\r\n\r\n if training_args.push_to_hub:\r\n trainer.push_to_hub(**kwargs)\r\n else:\r\n trainer.create_model_card(**kwargs)\r\n\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n\r\n\r\n```", "here are run log\r\npreviously:\r\n``` log\r\n 2 {'loss': 1.0339, 'learning_rate': 2.9294117647058824e-05, 'epoch': 0.09}\r\n 3 {'loss': 0.7566, 'learning_rate': 2.8411764705882353e-05, 'epoch': 0.17}\r\n 4 {'loss': 0.6624, 'learning_rate': 2.7529411764705883e-05, 'epoch': 0.26}\r\n 5 {'loss': 0.6149, 'learning_rate': 2.6647058823529412e-05, 'epoch': 0.35}\r\n 6 {'loss': 0.5654, 'learning_rate': 2.576470588235294e-05, 'epoch': 0.43}\r\n 7 {'loss': 0.5128, 'learning_rate': 2.488235294117647e-05, 'epoch': 0.52}\r\n 8 {'loss': 0.5389, 'learning_rate': 2.4e-05, 'epoch': 0.61}\r\n 9 {'loss': 0.5487, 'learning_rate': 2.311764705882353e-05, 'epoch': 0.7}\r\n10 {'loss': 0.5497, 'learning_rate': 2.223529411764706e-05, 'epoch': 0.78}\r\n11 {'loss': 0.5916, 'learning_rate': 2.135294117647059e-05, 'epoch': 0.87}\r\n12 {'loss': 0.5519, 'learning_rate': 2.047058823529412e-05, 'epoch': 0.96}\r\n13 {'loss': 0.508, 'learning_rate': 1.9588235294117648e-05, 'epoch': 1.04}\r\n14 {'loss': 0.4503, 'learning_rate': 1.8705882352941178e-05, 'epoch': 1.13}\r\n15 {'loss': 0.4167, 'learning_rate': 1.7823529411764707e-05, 'epoch': 1.22}\r\n16 {'loss': 0.4888, 'learning_rate': 1.6941176470588237e-05, 'epoch': 1.3}\r\n17 {'loss': 0.4184, 'learning_rate': 1.6058823529411766e-05, 'epoch': 1.39}\r\n18 {'loss': 0.5095, 'learning_rate': 1.5176470588235294e-05, 'epoch': 1.48}\r\n19 {'loss': 0.4301, 'learning_rate': 1.4294117647058823e-05, 'epoch': 1.57}\r\n20 {'loss': 0.4162, 'learning_rate': 1.3411764705882354e-05, 'epoch': 1.65}\r\n21 {'loss': 0.3814, 'learning_rate': 1.2529411764705884e-05, 'epoch': 1.74}\r\n22 {'loss': 0.3884, 'learning_rate': 1.1647058823529412e-05, 'epoch': 1.83}\r\n23 {'loss': 
0.3852, 'learning_rate': 1.0764705882352941e-05, 'epoch': 1.91}\r\n24 {'loss': 0.4088, 'learning_rate': 9.88235294117647e-06, 'epoch': 2.0}\r\n25 {'loss': 0.2877, 'learning_rate': 9e-06, 'epoch': 2.09}\r\n26 {'loss': 0.3444, 'learning_rate': 8.11764705882353e-06, 'epoch': 2.17}\r\n27 {'loss': 0.2726, 'learning_rate': 7.235294117647059e-06, 'epoch': 2.26}\r\n28 {'loss': 0.3031, 'learning_rate': 6.352941176470589e-06, 'epoch': 2.35}\r\n29 {'loss': 0.2965, 'learning_rate': 5.470588235294117e-06, 'epoch': 2.43}\r\n30 {'loss': 0.2905, 'learning_rate': 4.588235294117648e-06, 'epoch': 2.52}\r\n31 {'loss': 0.2672, 'learning_rate': 3.7058823529411767e-06, 'epoch': 2.61}\r\n32 {'loss': 0.2822, 'learning_rate': 2.823529411764706e-06, 'epoch': 2.7}\r\n33 {'loss': 0.3066, 'learning_rate': 1.9411764705882357e-06, 'epoch': 2.78}\r\n34 {'loss': 0.271, 'learning_rate': 1.0588235294117648e-06, 'epoch': 2.87}\r\n35 {'loss': 0.2694, 'learning_rate': 1.764705882352941e-07, 'epoch': 2.96}\r\n```\r\nlatest:\r\n``` log\r\n 4 {'loss': 0.6673, 'learning_rate': 1.9588235294117648e-05, 'epoch': 0.26}\r\n 5 {'loss': 0.6114, 'learning_rate': 1.6058823529411766e-05, 'epoch': 0.35}\r\n 6 {'loss': 0.5715, 'learning_rate': 1.2529411764705884e-05, 'epoch': 0.43}\r\n 7 {'loss': 0.5307, 'learning_rate': 9e-06, 'epoch': 0.52}\r\n 8 {'loss': 0.5452, 'learning_rate': 5.470588235294117e-06, 'epoch': 0.61}\r\n 9 {'loss': 0.5622, 'learning_rate': 1.9411764705882357e-06, 'epoch': 0.7}\r\n10 {'loss': 0.5563, 'learning_rate': 0.0, 'epoch': 0.78}\r\n11 {'loss': 0.569, 'learning_rate': 0.0, 'epoch': 0.87}\r\n12 {'loss': 0.5708, 'learning_rate': 0.0, 'epoch': 0.96}\r\n13 {'loss': 0.5439, 'learning_rate': 0.0, 'epoch': 1.04}\r\n14 {'loss': 0.55, 'learning_rate': 0.0, 'epoch': 1.13}\r\n15 {'loss': 0.5254, 'learning_rate': 0.0, 'epoch': 1.22}\r\n16 {'loss': 0.5534, 'learning_rate': 0.0, 'epoch': 1.3}\r\n17 {'loss': 0.5354, 'learning_rate': 0.0, 'epoch': 1.39}\r\n18 {'loss': 0.5657, 'learning_rate': 0.0, 'epoch': 1.48}\r\n19 {'loss': 0.5479, 'learning_rate': 0.0, 'epoch': 1.57}\r\n20 {'loss': 0.5732, 'learning_rate': 0.0, 'epoch': 1.65}\r\n21 {'loss': 0.5326, 'learning_rate': 0.0, 'epoch': 1.74}\r\n22 {'loss': 0.5624, 'learning_rate': 0.0, 'epoch': 1.83}\r\n23 {'loss': 0.5331, 'learning_rate': 0.0, 'epoch': 1.91}\r\n24 {'loss': 0.5686, 'learning_rate': 0.0, 'epoch': 2.0}\r\n25 {'loss': 0.5393, 'learning_rate': 0.0, 'epoch': 2.09}\r\n26 {'loss': 0.5797, 'learning_rate': 0.0, 'epoch': 2.17}\r\n27 {'loss': 0.5271, 'learning_rate': 0.0, 'epoch': 2.26}\r\n28 {'loss': 0.5346, 'learning_rate': 0.0, 'epoch': 2.35}\r\n29 {'loss': 0.575, 'learning_rate': 0.0, 'epoch': 2.43}\r\n30 {'loss': 0.5359, 'learning_rate': 0.0, 'epoch': 2.52}\r\n31 {'loss': 0.5521, 'learning_rate': 0.0, 'epoch': 2.61}\r\n32 {'loss': 0.5406, 'learning_rate': 0.0, 'epoch': 2.7}\r\n33 {'loss': 0.5593, 'learning_rate': 0.0, 'epoch': 2.78}\r\n34 {'loss': 0.5551, 'learning_rate': 0.0, 'epoch': 2.87}\r\n35 {'loss': 0.5481, 'learning_rate': 0.0, 'epoch': 2.96}\r\n```\r\n" ]
1,685
1,686
1,686
CONTRIBUTOR
null
### System Info

- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

_No response_

### Information

- [x] The official example scripts
- [ ] My own modified scripts

### Tasks

- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

uniform_finetune.py:

```python
trainer = transformers.Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=val_data,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=args.per_gpu_train_batch_size,
        gradient_accumulation_steps=args.gradient_accumulation_steps,
        warmup_steps=warmup_steps,
        num_train_epochs=args.epochs,
        learning_rate=args.learning_rate,
        fp16=True,
        logging_steps=20,
        evaluation_strategy="steps",
        save_strategy="steps",
        eval_steps=saving_step,
        save_steps=saving_step,
        output_dir=output_dir,
        save_total_limit=11,
        load_best_model_at_end=True if args.val_set_size > 0 else False,
        ddp_find_unused_parameters=False if ddp else None,
    ),
    data_collator=transformers.DataCollatorForSeq2Seq(tokenizer, return_tensors="pt", padding=True),
)
```
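As context for the log excerpts quoted in the comments above: with a linear warmup/decay schedule (the `Trainer` default, `lr_scheduler_type="linear"`), the learning rate reaches zero exactly at the scheduler's notion of the total step count, so a too-small total makes it collapse early. The sketch below is illustrative only; the base rate, warmup, and step counts are hypothetical and not taken from this issue.

```python
# Minimal sketch of a linear schedule with warmup, mirroring the shape of
# transformers' get_linear_schedule_with_warmup (illustrative, not the library code).
def linear_lr(step, base_lr, warmup_steps, total_steps):
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr over warmup_steps.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay to zero at total_steps; clamped so it never goes negative.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr = 3e-5        # hypothetical, similar in magnitude to the quoted logs
warmup_steps = 100    # hypothetical
# If the scheduler is built with far fewer total steps than the run actually takes,
# the rate hits 0.0 early in training, which matches the shape of the "latest" log.
for total_steps in (700, 140):
    print(total_steps, [round(linear_lr(s, base_lr, warmup_steps, total_steps), 7)
                        for s in (80, 200, 400, 600)])
```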
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23986/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23985/comments
https://api.github.com/repos/huggingface/transformers/issues/23985/events
https://github.com/huggingface/transformers/pull/23985
1,740,210,893
PR_kwDOCUB6oc5SHKcR
23,985
Regarding pix2struct's bugs in the attentions of the encoder's and decoder's outputs
{ "login": "loveisp", "id": 4267420, "node_id": "MDQ6VXNlcjQyNjc0MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/4267420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loveisp", "html_url": "https://github.com/loveisp", "followers_url": "https://api.github.com/users/loveisp/followers", "following_url": "https://api.github.com/users/loveisp/following{/other_user}", "gists_url": "https://api.github.com/users/loveisp/gists{/gist_id}", "starred_url": "https://api.github.com/users/loveisp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loveisp/subscriptions", "organizations_url": "https://api.github.com/users/loveisp/orgs", "repos_url": "https://api.github.com/users/loveisp/repos", "events_url": "https://api.github.com/users/loveisp/events{/privacy}", "received_events_url": "https://api.github.com/users/loveisp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23985). All of your documentation changes will be reflected on that endpoint.", "Now it has passed all tests, but this is just a temporary measure. In most cases, it can return the correct attentions and cross-attentions, but I have not figured out why there are index errors in a few cases, which causes the test to fail. In theory, as long as the output_attentions parameter is set to True, it should get enough parameters in the tuple. This is very strange. I will continue to investigate when I have time.", "Hi @loveisp ! \r\nAgain thanks for your contribution on this\r\nCan you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D ", "> Hi @loveisp ! Again thanks for your contribution on this Can you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D\r\n\r\nSorry I don't know how to merge it. Could you help me do that?", "Hi @younesbelkada @amyeroberts @loveisp, sorry for bothering you. I have a question and would appreciate your insights. I was wondering why tensors are passed in a tuple instead of using dataclasses with None-able fields. It seems like using tuples can be error-prone and make bug detection and fixing more challenging due to the variable size. Additionally, it may require additional comments in the code. I'm considering creating an example PR to explore the possibility of using dataclasses, but before proceeding, I wanted to check if there are specific reasons for using tuples. Your input would be greatly appreciated. Thank you!", "@artyomxyz There's no need to apologise for asking questions. People are busy so they may not always reply, but questions, especially if they're thoughtfully written like yours, are always welcome :) \r\n\r\nYes, indexing with tuples is error prone! You'll notice that our models accept `return_dict` as an argument in their forward pass, and that by default they return a dataclass. We can't force passing and returning dataclasses as torchscript has only recently (and I'm not sure if fully) [started supporting this](https://github.com/pytorch/pytorch/issues/72901) and we need to maintain backwards compatibility with older library versions. c.f. \r\n* a discussion when I raised something similar: https://github.com/huggingface/transformers/pull/22970#discussion_r1195252303. \r\n* [Here](https://github.com/huggingface/transformers/blob/aac4c7996837b77abe91cd1ea734b6ef74117156/src/transformers/configuration_utils.py#L408C14-L408C14) in our config code where we force `return_dict=False` if using torchscript. \r\n\r\n", "Oh, thank you very much for explanation. It makes much more sense to me now" ]
1,685
1,689
1,686
NONE
null
This change addresses the following bug: incorrect indices were used for "all_attentions" ([link](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1549)) and "all_cross_attentions" ([link](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1550)); the indices should be 3 and 5, respectively, but the original code used 2 and 3.

Code snippet:

```python
import torch
import requests
from PIL import Image

from transformers import AutoProcessor, Pix2StructForConditionalGeneration

processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs, decoder_input_ids=torch.tensor([[0]]), output_attentions=True)
print(outputs.cross_attentions[0].shape)
```

The shape of `cross_attentions` should be (batch size, head num, text tokens, image tokens). After the modification the shape is (1, 12, 1, 2048), which is correct. Before the modification it was (1, 12, 1, 1), which is clearly incorrect for a cross-attention between text and image tokens. The "attention" retrieved by the original code is also wrong: it picks up the position bias instead of the attention weights. There is in fact a correct comment regarding this matter ([comment](https://github.com/huggingface/transformers/blob/539e2281cd97c35ef4122757f26c88f44115fa94/src/transformers/models/pix2struct/modeling_pix2struct.py#L1532)), but the subsequent code does not use the indices given in that comment to retrieve the correct values.
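A quick way to sanity-check the fix described above is to inspect the shapes of every returned attention tensor, not just the first cross-attention. The sketch below reuses the checkpoint from the snippet in the body; the `decoder_attentions`/`cross_attentions` attribute names are the standard seq2seq output fields and are assumed here rather than taken from the PR.

```python
# Illustrative shape check (assumes the google/pix2struct-textcaps-base checkpoint used above).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Pix2StructForConditionalGeneration

processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs, decoder_input_ids=torch.tensor([[0]]), output_attentions=True)

# Decoder self-attention: (batch, heads, text tokens, text tokens) for every layer.
print([tuple(a.shape) for a in outputs.decoder_attentions])
# Cross-attention: (batch, heads, text tokens, image tokens); the last dimension should
# equal the number of image patches (2048 with this processor's defaults), not 1.
print([tuple(a.shape) for a in outputs.cross_attentions])
```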
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23985/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/23985", "html_url": "https://github.com/huggingface/transformers/pull/23985", "diff_url": "https://github.com/huggingface/transformers/pull/23985.diff", "patch_url": "https://github.com/huggingface/transformers/pull/23985.patch", "merged_at": null }