| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k (nullable βŒ€) |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k (nullable βŒ€) |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
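The rows below are raw records with this schema. As a rough illustration of how such a dump can be inspected, here is a minimal sketch using the 🤗 `datasets` library; the repository id is a placeholder and must be replaced with wherever this dump is actually hosted.

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the repository that actually hosts this dump.
ds = load_dataset("your-username/transformers-github-issues", split="train")

# Inspect the schema described in the table above.
print(ds.features)

# Each row is one GitHub issue or pull request from huggingface/transformers.
example = ds[0]
print(example["number"], example["title"], example["state"])

# Pull-request rows carry a non-null `pull_request` dict; plain issues have None there.
prs = ds.filter(lambda row: row["pull_request"] is not None)
print(f"{len(prs)} of {len(ds)} rows are pull requests")
```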
https://api.github.com/repos/huggingface/transformers/issues/25020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25020/comments
https://api.github.com/repos/huggingface/transformers/issues/25020/events
https://github.com/huggingface/transformers/issues/25020
1,816,913,337
I_kwDOCUB6oc5sS-W5
25,020
GenerationMixin: model_kwargs not passed to model in assisted decoding
{ "login": "sinking-point", "id": 17532243, "node_id": "MDQ6VXNlcjE3NTMyMjQz", "avatar_url": "https://avatars.githubusercontent.com/u/17532243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sinking-point", "html_url": "https://github.com/sinking-point", "followers_url": "https://api.github.com/users/sinking-point/followers", "following_url": "https://api.github.com/users/sinking-point/following{/other_user}", "gists_url": "https://api.github.com/users/sinking-point/gists{/gist_id}", "starred_url": "https://api.github.com/users/sinking-point/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sinking-point/subscriptions", "organizations_url": "https://api.github.com/users/sinking-point/orgs", "repos_url": "https://api.github.com/users/sinking-point/repos", "events_url": "https://api.github.com/users/sinking-point/events{/privacy}", "received_events_url": "https://api.github.com/users/sinking-point/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm happy to have a go at fixing this if a maintainer is willing to support.", "@sinking-point thank you for spotting it! Yes, I'd be very happy to support you in fixing this :D ", "@gante No problem, thank you for offering to support.\r\n\r\nI've come up against a problem. This is from the GPT2 `prepare_inputs_for_generation` method, but I imagine it's the same for many other models:\r\n\r\n```python\r\n # only last token for inputs_ids if past is defined in kwargs\r\n if past_key_values:\r\n input_ids = input_ids[:, -1].unsqueeze(-1)\r\n if token_type_ids is not None:\r\n token_type_ids = token_type_ids[:, -1].unsqueeze(-1)\r\n```\r\n\r\nIt assumes that if past_key_values is given, you only need the last token. In assisted generation, this is not the case, as multiple candidate tokens go in one pass.\r\n\r\nArguably, this is a bug in the implementation of `prepare_inputs_for_generation`. It would be better to only cut off as many tokens as we have past_key_values. E.g. with 20 past_key_values and 25 tokens given, it should take the last 5 tokens.\r\n\r\nI have 2 options:\r\n\r\n1. Fix `prepare_inputs_for_generation` in all models. This seems like it could be a lot of work, so I'm not sure I can take that on alone.\r\n2. Modify the output from `prepare_inputs_for_generation` in `assisted_decoding` to correct the `input_ids`. This would be easier, but it removes control of this process from the models. It also may be insufficient, as models may create other kwargs in `prepare_inputs_for_generation` to match the shape of `input_ids`.\r\n\r\nWhat do you think?", "I propose to implement a `prepare_inputs_for_assisted_generation` method in `GenerationMixin`.\r\n\r\nIt will call the `prepare_inputs_for_generation` method and modify the `input_ids` in the returned dict to the correct number of candidate tokens.\r\n\r\nModels can then override this if they need to implement custom logic.", "Hey @sinking-point πŸ‘‹ I appreciate your bias for action with #25135, but I'd like to propose a different route. A route that would benefit us all in the long run and implies a shorter PR :) \r\n\r\nWith your solution in #25135, we have a new function to maintain. From experience, different models will eventually make us add conditional branches to accumulate all expected input flavors -> it will be a burden (and a mess) in the long run 😞 \r\n\r\nYou mentioned an alternative plan, fixing the existing `prepare_inputs_for_generation` to detect how many new tokens there are. In the long run, this is a much better route -- no additional maintenance burden and may unblock future applications with similar problems. However, fixing all models takes a very long time (and may not be the best use of our time, as some models are not used with assisted generation). So... let's modify it for the model you are using now, and raise an exception with instructions regarding how to enable other models :) I've successfully used this strategy in the past, e.g. [here](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/src/transformers/generation/utils.py#L545).\r\n\r\nWould you be up for it?", "Hi @gante . Thanks for taking a look, but I don't agree with your assessment here.\r\n\r\nThe issue with your suggestion is it would break assisted generation for models it currently works with. 
This would be a regression of functionality, and could break people's code.\r\n\r\nThe `prepare_inputs_for_assisted_generation` default implementation is intended to work for most, but not necessarily all models. If a new model is added that it doesn't work with, the model can override this method (as with `prepare_inputs_for_generation`). This avoids the need for adding conditional branches to the default implementation.", "> The issue with your suggestion is it would break assisted generation for models it currently works with. This would be a regression of functionality, and could break people's code.\r\n\r\nHow so? input ids length = Attention mask length - past KV length, which would be true in all generation methods.\r\n\r\n", "Maybe I'm misunderstanding you, but isn't your suggestion to:\r\n\r\n1. Make `prepare_inputs_for_generation` compatible with assisted generation in the models I need only\r\n2. Raise an exception when anyone tries to use assisted generation with other models?\r\n\r\nCurrently, most models work with assisted generation. After implementing your suggestion, they would not.", "I see, you are correct, if we change the code to use `prepare_inputs_for_generation` instead of manual input preparation, then the models that don't update this function will fail with assisted generation because the function only prepares one token at a time. In other words, we have to update them all.\r\n\r\nStill, I'm very biased toward updating them all, it is a much wiser long-term solution and it is not that much more work -- all variations of assisted generation/speculative decoding will need it. It is more work to you (if you still want to implement it), but this sort of choice is critical to ensure we can keep maintaining `transformers` πŸ€— ", "I don't want to go through 170+ models and fix them manually one by one.\r\n\r\nI'm hoping they're similar enough that I can script it. I'll give that a go.", "If I'm honest though, I still disagree with you that this is a more maintainable approach.\r\n\r\nThe reason this repetitive effort is necessary is that the logic is reapeated for every model rather than being implemented in the mixin.\r\n\r\nIf the logic in my PR needs to be changed, you just have to change it once, in one place (the mixin). Your concern regarding an eventual need for conditional branches is addressed by the ability of models to override the function, implementing their own logic only if they need to rather than every single time.\r\n\r\nIf I change all the `prepare_inputs_for_generation` functions individually and then the logic needs to be changed again, someone will have to go through and update all the models again.\r\n\r\nIf we're optimising for future dev time, we should focus on hoisting logic from the models to the mixin when the opportunity presents itself, in my opinion.\r\n\r\nIs there anyone who can chime in to give a third opinion?", "> The reason this repetitive effort is necessary is that the logic is reapeated for every model rather than being implemented in the mixin.\r\n\r\nThe reason the logic is repeated is a core principle of our design philosophy -- https://huggingface.co/docs/transformers/philosophy. This philosophy is one of the reasons `transformers` is so successful.\r\n\r\nYou are saying that we can code the wrapper once in the mixin and then overwrite it on a per-model basis... so pretty much the same as updating `prepare_inputs_for_generation`, but with extra steps and additional encapsulation. 
This is precisely why I want to avoid going this route.\r\n\r\nAs the main developer and maintainer of everything `generate`-related, I can assure you your suggestion is worse in the long run. Generalist functions containing functionality that is strongly model-dependent are the main reason why `generate` is so hard to develop at the moment, their complexity grows very quickly.\r\n\r\nTo wrap up: if we end up going in this direction, there has to be a much stronger reason than saving an hour or two of work.\r\n\r\n> Is there anyone who can chime in to give a third opinion?\r\n\r\nFeel free to ping others, but ultimately it's me who you have to convince :)", "I hope I didn't come across as trying to undermine your authority. I just find that when there's a disagreement between two people, a third perspective can help to form a better consensus. If you agree, you would know better than me who to tag.\r\n\r\n> You are saying that we can code the wrapper once in the mixin and then overwrite it on a per-model basis... so pretty much the same as updating prepare_inputs_for_generation, but with extra steps and additional encapsulation. This is precisely why I want to avoid going this route.\r\n\r\nIt's not the same. With my solution, in most cases the default implementation would suffice and there would be no need to override it. In fact, as it stands the tests pass for all models - none of them need to override the method. I'm just saying that in the event that, as you fear, you would have to add a conditional branch to the default implementation, you could instead override it in the model.\r\n\r\nI don't think we have any fundamental disagreement on design philosophy. At the extreme end of the spectrum, you could do away with `GenerationUtils` and implement it all in every model. I think we can agree that to take the 'repeat yourself' philosophy to that extent is impractical. All we disagree on is where to draw the line.\r\n\r\nThat said, since you're the one who will have to deal with the consequences of whatever approach we take, I'm willing to defer to your preference.", "> I hope I didn't come across as trying to undermine your authority. I just find that when there's a disagreement between two people, a third perspective can help to form a better consensus. \r\n\r\nNot interpreted as so πŸ€— We are internally aligned that `generate` consists of too many nested calls and that adding generalist functions on model-dependent parts is a recipe for chaos, hence my assertive comment. I hope this doesn't come across as downplaying your comments and suggestions -- since we bear the load of maintenance, sometimes we have to say no to seemingly good suggestions, using our past experience as a guide.\r\n\r\n> All we disagree on is where to draw the line.\r\n\r\nPrecisely :) \r\n\r\n> That said, since you're the one who will have to deal with the consequences of whatever approach we take, I'm willing to defer to your preference.\r\n\r\nThank you for being understanding πŸ€— Let me know if I can help in any way!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This should not be closed yet. It should be closed when https://github.com/huggingface/transformers/pull/25242 is merged." ]
1,690
1,697
1,697
CONTRIBUTOR
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("gpt2") assist = AutoModelForCausalLM.from_pretrained("distilgpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = tokenizer("The first rule of fight", return_tensors='pt') outputs = model.generate(**inputs, token_type_ids=torch.tensor([[0,0,0,0,0]], dtype=torch.long)) print(tokenizer.decode(outputs[0])) # output: The first rule of fight!!!!!!!!!!!!!!! outputs = model.generate(**inputs, token_type_ids=torch.tensor([[0,0,0,0,0]], dtype=torch.long), num_beams=1, assistant_model=assist) print(tokenizer.decode(outputs[0])) # output: The first rule of fight-or-flight is to be prepared for the enemy. If you are ``` ### Expected behavior I would expect the outputs to be the same for the assisted generation as for the regular generation, as the token_type_ids is being passed into generate in both cases. It is expected that the `generate` method passes extra arguments to the model via its `prepare_inputs_for_generation` method. In fact, the assisted generation does not forward the `model_kwargs` to the model as the other generation methods do.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25020/timeline
completed
null
null
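The thread in the record above discusses making `prepare_inputs_for_generation` drop only the tokens already covered by the cache, rather than always keeping just the last token, so assisted decoding can submit several candidate tokens at once. A minimal sketch of that idea, assuming the usual `(batch, num_heads, seq_len, head_dim)` cache layout; this is not the code from the actual PR:

```python
def prepare_inputs_for_generation(input_ids, past_key_values=None, **kwargs):
    # Sketch of the behaviour discussed above: keep only the tokens the cache
    # has not seen yet (e.g. with 20 cached positions and 25 input tokens,
    # forward the last 5), instead of unconditionally keeping one token.
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]  # assumed layout (batch, heads, seq, dim)
        input_ids = input_ids[:, past_length:]
    return {"input_ids": input_ids, "past_key_values": past_key_values, **kwargs}
```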
https://api.github.com/repos/huggingface/transformers/issues/25019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25019/comments
https://api.github.com/repos/huggingface/transformers/issues/25019/events
https://github.com/huggingface/transformers/pull/25019
1,816,799,255
PR_kwDOCUB6oc5WJjfS
25,019
Add SAFFU language model to the transformers library
{ "login": "haoranzhao419", "id": 79516199, "node_id": "MDQ6VXNlcjc5NTE2MTk5", "avatar_url": "https://avatars.githubusercontent.com/u/79516199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoranzhao419", "html_url": "https://github.com/haoranzhao419", "followers_url": "https://api.github.com/users/haoranzhao419/followers", "following_url": "https://api.github.com/users/haoranzhao419/following{/other_user}", "gists_url": "https://api.github.com/users/haoranzhao419/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoranzhao419/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoranzhao419/subscriptions", "organizations_url": "https://api.github.com/users/haoranzhao419/orgs", "repos_url": "https://api.github.com/users/haoranzhao419/repos", "events_url": "https://api.github.com/users/haoranzhao419/events{/privacy}", "received_events_url": "https://api.github.com/users/haoranzhao419/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,690
1,690
1,690
NONE
null
# What does this PR do? We add a language model called SAFFU (Self-Attention Feed-Forward Unit) to Hugging Face, with the goal of improving the computational efficiency of computing the attention distribution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25019/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25019", "html_url": "https://github.com/huggingface/transformers/pull/25019", "diff_url": "https://github.com/huggingface/transformers/pull/25019.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25019.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25018/comments
https://api.github.com/repos/huggingface/transformers/issues/25018/events
https://github.com/huggingface/transformers/pull/25018
1,816,789,025
PR_kwDOCUB6oc5WJhgB
25,018
Fix typo in LlamaTokenizerFast docstring example
{ "login": "sbrunk", "id": 3939659, "node_id": "MDQ6VXNlcjM5Mzk2NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3939659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbrunk", "html_url": "https://github.com/sbrunk", "followers_url": "https://api.github.com/users/sbrunk/followers", "following_url": "https://api.github.com/users/sbrunk/following{/other_user}", "gists_url": "https://api.github.com/users/sbrunk/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbrunk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbrunk/subscriptions", "organizations_url": "https://api.github.com/users/sbrunk/orgs", "repos_url": "https://api.github.com/users/sbrunk/repos", "events_url": "https://api.github.com/users/sbrunk/events{/privacy}", "received_events_url": "https://api.github.com/users/sbrunk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25018). All of your documentation changes will be reflected on that endpoint." ]
1,690
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Fix typo in LlamaTokenizerFast docstring example <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25018", "html_url": "https://github.com/huggingface/transformers/pull/25018", "diff_url": "https://github.com/huggingface/transformers/pull/25018.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25018.patch", "merged_at": 1690205878000 }
https://api.github.com/repos/huggingface/transformers/issues/25017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25017/comments
https://api.github.com/repos/huggingface/transformers/issues/25017/events
https://github.com/huggingface/transformers/pull/25017
1,816,781,221
PR_kwDOCUB6oc5WJgCx
25,017
🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean
{ "login": "keonju2", "id": 54880474, "node_id": "MDQ6VXNlcjU0ODgwNDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54880474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keonju2", "html_url": "https://github.com/keonju2", "followers_url": "https://api.github.com/users/keonju2/followers", "following_url": "https://api.github.com/users/keonju2/following{/other_user}", "gists_url": "https://api.github.com/users/keonju2/gists{/gist_id}", "starred_url": "https://api.github.com/users/keonju2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keonju2/subscriptions", "organizations_url": "https://api.github.com/users/keonju2/orgs", "repos_url": "https://api.github.com/users/keonju2/repos", "events_url": "https://api.github.com/users/keonju2/events{/privacy}", "received_events_url": "https://api.github.com/users/keonju2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please do not open and close four different PRs for the same file. You can update the original PR.", "곡듀여 λ²ˆμ—­ 및 μˆ˜μ •μ„ 톡해 ν›Œλ₯­ν•œ 결과물이 λ‚˜μ˜€μ‹  것 κ°™λ„€μš”! μ „μ²΄μ μœΌλ‘œ 잘 μ½νžˆλŠ” 것 κ°™μŠ΅λ‹ˆλ‹€~ ", "_The documentation is not available anymore as the PR was closed or merged._", "Could you review this PR? πŸ˜ƒ\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,690
1,691
1,691
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `add_tensorflow_model.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25017/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25017", "html_url": "https://github.com/huggingface/transformers/pull/25017", "diff_url": "https://github.com/huggingface/transformers/pull/25017.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25017.patch", "merged_at": 1691495794000 }
https://api.github.com/repos/huggingface/transformers/issues/25016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25016/comments
https://api.github.com/repos/huggingface/transformers/issues/25016/events
https://github.com/huggingface/transformers/pull/25016
1,816,772,087
PR_kwDOCUB6oc5WJeN_
25,016
🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean
{ "login": "keonju2", "id": 54880474, "node_id": "MDQ6VXNlcjU0ODgwNDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54880474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keonju2", "html_url": "https://github.com/keonju2", "followers_url": "https://api.github.com/users/keonju2/followers", "following_url": "https://api.github.com/users/keonju2/following{/other_user}", "gists_url": "https://api.github.com/users/keonju2/gists{/gist_id}", "starred_url": "https://api.github.com/users/keonju2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keonju2/subscriptions", "organizations_url": "https://api.github.com/users/keonju2/orgs", "repos_url": "https://api.github.com/users/keonju2/repos", "events_url": "https://api.github.com/users/keonju2/events{/privacy}", "received_events_url": "https://api.github.com/users/keonju2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,690
1,690
1,690
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `add_tensorflow_model.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @keonju2 <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> <!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25016/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25016", "html_url": "https://github.com/huggingface/transformers/pull/25016", "diff_url": "https://github.com/huggingface/transformers/pull/25016.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25016.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25015/comments
https://api.github.com/repos/huggingface/transformers/issues/25015/events
https://github.com/huggingface/transformers/pull/25015
1,816,771,826
PR_kwDOCUB6oc5WJeKz
25,015
🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean
{ "login": "keonju2", "id": 54880474, "node_id": "MDQ6VXNlcjU0ODgwNDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54880474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keonju2", "html_url": "https://github.com/keonju2", "followers_url": "https://api.github.com/users/keonju2/followers", "following_url": "https://api.github.com/users/keonju2/following{/other_user}", "gists_url": "https://api.github.com/users/keonju2/gists{/gist_id}", "starred_url": "https://api.github.com/users/keonju2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keonju2/subscriptions", "organizations_url": "https://api.github.com/users/keonju2/orgs", "repos_url": "https://api.github.com/users/keonju2/repos", "events_url": "https://api.github.com/users/keonju2/events{/privacy}", "received_events_url": "https://api.github.com/users/keonju2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,690
1,690
1,690
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `your_file.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @keonju2 <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> <!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25015/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25015", "html_url": "https://github.com/huggingface/transformers/pull/25015", "diff_url": "https://github.com/huggingface/transformers/pull/25015.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25015.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25014
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25014/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25014/comments
https://api.github.com/repos/huggingface/transformers/issues/25014/events
https://github.com/huggingface/transformers/issues/25014
1,816,751,521
I_kwDOCUB6oc5sSW2h
25,014
When I use the command "python convert_llama_weights_to_hf.py --input_dir /xxx/llama/ --model_size 70B --output_dir /xxxx/Llama-2-70b-chat-hf" killed
{ "login": "cm-liushaodong", "id": 44772254, "node_id": "MDQ6VXNlcjQ0NzcyMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/44772254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cm-liushaodong", "html_url": "https://github.com/cm-liushaodong", "followers_url": "https://api.github.com/users/cm-liushaodong/followers", "following_url": "https://api.github.com/users/cm-liushaodong/following{/other_user}", "gists_url": "https://api.github.com/users/cm-liushaodong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cm-liushaodong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cm-liushaodong/subscriptions", "organizations_url": "https://api.github.com/users/cm-liushaodong/orgs", "repos_url": "https://api.github.com/users/cm-liushaodong/repos", "events_url": "https://api.github.com/users/cm-liushaodong/events{/privacy}", "received_events_url": "https://api.github.com/users/cm-liushaodong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @cm-liushaodong \r\nThe llama-70B model weights file is at least 140GB in half precision, sadly I think that you need an instance of at least that CPU memory size to download the weights and load them in CPU memory. Maybe @ArthurZucker can confirm as he has used that script", "Yes, as[ the documentation mentions, ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L52):\r\n```python \r\nImportant note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions\r\ncome in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).\r\n```\r\n`killled` means you probably had a OOM issue. If you are on linux you can check with `dmesg -T| grep -E -i -B100 'killed process'` ", "Thanks all, I used a machine with a larger memory to transcode successfully.", "After you transcode the code on a machine with a larger memory, were you able to take it back and fine-tune (train) the model on you older machine (which had lesser memory)? If so, please share your transcoded model, since I have the exact same problem. \r\n\r\nI don’t have a machine with a larger memory to transcode the code (i.e. to change the 70B model weights to Huggingface format using the convert_llama_weights_to_hf.py Python code). However, if can already get the 70B model weights in Huggingface format (i.e. your transcoded model you just made with a machine with a larger memory), I can then train (fine-tune) it on my 4090 GPU machine. \r\n\r\nI don't care how long it takes (can live with it being slow to train).", "Or does anyone know where I can download the 70B model weights ALREADY converted in Huggingface format? It is probably around 250 GB", "@BramVanroy You can search for \"Llama 70B\" on hf.co and find a list of checkpoints: https://huggingface.co/models?search=llama%2070b", "@amyeroberts Not sure why you tagged me - perhaps by accident?", "Yes, sorry, my apologies! I meant to tag @BrookMakF ", " I'm new to this so please break it down for me. Here is what I tried:\r\n \r\n* I went to the list of checkpoints on hf.co and chose this one: https://huggingface.co/meta-llama/Llama-2-70b-hf/tree/main\r\n\r\n* I then went to the \"Files and versions\" tab and downloaded these two files ('pytorch_model.bin.index.json' and 'config.json') into a new folder called \"new_70B_hf_format\" under my Llama 2 folder (path: β€˜llama-2_for_70B/llama/new_70B_hf_format/’)\r\n\r\n*Then I tried to load this 70B model in Python using transformers as below:\r\n `from transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"../new_70B_hf_format\") ` \r\n\r\nHowever, the kernel dies. Is there a way to PRE-DOWNLOAD the 70B model in Huggingface format, instead of loading the model in Python? I’m not sure what I’m doing wrong. I have attached screenshots to better explain, thank you for the help. \r\n\r\n\r\n![Screenshot-2023-09-05-21-41-16](https://github.com/huggingface/transformers/assets/5878134/421c43c9-854c-420d-a5ee-c402b3331897)\r\n![Screenshot-2023-09-05-21-40-36](https://github.com/huggingface/transformers/assets/5878134/1d31b331-d3a9-44d7-80b0-2af4ddfdb4af)\r\n![Screenshot-2023-09-05-21-43-54](https://github.com/huggingface/transformers/assets/5878134/9205e7da-ed29-4d17-b968-15775c352a26)\r\n", "@amyeroberts πŸ€—", "@BrookMakF Please have patience. 
Pinging maintainers when they don't immediately reply to your issue isn't sustainable or scalable behaviour - if everyone did it, it wouldn't be impossible for us to manage our notifications. We're all very busy on different pieces of work across different repos and, importantly, on different timezones too. \r\n\r\nThe files you downloaded aren't the model weights - `pytorch_model.bin.index.json` is a json file which contains the mapping of model weights to their respective shard, and config.json contains the model config file. \r\n\r\nWhen you call `from_pretrained`, the model weights are first downloaded to a cache folder on your local machine and the loaded into the model. You can specify the model checkpoint from the hub directly: \r\n\r\n```python\r\nmodel = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-70b-hf\")\r\n```\r\n\r\nNext time it is called, it will load the downloaded weights from your local machine. \r\n\r\nIf you want to copy all of the weights and model files outside of a python session, you can clone the checkpoint repo. Make sure you have git lfs installed. \r\n\r\n```\r\ngit clone https://huggingface.co/meta-llama/Llama-2-70b-hf\r\n```\r\n\r\nAnd the pass the path to the repo when calling `from_pretrained` \r\n", "Thank you for taking the time to provide guidance on this matter. Your insights on managing notifications and considering the global nature of contributors' engagements are duly noted.\r\n\r\nI downloaded all of the weights and model files outside of a python session, since I was having RAM memory issue. I only have one 4090 GPU server with 24GB VRAM (which is an online rented from Hostkey). I cloned the 513 GB Llama-2-70b-hf checkpoint repo since I had lots of disk space on my server. However, I still have the same issue, it seems to run out of VRAM memory, it says β€œThe kernel for llama-recipes/quickstart.ipynb appears to have died. It will restart automatically”\r\n\r\nThe β€˜llama-recipes/quickstart.ipynb’ wrote β€œThis notebook shows how to train a Llama 2 model on a single GPU (e.g. A10 with 24GB) using int8 quantization and LoRA”. \r\nHowever, I couldn’t train / fine-tune the 70B model on my 4090 GPU server with 24GB. \r\n\r\nThough, I was able to fine-tune the 7B & 13B models, I couldn’t fine-tune the 70B on this machine. \r\n\r\nDid anyone mange to fine-tune 70B on such machine? Are there things I can change?\r\n\r\nHere are things I tried and failed:\r\n\r\n1. I manually specified which layers get offloaded to cpu using the device_map argument\r\n\r\n```python\r\ndevice_map = { \r\n\"model.layers.0.self_attn.o_proj.weight\": \"cpu\",\r\n\"model.layers.0.self_attn.q_proj.weight\": \"cpu\",\r\n\"model.layers.0.self_attn.rotary_emb.inv_freq\": \"cpu\",\r\n\"model.layers.0.self_attn.v_proj.weight\": \"cpu\",\r\n\"model.layers.1.input_layernorm.weight\": \"cpu\",\r\n\"model.layers.1.mlp.down_proj.weight\": \"cpu\",\r\n\"model.layers.1.mlp.gate_proj.weight\": \"cpu\",\r\n\"model.layers.1.mlp.up_proj.weight\": \"cpu\",\r\n\"model.layers.1.post_attention_layernorm.weight\": \"cpu\",\r\n#more layers …\r\n}\r\nmodel = LlamaForCausalLM.from_pretrained(model_id, device_map=device_map, torch_dtype=torch.float16)\r\n```\r\nBut I keep getting the error:\r\n\r\n```\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)\r\n```\r\n\r\n\r\n2. 
I tried to copy the model back to cuda\r\n\r\n```python\r\nmodel = model.to('cuda')\r\n```\r\nHowever, it runs out of VRAM memory, it says:\r\n```\r\nOutOfMemoryError: CUDA out of memory. Tried to allocate 448.00 MiB (GPU 0; 23.65 GiB total capacity; 23.11 GiB already allocated; 61.69 MiB free; 23.11 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n![Screenshot-2023-09-07-20-59-54_3](https://github.com/huggingface/transformers/assets/5878134/dc662232-ab1a-4e88-bd69-8ab2a02bbd6e)\r\n![Screenshot-2023-09-07-20-59-54_2](https://github.com/huggingface/transformers/assets/5878134/10bdafae-0048-4898-a5b0-a9bab9113d0b)\r\n![Screenshot-2023-09-07-20-59-54](https://github.com/huggingface/transformers/assets/5878134/a0343987-7697-420a-b58b-0f83c3e05954)\r\n", "Hi @BrookMakF, \r\n\r\nQuestions such as these: how to fit your model onto a device are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. I recommend reading through the docs to see the recommended ways for loading models e.g. https://huggingface.co/docs/accelerate/usage_guides/big_modeling. " ]
1,690
1,694
1,690
NONE
null
### System Info Hi,@ArthurZucker,@younesbelkada When I use the command "python convert_llama_weights_to_hf.py --input_dir /xxx/llama/ --model_size 70B --output_dir /xxxx/Llama-2-70b-chat-hf", the following error occurred: <img width="1342" alt="image" src="https://github.com/huggingface/transformers/assets/44772254/58946c17-e65d-4d56-b675-361ff0832576"> Additional Information: Memory Size:116GB I noticed: Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). I would like to ask, is there a parameter control, I can use disk instead of memory to complete the task? I can live with him running slower. ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python convert_llama_weights_to_hf.py --input_dir /home/xxxx/llama/ --model_size 70B --output_dir /home/xxxx/llama/Llama-2-70b-chat-hf ### Expected behavior it can execute successfully
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25014/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25014/timeline
completed
null
null
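The record above ends with the converted 70B checkpoint not fitting into RAM or into a single 24 GB GPU. As a hedged sketch of the approach pointed to in the thread (automatic device placement plus quantization, per the accelerate big-model docs), not a recipe taken from the issue itself; the local path is a placeholder for the output of `convert_llama_weights_to_hf.py`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Placeholder path to the locally converted checkpoint.
checkpoint = "/home/xxxx/llama/Llama-2-70b-chat-hf"

# device_map="auto" lets accelerate spread layers across GPU, CPU RAM and disk offload;
# 4-bit quantization shrinks the weights enough that a 24 GB card becomes usable.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
)
```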
https://api.github.com/repos/huggingface/transformers/issues/25013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25013/comments
https://api.github.com/repos/huggingface/transformers/issues/25013/events
https://github.com/huggingface/transformers/issues/25013
1,816,711,404
I_kwDOCUB6oc5sSNDs
25,013
[i18n-<languageCode>] Translating docs to <languageName>czech republic
{ "login": "Denyweeeed", "id": 137151489, "node_id": "U_kgDOCCzEAQ", "avatar_url": "https://avatars.githubusercontent.com/u/137151489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Denyweeeed", "html_url": "https://github.com/Denyweeeed", "followers_url": "https://api.github.com/users/Denyweeeed/followers", "following_url": "https://api.github.com/users/Denyweeeed/following{/other_user}", "gists_url": "https://api.github.com/users/Denyweeeed/gists{/gist_id}", "starred_url": "https://api.github.com/users/Denyweeeed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Denyweeeed/subscriptions", "organizations_url": "https://api.github.com/users/Denyweeeed/orgs", "repos_url": "https://api.github.com/users/Denyweeeed/repos", "events_url": "https://api.github.com/users/Denyweeeed/events{/privacy}", "received_events_url": "https://api.github.com/users/Denyweeeed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Denyweeeed I would like to contribute to the Telugu language translation", "Please properly fill the template when opening such issues." ]
1,690
1,690
1,690
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the πŸ€— [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers πŸ€—). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * πŸ™‹ If you'd like others to help you with the translation, you can also post in the πŸ€— [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go πŸ”₯ -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25013/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25012/comments
https://api.github.com/repos/huggingface/transformers/issues/25012/events
https://github.com/huggingface/transformers/pull/25012
1,816,662,059
PR_kwDOCUB6oc5WJJlY
25,012
[check_config_docstrings.py] improve diagnostics
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,690
1,690
1,690
CONTRIBUTOR
null
It's difficult to know what to do when one gets an error: ``` python utils/check_config_docstrings.py Traceback (most recent call last): File "utils/check_config_docstrings.py", line 92, in <module> check_config_docstrings_have_checkpoints() File "utils/check_config_docstrings.py", line 88, in check_config_docstrings_have_checkpoints raise ValueError(f"The following configurations don't contain any valid checkpoint:\n{message}") ValueError: The following configurations don't contain any valid checkpoint: IdeficsConfig Exited with code exit status 1 ``` After figuring out what it wants, proposing a better assert message.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25012/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25012", "html_url": "https://github.com/huggingface/transformers/pull/25012", "diff_url": "https://github.com/huggingface/transformers/pull/25012.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25012.patch", "merged_at": 1690172247000 }
https://api.github.com/repos/huggingface/transformers/issues/25011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25011/comments
https://api.github.com/repos/huggingface/transformers/issues/25011/events
https://github.com/huggingface/transformers/issues/25011
1,816,644,197
I_kwDOCUB6oc5sR8pl
25,011
Transformers 4.31.0 Runtime error trying to load model saved as 8bit on HF fails
{ "login": "mediocreatmybest", "id": 80406625, "node_id": "MDQ6VXNlcjgwNDA2NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/80406625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mediocreatmybest", "html_url": "https://github.com/mediocreatmybest", "followers_url": "https://api.github.com/users/mediocreatmybest/followers", "following_url": "https://api.github.com/users/mediocreatmybest/following{/other_user}", "gists_url": "https://api.github.com/users/mediocreatmybest/gists{/gist_id}", "starred_url": "https://api.github.com/users/mediocreatmybest/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mediocreatmybest/subscriptions", "organizations_url": "https://api.github.com/users/mediocreatmybest/orgs", "repos_url": "https://api.github.com/users/mediocreatmybest/repos", "events_url": "https://api.github.com/users/mediocreatmybest/events{/privacy}", "received_events_url": "https://api.github.com/users/mediocreatmybest/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @mediocreatmybest \r\nThanks for the very clean reproducer, I managed to repro the issue and propose a fix in https://github.com/huggingface/transformers/pull/25047\r\nThe fix should be now live on the main branch of transformers and you should be able to use that by installing transformers from source. ", "Champion :) thanks! πŸ‘" ]
1,690
1,690
1,690
NONE
null
### System Info Transformers 4.31.0 Python 3.10.6 Linux and Windows (Same issue on both) Bitsandbytes 0.39.1 to 0.40x ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi! I've just found I'm now getting the following error when trying to load a model I've saved as 8bit on the Huggingface.co website: The error is: ``` CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 8.6 CUDA SETUP: Detected CUDA version 117 CUDA SETUP: Loading binary /home/user/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so... Traceback (most recent call last): File "/home/user/testing/blip2_testing.py", line 12, in <module> model = Blip2ForConditionalGeneration.from_pretrained( File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained ) = cls._load_pretrained_model( File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3260, in _load_pretrained_model new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model( File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 725, in _load_state_dict_into_meta_model set_module_quantized_tensor_to_device( File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py", line 116, in set_module_quantized_tensor_to_device new_value = nn.Parameter(new_value, requires_grad=old_value.requires_grad) File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parameter.py", line 36, in __new__ return torch.Tensor._make_subclass(cls, data, requires_grad) RuntimeError: Only Tensors of floating point and complex dtype can require gradients ``` A minimal python script that I was testing: ``` from PIL import Image import requests from transformers import AutoProcessor, Blip2ForConditionalGeneration import torch device = "cuda" if torch.cuda.is_available() else "cpu" processor = AutoProcessor.from_pretrained( "Mediocreatmybest/blip2-opt-2.7b_8bit", load_in_8bit=True, ) model = Blip2ForConditionalGeneration.from_pretrained( "Mediocreatmybest/blip2-opt-2.7b_8bit", load_in_8bit=True, ) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) url1 = "http://images.cocodataset.org/val2017/000000039769.jpg" image1 = Image.open(requests.get(url, stream=True).raw) batch = [image, image1] inputs = processor(images=batch, return_tensors="pt").to(device, torch.float16) generated_ids = model.generate(**inputs, min_new_tokens=8, max_new_tokens=30) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text) ``` If I try loading a model that hasn't been saved in 8bit, for example the original Salesforce/blip2-opt-2.7b this then loads without issue and runs fine, from what I've been able to test it seems to be the 8bit saved model only. Dropping the Transformers version back to 4.30.2 and the script runs fine without error. ### Expected behavior Using Transformers version 4.30.2 the above example script run normally and outputs the described text. Updating to Transformers 4.31.0 the above example script fails when trying to use an 8bit saved model such as Mediocreatmybest/blip2-opt-2.7b_8bit. 
Using Transformers 4.31.0 on the original model works correctly when passing load_in_8bit=True.
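For reference, here is a minimal sketch of the load path the report above describes as still working on 4.31.0: quantizing the original `Salesforce/blip2-opt-2.7b` checkpoint at load time rather than loading the checkpoint that was already saved in 8-bit. This only mirrors the reporter's workaround and does not address the regression with pre-saved 8-bit checkpoints.

```python
# Sketch of the working path from the report: quantize the original checkpoint
# on the fly instead of loading a model that was already saved in 8-bit.
from transformers import AutoProcessor, Blip2ForConditionalGeneration

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    load_in_8bit=True,  # quantized at load time; works on 4.31.0 per the report
)
```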
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25011/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25011/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25010
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25010/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25010/comments
https://api.github.com/repos/huggingface/transformers/issues/25010/events
https://github.com/huggingface/transformers/pull/25010
1,816,640,007
PR_kwDOCUB6oc5WJFub
25,010
🌐 [i18n-KO] Translated `philosophy.md` to Korean
{ "login": "TaeYupNoh", "id": 107118671, "node_id": "U_kgDOBmKATw", "avatar_url": "https://avatars.githubusercontent.com/u/107118671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TaeYupNoh", "html_url": "https://github.com/TaeYupNoh", "followers_url": "https://api.github.com/users/TaeYupNoh/followers", "following_url": "https://api.github.com/users/TaeYupNoh/following{/other_user}", "gists_url": "https://api.github.com/users/TaeYupNoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/TaeYupNoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TaeYupNoh/subscriptions", "organizations_url": "https://api.github.com/users/TaeYupNoh/orgs", "repos_url": "https://api.github.com/users/TaeYupNoh/repos", "events_url": "https://api.github.com/users/TaeYupNoh/events{/privacy}", "received_events_url": "https://api.github.com/users/TaeYupNoh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "May you please review this PR? :) \r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,690
1,691
1,691
CONTRIBUTOR
null
# What does this PR do? Translated the `philosophy.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25010/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25010/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25010", "html_url": "https://github.com/huggingface/transformers/pull/25010", "diff_url": "https://github.com/huggingface/transformers/pull/25010.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25010.patch", "merged_at": 1691653851000 }
https://api.github.com/repos/huggingface/transformers/issues/25009
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25009/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25009/comments
https://api.github.com/repos/huggingface/transformers/issues/25009/events
https://github.com/huggingface/transformers/pull/25009
1,816,636,498
PR_kwDOCUB6oc5WJFCL
25,009
uploading saffu language model to the transformers library
{ "login": "haoranzhao419", "id": 79516199, "node_id": "MDQ6VXNlcjc5NTE2MTk5", "avatar_url": "https://avatars.githubusercontent.com/u/79516199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoranzhao419", "html_url": "https://github.com/haoranzhao419", "followers_url": "https://api.github.com/users/haoranzhao419/followers", "following_url": "https://api.github.com/users/haoranzhao419/following{/other_user}", "gists_url": "https://api.github.com/users/haoranzhao419/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoranzhao419/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoranzhao419/subscriptions", "organizations_url": "https://api.github.com/users/haoranzhao419/orgs", "repos_url": "https://api.github.com/users/haoranzhao419/repos", "events_url": "https://api.github.com/users/haoranzhao419/events{/privacy}", "received_events_url": "https://api.github.com/users/haoranzhao419/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Need to fix some issues of the code" ]
1,690
1,690
1,690
NONE
null
# What does this PR do? In this PR, we upload a new language model, **SAFFU** (Self-Attention-Feed-Forward-Unit), to the Hugging Face transformers library. The model is highly efficient to use and fast to train: we derive an efficient way of computing the self-attention matrix with an explicit mathematical solution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25009/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25009", "html_url": "https://github.com/huggingface/transformers/pull/25009", "diff_url": "https://github.com/huggingface/transformers/pull/25009.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25009.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25008/comments
https://api.github.com/repos/huggingface/transformers/issues/25008/events
https://github.com/huggingface/transformers/issues/25008
1,816,514,256
I_kwDOCUB6oc5sRc7Q
25,008
[bug] `token` not supported in `AutoModel`
{ "login": "ain-soph", "id": 13214530, "node_id": "MDQ6VXNlcjEzMjE0NTMw", "avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ain-soph", "html_url": "https://github.com/ain-soph", "followers_url": "https://api.github.com/users/ain-soph/followers", "following_url": "https://api.github.com/users/ain-soph/following{/other_user}", "gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}", "starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions", "organizations_url": "https://api.github.com/users/ain-soph/orgs", "repos_url": "https://api.github.com/users/ain-soph/repos", "events_url": "https://api.github.com/users/ain-soph/events{/privacy}", "received_events_url": "https://api.github.com/users/ain-soph/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you let us know which version of transformers you are using? I just tried this on the main branch and it seems to work fine.", "@sgugger \r\n\r\n```\r\n$ pip freeze | grep transformers\r\ntransformers @ git+https://github.com/huggingface/transformers.git@b08f41e62a41632195cb986fcc41d428a5bf1d56\r\n```\r\n\r\nError Log for `token`\r\n```\r\n>>> transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf', token='β– β– β– β– β– β– ')\r\nTraceback (most recent call last):\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 259, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/utils/hub.py\", line 417, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1195, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1541, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 291, in hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. 
(Request ID: Root=β– β– β– β– β– β– )\r\n\r\nRepository Not Found for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py\", line 461, in from_pretrained\r\n config, kwargs = AutoConfig.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py\", line 983, in from_pretrained\r\n config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/configuration_utils.py\", line 617, in get_config_dict\r\n config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/configuration_utils.py\", line 672, in _get_config_dict\r\n resolved_config_file = cached_file(\r\n ^^^^^^^^^^^^\r\n File \"/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/utils/hub.py\", line 433, in cached_file\r\n raise EnvironmentError(\r\nOSError: meta-llama/Llama-2-7b-chat-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n```\r\n\r\nDeprecation Log for `use_auth_token`\r\n```\r\n>>> transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',use_auth_token='β– β– β– β– β– β– ')\r\n[2023-07-24 15:51:53,994] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:2197: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.\r\n warnings.warn(\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:01<00:00, 1.04it/s]\r\nLlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): Embedding(32000, 4096, padding_idx=0)\r\n (layers): ModuleList(\r\n (0-31): 32 x LlamaDecoderLayer(\r\n (self_attn): LlamaAttention(\r\n (q_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (k_proj): Linear(in_features=4096, out_features=4096, 
bias=False)\r\n (v_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (rotary_emb): LlamaRotaryEmbedding()\r\n )\r\n (mlp): LlamaMLP(\r\n (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\r\n (act_fn): SiLUActivation()\r\n )\r\n (input_layernorm): LlamaRMSNorm()\r\n (post_attention_layernorm): LlamaRMSNorm()\r\n )\r\n )\r\n (norm): LlamaRMSNorm()\r\n )\r\n (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\r\n)\r\n```", "I can reproduce (in some way): the line\r\n\r\n```bash\r\n> /transformers/src/transformers/utils/hub.py(418)cached_file()\r\n-> resolved_file = hf_hub_download(\r\n```\r\ndoesn't get the token when passing `token` to auto model's `from_pretrained`.\r\n\r\n@sgugger I can take a look if you are ok with this.", "By all means, thanks!", "@ain-soph \r\n\r\nA fix #25083 is merged into `main` branch πŸ€— \r\n\r\nThank you for reporting again." ]
1,689
1,690
1,690
NONE
null
I see `use_auth_token` is already deprecated and will be replaced by `token`, https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/modeling_utils.py#L2196-L2204 But `AutoModel` doesn't accept the new argument `token` yet, especially in https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/configuration_utils.py#L631-L638 Directly calling `LlamaForCausalLM` instead of `AutoModelForCausalLM` is a temporary workaround to get rid of the deprecation warning. ### Reproduction ```python3 transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',use_auth_token='XXX') transformers.LlamaForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',token='XXX') ``` ### Expected behavior `AutoModel` should support the new `token` argument.
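A short sketch of the temporary workaround described above, assuming transformers 4.31: the concrete model class already accepts the new `token` argument, while the `Auto*` classes only honor the deprecated `use_auth_token` until the fix in #25083 lands. The token value below is a placeholder.

```python
from transformers import AutoModelForCausalLM, LlamaForCausalLM

# Works without the deprecation warning: the concrete class accepts `token`.
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", token="hf_xxx"  # placeholder token
)

# The Auto* path still needs the deprecated argument until the fix is released.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", use_auth_token="hf_xxx"
)
```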
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25008/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25008/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25007/comments
https://api.github.com/repos/huggingface/transformers/issues/25007/events
https://github.com/huggingface/transformers/issues/25007
1,816,501,106
I_kwDOCUB6oc5sRZty
25,007
[ROCM] GFX906 GPU doesn't work when a GFX900 GPU is also in the system
{ "login": "IMbackK", "id": 13803414, "node_id": "MDQ6VXNlcjEzODAzNDE0", "avatar_url": "https://avatars.githubusercontent.com/u/13803414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMbackK", "html_url": "https://github.com/IMbackK", "followers_url": "https://api.github.com/users/IMbackK/followers", "following_url": "https://api.github.com/users/IMbackK/following{/other_user}", "gists_url": "https://api.github.com/users/IMbackK/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMbackK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMbackK/subscriptions", "organizations_url": "https://api.github.com/users/IMbackK/orgs", "repos_url": "https://api.github.com/users/IMbackK/repos", "events_url": "https://api.github.com/users/IMbackK/events{/privacy}", "received_events_url": "https://api.github.com/users/IMbackK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This seems more of an issue for PyTorch and AMD? We only use those libraries in Transformers.", "this is correct, this issue has since been tracked down here https://github.com/ROCmSoftwarePlatform/rocBLAS/issues/1346#issuecomment-1646942417\r\n\r\ni will keep the bug here open untill its resolved so that any one else expieranceing the same issue will be redirected the underlying issue.", "The underlying issue is still not resolved, amd is still investigateing, thus this should stay open", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue has been fixed by amd and will be in rocm 5.7.1 or 6.0 " ]
1,689
1,697
1,697
NONE
null
### System Info System: * ROCM 5.6 * torch-2.1.0.dev20230721+rocm5.6 * GFX900 GPU (MI25) (HIP device 2) * GFX906 GPU (MI50) (HIP device 1) * GFX1030 GPU (rx6800xt) (HIP device 0) * transformers @b257c46a075419c09e5ce5c5aa39bc346ecdb9a5 * Linux 6.4.3 with AMDGPU p2p activated companion bug against rocm: https://github.com/RadeonOpenCompute/ROCm/issues/2328 ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Have GFX900 and GFX906 gpu in system run the following script: ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch import argparse if __name__ == "__main__": parser = argparse.ArgumentParser("Transformers llm testing script") parser.add_argument('--tokenizer', '-t', help="tokenizer to use") parser.add_argument('--model', '-m', required=True, help="model to use") parser.add_argument('--device', '-d', default="cpu", help="device to use") parser.add_argument('--prompt', '-p', default="Today was a long day ", help="the promt to generate from") args = parser.parse_args() if args.device != 'cpu': dtype = torch.bfloat16 else: dtype = torch.float32 if args.tokenizer is None: tokenizer = AutoTokenizer.from_pretrained(args.model, padding_side='left') else: tokenizer = AutoTokenizer.from_pretrained(args.tokenizer, padding_side='left') model = AutoModelForCausalLM.from_pretrained(args.model, low_cpu_mem_usage=True, torch_dtype=dtype).to(args.device) model.eval() input_ids = tokenizer(args.prompt, return_tensors="pt").input_ids.to(args.device) attention_mask = torch.ones(input_ids.shape, device=args.device, requires_grad=False) outputs = model.generate(input_ids, attention_mask=attention_mask, do_sample=True, temperature=1) response_decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True) response = response_decoded[0] print(response) ``` I used bloom-6b for the model, however this dosent matter. If device is set to the GFX1030 gpu everything works. If the deivce is set to the GFX900 GPU everything works, if the deivce is set to the GFX906 the script fails with: ``` Traceback (most recent call last): File "/home/philipp/machine-lerning/Transformersplayground/janachat/test-simple.py", line 29, in <module> outputs = model.generate(input_ids, attention_mask=attention_mask, do_sample=True, temperature=1) File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1563, in generate return self.sample( File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 2665, in sample next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0) RuntimeError: CUDA driver error: 303 ``` running with export AMD_LOG_LEVEL=8 reveals that rocm appears to try to launch a GFX900 kernel on GFX906: ``` :1:devprogram.cpp :1873: 1265234165 us: 21877: [tid:0x7f3bf549b740] Error: The program ISA amdgcn-amd-amdhsa--gfx900:xnack- is not compatible with the device ISA amdgcn-amd-amdhsa--gfx906:sramecc+:xnack-Error: create kernel metadata map using COMgr Error: Cannot Find Global Var Sizes Error: Cannot create kernels. 
``` Indeed, removing the GFX900 GPU from the system makes the GFX906 work. ### Expected behavior The GFX906 GPU should work in all instances.
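One possible interim workaround, untested here and assuming the ROCm runtime honors `HIP_VISIBLE_DEVICES`: hide the gfx900 card from the process so that only compatible ISAs are visible. This only sidesteps the problem; the kernel-ISA mismatch itself is tracked in the rocBLAS issue linked in the comments.

```python
# Hypothetical workaround: expose only the gfx1030 (device 0) and gfx906 (device 1)
# GPUs from the setup above, hiding the gfx900 (device 2). The variable must be set
# before torch initializes the HIP runtime.
import os
os.environ["HIP_VISIBLE_DEVICES"] = "0,1"

import torch
print(torch.cuda.device_count())  # should now report 2 devices
```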
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25007/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25006/comments
https://api.github.com/repos/huggingface/transformers/issues/25006/events
https://github.com/huggingface/transformers/issues/25006
1,816,377,402
I_kwDOCUB6oc5sQ7g6
25,006
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
{ "login": "lcoandrade", "id": 8769816, "node_id": "MDQ6VXNlcjg3Njk4MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8769816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lcoandrade", "html_url": "https://github.com/lcoandrade", "followers_url": "https://api.github.com/users/lcoandrade/followers", "following_url": "https://api.github.com/users/lcoandrade/following{/other_user}", "gists_url": "https://api.github.com/users/lcoandrade/gists{/gist_id}", "starred_url": "https://api.github.com/users/lcoandrade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lcoandrade/subscriptions", "organizations_url": "https://api.github.com/users/lcoandrade/orgs", "repos_url": "https://api.github.com/users/lcoandrade/repos", "events_url": "https://api.github.com/users/lcoandrade/events{/privacy}", "received_events_url": "https://api.github.com/users/lcoandrade/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "Even setting the model to train model with:\r\n`self.bert.train()`\r\n\r\nI get the same error.", "Hey πŸ‘‹πŸ» \r\nWe need a minimal reproducing script in order to help: you are using an external library `pytorch lightning` and I have no idea what is happening inside. `transformers` also has a trainer class, we can help if a bug is related to it, but in this case I cannot ping anyone. ", "Hi there!\r\nIs it possible to check my Kaggle notebook ([https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch](https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch))?\r\n\r\nThe notebook has a [dataset](https://www.kaggle.com/datasets/rmisra/news-headlines-dataset-for-sarcasm-detection) with headlines and a tag if it is sarcastic or not (1 or 0). I'm creating a pandas dat frame and creating a dataset with this dataset encoded.\r\n\r\nThe dataset:\r\n```\r\nclass SarcasticHeadlineDataset(Dataset):\r\n\r\n def __init__(\r\n self, \r\n data: pd.DataFrame, \r\n tokenizer: BertTokenizer, \r\n max_token_len: int,\r\n ):\r\n self.tokenizer = tokenizer\r\n self.data = data\r\n self.max_token_len = max_token_len\r\n \r\n def __len__(self):\r\n return len(self.data)\r\n\r\n def __getitem__(self, index: int):\r\n data_row = self.data.iloc[index]\r\n\r\n headline = data_row.headline\r\n label = data_row.is_sarcastic\r\n\r\n encoding = self.tokenizer.encode_plus(\r\n headline,\r\n padding='max_length',\r\n max_length=self.max_token_len,\r\n return_tensors='pt',\r\n )\r\n\r\n return dict(\r\n headline=headline,\r\n input_ids=encoding[\"input_ids\"].flatten(),\r\n attention_mask=encoding[\"attention_mask\"].flatten(),\r\n label=torch.tensor(label, dtype=torch.float)\r\n )\r\n```\r\n\r\nThe datamodule:\r\n```\r\nclass SarcasticHeadlineDataModule(pl.LightningDataModule):\r\n\r\n def __init__(\r\n self, \r\n train_df: pd.DataFrame, \r\n test_df: pd.DataFrame, \r\n tokenizer: BertTokenizer, \r\n batch_size: int = 8, \r\n max_token_len: int = 128, \r\n num_workers: int = 4\r\n ):\r\n super().__init__()\r\n self.batch_size = batch_size\r\n self.train_df = train_df\r\n self.test_df = test_df\r\n self.tokenizer = tokenizer\r\n self.max_token_len = max_token_len\r\n self.num_workers = num_workers\r\n\r\n def setup(self, stage=None):\r\n self.train_dataset = SarcasticHeadlineDataset(\r\n self.train_df,\r\n self.tokenizer,\r\n self.max_token_len\r\n )\r\n\r\n self.test_dataset = SarcasticHeadlineDataset(\r\n self.test_df,\r\n self.tokenizer,\r\n self.max_token_len\r\n )\r\n\r\n def train_dataloader(self):\r\n return DataLoader(\r\n self.train_dataset,\r\n batch_size=self.batch_size,\r\n shuffle=True,\r\n num_workers=self.num_workers,\r\n pin_memory=True,\r\n persistent_workers=True,\r\n )\r\n\r\n def val_dataloader(self):\r\n return DataLoader(\r\n self.test_dataset,\r\n batch_size=self.batch_size,\r\n num_workers=self.num_workers,\r\n pin_memory=True,\r\n persistent_workers=True,\r\n )\r\n\r\n def test_dataloader(self):\r\n return DataLoader(\r\n self.test_dataset,\r\n batch_size=self.batch_size,\r\n num_workers=self.num_workers,\r\n pin_memory=True,\r\n persistent_workers=True,\r\n )\r\n```\r\n\r\nSome parameters for the trainer:\r\n```\r\nchekpoint_dir = os.path.join(OUTPUT_DIR, 'checkpoints')\r\ncheckpoint_callback = ModelCheckpoint(\r\n dirpath=chekpoint_dir,\r\n filename=\"best-checkpoint\",\r\n save_top_k=1,\r\n verbose=True,\r\n monitor=\"val_loss\",\r\n mode=\"min\"\r\n)\r\n\r\nearly_stopping_callback = EarlyStopping(monitor='val_loss', patience=2)\r\n\r\nlogger = CSVLogger(OUTPUT_DIR, 
name='lightning_logs')\r\n```\r\n\r\nThe trainer:\r\n```\r\ntrainer = pl.Trainer(\r\n accelerator='auto',\r\n devices='auto',\r\n strategy='auto',\r\n callbacks=[checkpoint_callback, early_stopping_callback],\r\n max_epochs=N_EPOCHS,\r\n logger=logger,\r\n)\r\n```\r\n\r\nThe LightningModule:\r\n```\r\nclass SarcasmTagger(pl.LightningModule):\r\n\r\n def __init__(\r\n self, \r\n model_name: str, \r\n n_classes: int, \r\n n_training_steps=None, \r\n n_warmup_steps=None\r\n ):\r\n super().__init__()\r\n \r\n self.save_hyperparameters()\r\n \r\n self.bert = BertModel.from_pretrained(model_name, return_dict=True)\r\n self.bert.train()\r\n self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes)\r\n self.n_training_steps = n_training_steps\r\n self.n_warmup_steps = n_warmup_steps\r\n\r\n def forward(self, input_ids, attention_mask):\r\n outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)\r\n #print(outputs)\r\n logits = self.classifier(outputs.pooler_output)\r\n return logits\r\n \r\n def shared_step(self, batch, batch_idx):\r\n input_ids = batch[\"input_ids\"]\r\n attention_mask = batch[\"attention_mask\"]\r\n label = batch[\"label\"].view(-1, 1)\r\n logits = self(input_ids=input_ids, attention_mask=attention_mask)\r\n loss = nn.functional.cross_entropy(logits, label)\r\n return logits, loss, label\r\n \r\n\r\n def training_step(self, batch, batch_idx):\r\n logits, loss, label = self.shared_step(batch, batch_idx)\r\n self.log(\"train_loss\", loss, prog_bar=True, logger=True)\r\n return {\"loss\": loss, \"predictions\": logits, \"label\": label}\r\n\r\n def validation_step(self, batch, batch_idx):\r\n logits, loss, label = self.shared_step(batch, batch_idx)\r\n self.log(\"val_loss\", loss, prog_bar=True, logger=True)\r\n return loss\r\n\r\n def test_step(self, batch, batch_idx):\r\n logits, loss, label = self.shared_step(batch, batch_idx)\r\n self.log(\"test_loss\", loss, prog_bar=True, logger=True)\r\n return loss\r\n\r\n def configure_optimizers(self):\r\n optimizer = AdamW(self.parameters(), lr=2e-5)\r\n\r\n scheduler = get_linear_schedule_with_warmup(\r\n optimizer,\r\n num_warmup_steps=self.n_warmup_steps,\r\n num_training_steps=self.n_training_steps\r\n )\r\n\r\n return dict(\r\n optimizer=optimizer,\r\n lr_scheduler=dict(\r\n scheduler=scheduler,\r\n interval='step')\r\n )\r\n```\r\n\r\nAnd the actual training part:\r\n```\r\ncheckpoint_file = os.path.join(chekpoint_dir, 'best-checkpoint.ckpt') \r\n\r\nif os.path.isfile(checkpoint_file):\r\n print('Resuming training from previous checkpoint...')\r\n trainer.fit(\r\n sarcasm_tagger, \r\n datamodule=data_module,\r\n ckpt_path=checkpoint_file\r\n )\r\nelse:\r\n print('Starting training from scratch...')\r\n trainer.fit(\r\n sarcasm_tagger, \r\n datamodule=data_module\r\n )\r\n```\r\n\r\nThe problem doesn't seem to be in the trainer, I think it is related to the model as this error is related to the backpropagation tensors don't having a gradient function set. But as I'm not calling detach anywhere in the code and I'm setting the model to training mode, I'm lost...\r\n\r\nThanks in advance!", "Again, I am sorry but I won't have time to debug your kaggle notbook no. \r\nA minimal reproducing script is supposed to be ~10 lines of code where you show the issue with the model. 
\r\nHere is one, showing that the loss is properly back-propagating:\r\n```python \r\n>>> from transformers import BertForSequenceClassification\r\n>>> model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", return_dict=True)\r\n>>> model(torch.ones(2,34).long(), labels = torch.ones(2,2)).loss\r\ntensor(0.8236, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)\r\n```\r\nIn this case, gradients properly flow and the output is the result of the `multi_label_classification` loss computation (BCE). \r\nI understand your frustration, but I have no idea what is happening behind the `class SarcasmTagger(pl.LightningModule):` inheritance, so I cannot really help you on this part.\r\n\r\nIf you are using a model + classifier:\r\n```python \r\n>>> from transformers import BertModel\r\n>>> model = BertModel.from_pretrained(\"bert-base-uncased\", return_dict=True)\r\n>>> pooler_out = model(torch.ones(2,34).long()).pooler_output\r\ntensor([[-0.0807, -0.3833, -0.7920, ..., -0.9157, -0.5491, -0.0386],\r\n [-0.0807, -0.3833, -0.7920, ..., -0.9157, -0.5491, -0.0386]],\r\n grad_fn=<TanhBackward0>)\r\n>>> classifier = torch.nn.Linear(model.bert.config.hidden_size, 2)\r\n>>> classifier(pooler_out)\r\ntensor([[-0.1094, 0.2125],\r\n [-0.1094, 0.2125]], grad_fn=<AddmmBackward0>)\r\n```\r\nIn these simple snippets, grad function is kept. \r\n\r\nThese kind of question should be asked on [the forum](https://discuss.huggingface.co/), as from what I see there is no bug in transformers.\r\n \r\n", "Thanks for your help!", "I have a similar issue.\r\n\r\nWith `pytorch-lightning==1.9.4` and `transformers==4.26.1` the code runs fine (and has done with previous versions of both libraries for months/years - yes there have been code changes in that time but the core has been rather stable).\r\n\r\n(Also just tested with `transformers==4.29.2` and works fine)\r\n\r\nHowever, when I change nothing in the code and change no other dependencies (so `pytorch-lightning==1.9.4` and all others the same) except to upgrade to `transformers==4.30.2` the code fails with the error message:\r\n```\r\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\r\n```\r\n\r\nThe problem is that my codebase is very large and it will take me a while to generate a minimal reproducing script. I will try to put this together, but in the time it takes me to do this, perhaps someone else will have a simpler solution (considering the information I am sharing) and/or a simpler minimal reproducing script.\r\n\r\nPerhaps also @lcoandrade you could try your script with `transformers==4.26.1` or `transformers==4.29.2` and see if that works for you?", "Thanks a lot for the additional information! If you can isolate which version of transformers made it fail for your, in that case we can look into this as a regression! Would be very helpful if @lcoandrade can do this! 
\r\n", "some more details:\r\n\r\nThese combinations work:\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.26.1`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.27.4`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.28.1`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.29.2`\r\n\r\nThese combinations don't:\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.0`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.2`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.31.0`\r\n\r\nSo the regression must have been introduced in `transformers==4.30.0`?\r\n\r\nI'll try to see if I can get a minimal reproducing script together.", "Thanks for the help, @Alex-ley-scrub !\r\nI changed my install packages part to:\r\n```\r\n!pip install torch==2.0.0+cu117\r\n!pip install pytorch-lightning==1.9.4\r\n!pip install accelerate==0.21.0\r\n!pip install tokenizers==0.13.3\r\n!pip install transformers==4.26.1\r\n```\r\n\r\nBut the error was still popping up. So, I thought the error could be related to the optimizer used. My optimizer was this one:\r\n```\r\ndef configure_optimizers(self):\r\n optimizer = AdamW(self.parameters(), lr=2e-5)\r\n\r\n scheduler = get_linear_schedule_with_warmup(\r\n optimizer,\r\n num_warmup_steps=self.n_warmup_steps,\r\n num_training_steps=self.n_training_steps\r\n )\r\n\r\n return dict(\r\n optimizer=optimizer,\r\n lr_scheduler=dict(\r\n scheduler=scheduler,\r\n interval='step')\r\n )\r\n```\r\n\r\nWhen I changed my method to use a simple Adam optimizer:\r\n```\r\ndef configure_optimizers(self):\r\n optimizer = torch.optim.Adam(self.parameters(), lr=2e-5)\r\n return [optimizer]\r\n```\r\n\r\nIt worked!\r\n\r\nSo, the problem is in the AdamW with a scheduler. Reversing the install packages to just:\r\n```\r\n!pip install -q transformers\r\n```\r\n\r\nMakes the training work.\r\n\r\nAs the AdamW is deprecated, I think it is a good idea change the code to use the torch.optim.Adam for instance.\r\n\r\nShould this be considered a bug in AdamW?\r\n\r\nWhat do you think, @ArthurZucker and @Alex-ley-scrub?\r\n\r\nThanks again!", "Most probably yes! Thanks for investigating, I'm sure this will help others! 
", "some more details after I swapped this line of code:\r\n```\r\nfrom transformers import AdamW\r\n```\r\nwith this line:\r\n```\r\nfrom torch.optim import AdamW\r\n```\r\n\r\nnow all the versions of transformers I tested earlier work on my existing codebase:\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.26.1`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.27.4`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.28.1`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.29.2`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.0`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.2`\r\n- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.31.0`\r\n\r\ntherefore, there is pretty strong evidence that something in `transformers.AdamW` in `transformers==4.30.0` caused a regression?\r\n\r\nthanks a lot @lcoandrade for that! πŸ™Œ I can now upgrade our transformers dependency to the latest!", "Pinging @muellerzr and @pacman100 as they are more familiar than me with the recent changes!", "Not entirely sure this is worth looking into too much, given @stas00 point here: https://github.com/huggingface/transformers/pull/23417#issuecomment-1550506298\r\n\r\n> This is a very old and deprecated implementation since it doesn't even follow the AdamW algorithm exactly. One should use torch.optim.AdamW instead, which also has a fused version since pt-2.0.0 which is almost as fast as apex's fused AdamW. So really you shouldn't be using this version anyway.\r\n\r\n> The only reason it was kept is for BC for those who rely on exact results remaining exact after new transformers versions are released, otherwise we would have just replaced it with torch.optim.AdamW in the first place.\r\n\r\nSo yes, AdamW is slated for deprecation and you should use `torch.optim.AdamW`. @sgugger do we know when that is going to be? Or should we look into this more.\r\n\r\nThere wasn't anything explicit in the change to AdamW since v0.29.0, so it'll take some digging to find the exact commit certainly.", "If our AdamW is not working properly, all the more reasons to switch the default to the PyTorch one. Users will still be able to switch back if they do not like the change.", "Hi all, the default has been changed on main now and will populate on the next release. Install with `pip install git+https://github.com/huggingface/transformers` to use it OOTB!", "Same problem here, as suggested, it was resolved with the switch of optimizers", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.120+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cpu (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to make a Sarcasm detector with Lightning in this Kaggle [notebook](https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch). When I start the training, I get this error: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn` This is my LightningModule: ``` class SarcasmTagger(pl.LightningModule): def __init__( self, model_name: str, n_classes: int, n_training_steps=None, n_warmup_steps=None ): super().__init__() self.bert = BertModel.from_pretrained(model_name, return_dict=True) #self.bert = BertForSequenceClassification.from_pretrained(model_name, return_dict=True) self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes) self.n_training_steps = n_training_steps self.n_warmup_steps = n_warmup_steps def forward(self, input_ids, attention_mask): outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask) #print(outputs) logits = self.classifier(outputs.pooler_output) return logits def shared_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] label = batch["label"].view(-1, 1) logits = self(input_ids=input_ids, attention_mask=attention_mask) loss = nn.functional.cross_entropy(logits, label) return logits, loss, label def training_step(self, batch, batch_idx): logits, loss, label = self.shared_step(batch, batch_idx) self.log("train_loss", loss, prog_bar=True, logger=True) return {"loss": loss, "predictions": logits, "label": label} def validation_step(self, batch, batch_idx): logits, loss, label = self.shared_step(batch, batch_idx) self.log("val_loss", loss, prog_bar=True, logger=True) return loss def test_step(self, batch, batch_idx): logits, loss, label = self.shared_step(batch, batch_idx) self.log("test_loss", loss, prog_bar=True, logger=True) return loss def configure_optimizers(self): optimizer = AdamW(self.parameters(), lr=2e-5) scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=self.n_warmup_steps, num_training_steps=self.n_training_steps ) return dict( optimizer=optimizer, lr_scheduler=dict( scheduler=scheduler, interval='step') ) ``` What is the problem here? I'm lost. Thanks! ### Expected behavior Execute the training without errors.
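The thread above converges on replacing the deprecated `transformers.AdamW` with `torch.optim.AdamW`. Below is a sketch of the adjusted `configure_optimizers` from the module in this report, keeping the warmup scheduler unchanged.

```python
from torch.optim import AdamW  # instead of `from transformers import AdamW`
from transformers import get_linear_schedule_with_warmup

def configure_optimizers(self):
    optimizer = AdamW(self.parameters(), lr=2e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=self.n_warmup_steps,
        num_training_steps=self.n_training_steps,
    )
    return dict(
        optimizer=optimizer,
        lr_scheduler=dict(scheduler=scheduler, interval="step"),
    )
```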
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25006/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25005/comments
https://api.github.com/repos/huggingface/transformers/issues/25005/events
https://github.com/huggingface/transformers/pull/25005
1,816,351,369
PR_kwDOCUB6oc5WII8r
25,005
Make more test models smaller
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes we could go even smaller if we wanted to, but it's hard for big encoer/decoder models with backbones." ]
1,689
1,690
1,690
COLLABORATOR
null
# What does this PR do? This PR continues on the work of #24824 by fixing the models used in more common tests. This is big enough to be reviewed, I'll do the last ones in a separate PR :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25005/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25005", "html_url": "https://github.com/huggingface/transformers/pull/25005", "diff_url": "https://github.com/huggingface/transformers/pull/25005.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25005.patch", "merged_at": 1690207728000 }
https://api.github.com/repos/huggingface/transformers/issues/25004
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25004/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25004/comments
https://api.github.com/repos/huggingface/transformers/issues/25004/events
https://github.com/huggingface/transformers/pull/25004
1,816,273,210
PR_kwDOCUB6oc5WH4DP
25,004
Move template doc file to md
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? This PR fixes the `add-new-model` command, which broke when we moved all the doc files from MDX to MD (not sure why the test missed it, since we deactivated it afterwards, if I recall correctly). Fixes #25003
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25004/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25004", "html_url": "https://github.com/huggingface/transformers/pull/25004", "diff_url": "https://github.com/huggingface/transformers/pull/25004.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25004.patch", "merged_at": 1689972584000 }
https://api.github.com/repos/huggingface/transformers/issues/25003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25003/comments
https://api.github.com/repos/huggingface/transformers/issues/25003/events
https://github.com/huggingface/transformers/issues/25003
1,816,245,674
I_kwDOCUB6oc5sQbWq
25,003
[cookiecutter] Fails to create new model template
{ "login": "zekun-li", "id": 5383572, "node_id": "MDQ6VXNlcjUzODM1NzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5383572?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zekun-li", "html_url": "https://github.com/zekun-li", "followers_url": "https://api.github.com/users/zekun-li/followers", "following_url": "https://api.github.com/users/zekun-li/following{/other_user}", "gists_url": "https://api.github.com/users/zekun-li/gists{/gist_id}", "starred_url": "https://api.github.com/users/zekun-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zekun-li/subscriptions", "organizations_url": "https://api.github.com/users/zekun-li/orgs", "repos_url": "https://api.github.com/users/zekun-li/repos", "events_url": "https://api.github.com/users/zekun-li/events{/privacy}", "received_events_url": "https://api.github.com/users/zekun-li/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please note that this command is deprecated and it's better to use `add-new-model-like`. Will fix this though.", "> Please note that this command is deprecated and it's better to use `add-new-model-like`. Will fix this though.\r\n\r\nGood to know, thanks!" ]
1,689
1,689
1,689
NONE
null
### System Info 2023-07-21 18:34:34.146635: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT WARNING:tensorflow:From /home/zekun/transformers/src/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.32.0.dev0 - Platform: Linux-4.14.290-217.505.amzn2.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.13.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm trying to create a new model called `GeoLM` using the `cookiecutter` utility following the tutorial here: https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model However, the `transformers-cli add-new-model` was failed at one step, reporting a missing file error for `'cookiecutter-template-GeoLM/geolm.md'`. I can find a file named `geolm.mdx` under the same dir but not `geolm.md`. The previous step `pip install -e ".[quality]"` was completed without any error. The full running log is provided below. ---------------------------------------------------------------------------------------------------------------------------- ``` (huggingface) [zekun@ip-172-31-9-231 transformers]$ transformers-cli add-new-model 2023-07-21 18:33:20.317785: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT /home/zekun/transformers/src/transformers/commands/add_new_model.py:58: UserWarning: The command `transformers-cli add-new-model` is deprecated and will be removed in v5 of Transformers. It is not actively maintained anymore, so might give a result that won't pass all tests and quality checks, you should use `transformers-cli add-new-model-like` instead. 
warnings.warn( modelname [BrandNewBERT]: GeoLM uppercase_modelname [BRAND_NEW_BERT]: GEOLM lowercase_modelname [brand_new_bert]: geolm camelcase_modelname [BrandNewBert]: GeoLM authors [The HuggingFace Team]: Zekun Li checkpoint_identifier [brand-new-bert-base-cased]: zekun-li/geolm-base-cased Select tokenizer_type: 1 - Based on BERT 2 - Based on BART 3 - Standalone Choose from 1, 2, 3 [1]: 1 Select generate_tensorflow_pytorch_and_flax: 1 - PyTorch, TensorFlow and Flax 2 - PyTorch & TensorFlow 3 - PyTorch & Flax 4 - TensorFlow & Flax 5 - PyTorch 6 - TensorFlow 7 - Flax Choose from 1, 2, 3, 4, 5, 6, 7 [1]: 5 Select is_encoder_decoder_model: 1 - True 2 - False Choose from 1, 2 [1]: 2 Traceback (most recent call last): File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 791, in move os.rename(src, real_dst) FileNotFoundError: [Errno 2] No such file or directory: 'cookiecutter-template-GeoLM/geolm.md' -> '/home/zekun/transformers/docs/source/en/model_doc/geolm.md' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/zekun/.conda/envs/huggingface/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/home/zekun/transformers/src/transformers/commands/transformers_cli.py", line 55, in main service.run() File "/home/zekun/transformers/src/transformers/commands/add_new_model.py", line 185, in run shutil.move( File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 811, in move copy_function(src, real_dst) File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 435, in copy2 copyfile(src, dst, follow_symlinks=follow_symlinks) File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 264, in copyfile with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst: FileNotFoundError: [Errno 2] No such file or directory: 'cookiecutter-template-GeoLM/geolm.md' ``` ### Expected behavior I should see the following files generated below: ``` docs/source/model_doc/<model_name>.md src/transformers/models/<model_name>/configuration_<model_name>.py src/transformers/models/<model_name>/modeling_<model_name>.py src/transformers/models/<model_name>/modeling_tf_<model_name>.py src/transformers/models/<model_name>/tokenization_<model_name>.py tests/test_modeling_<model_name>.py tests/test_modeling_tf_<model_name>.py ``` I can see some of these generated, missing the `*_tf_*` ones (expected) and these: ``` docs/source/model_doc/<model_name>.md src/transformers/models/<model_name>/tokenization_<model_name>.py tests/test_modeling_<model_name>.py ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25003/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25002/comments
https://api.github.com/repos/huggingface/transformers/issues/25002/events
https://github.com/huggingface/transformers/pull/25002
1,816,195,880
PR_kwDOCUB6oc5WHnMN
25,002
Fix graph break for Segformer
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Closing as this is resolved on torch nightly" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Small change in the forward pass of Segformer which reduces the number of graph breaks in the compiled model from 5 to 0. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
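The PR above is about removing graph breaks when compiling Segformer. A minimal sketch of how one might verify that a compiled model no longer breaks the graph, assuming the public `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint and a 512x512 dummy input (neither is taken from the PR itself):

```python
# Sketch: check for graph breaks by compiling with fullgraph=True, which makes
# torch.compile raise an error instead of silently splitting the graph.
import torch
from transformers import SegformerForSemanticSegmentation

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/segformer-b0-finetuned-ade-512-512"
)
model.eval()

compiled = torch.compile(model, fullgraph=True)

pixel_values = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    outputs = compiled(pixel_values=pixel_values)
print(outputs.logits.shape)
```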
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25002/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25002", "html_url": "https://github.com/huggingface/transformers/pull/25002", "diff_url": "https://github.com/huggingface/transformers/pull/25002.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25002.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25001
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25001/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25001/comments
https://api.github.com/repos/huggingface/transformers/issues/25001/events
https://github.com/huggingface/transformers/pull/25001
1,816,145,902
PR_kwDOCUB6oc5WHcf3
25,001
[WIP]Add ViTPose to Transformers
{ "login": "shauray8", "id": 39147312, "node_id": "MDQ6VXNlcjM5MTQ3MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shauray8", "html_url": "https://github.com/shauray8", "followers_url": "https://api.github.com/users/shauray8/followers", "following_url": "https://api.github.com/users/shauray8/following{/other_user}", "gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}", "starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shauray8/subscriptions", "organizations_url": "https://api.github.com/users/shauray8/orgs", "repos_url": "https://api.github.com/users/shauray8/repos", "events_url": "https://api.github.com/users/shauray8/events{/privacy}", "received_events_url": "https://api.github.com/users/shauray8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts weights for ViTPose are in PyTorch checkpoint files, should I convert them into .bin or something else?", "@shauray8 You will want to write a conversion script, which translates the names of the weights in the state dict to their equivalent in transformers. These weights will then be loaded into the transformers model. Then we save the transformers model using `model.save_pretrained(...)` which will save out the weights in the desired format (safetensors) as well as all other necessary files such as the model config. \r\n\r\nFor this model, because the encoder is ViT, the translation of these weights will follow a similar pattern to the conversion script here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_vit_timm_to_pytorch.py\r\n\r\nSimilarly, the encoder structure can be implemented directly by using `#Copied from` statements to copy the ViT architecture in the modeling file. \r\n\r\nIf you haven't seen it already, I'd also suggest looking through [this doc](https://huggingface.co/docs/transformers/add_new_model) as it covers in detail all the steps for adding a model to transformers. ", "Thank you for the information and the link to the conversion script for ViT.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25001). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@shauray8 Are you still actively working on this model addition? If you don't have the capacity to address it at the moment, we could open up to someone else in the community to take over. Let us know if you have any questions about the implementation. ", "Hi, @amyeroberts I'm sorry for not keeping you informed on this, it was a little time-consuming transferring everything from mmcv especially, I think I've done most of it correctly, But it does not work as it should, the only thing I can think of is I have not transferred the weights correctly (I might be totally wrong though), If that's the case maybe I can still find some time and fix it. ", "@shauray8 Let us know if you need help in getting it across the line. \r\n\r\nWhen you say that it's not working well - what tests are you running to confirm this? If you pass the same image to the mmcv implementation and this one, do you get the same outputs? If not, how about the same activations in the first layer? Debugging like this will help you pinpoint where the issue might be. ", "Yes, I did compare to the results from the original code, that's the only test I did, I'll try debugging it step by step as you said and let you know If I need any help, maybe I'll try doing at faster this time ", "@amyeroberts While debugging I couldn't find any non-trivial differences, opening it for the community again. " ]
1,689
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Adds ViTPose to Huggingface/Transformers Code and weights: https://github.com/ViTAE-Transformer/ViTPose Paper: https://arxiv.org/abs/2204.12484 Fixes #24915 ## Before submitting - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25001/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/25001", "html_url": "https://github.com/huggingface/transformers/pull/25001", "diff_url": "https://github.com/huggingface/transformers/pull/25001.diff", "patch_url": "https://github.com/huggingface/transformers/pull/25001.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/25000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25000/comments
https://api.github.com/repos/huggingface/transformers/issues/25000/events
https://github.com/huggingface/transformers/issues/25000
1,816,046,742
I_kwDOCUB6oc5sPqyW
25,000
beam_indices = None
{ "login": "Dongximing", "id": 35741613, "node_id": "MDQ6VXNlcjM1NzQxNjEz", "avatar_url": "https://avatars.githubusercontent.com/u/35741613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dongximing", "html_url": "https://github.com/Dongximing", "followers_url": "https://api.github.com/users/Dongximing/followers", "following_url": "https://api.github.com/users/Dongximing/following{/other_user}", "gists_url": "https://api.github.com/users/Dongximing/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dongximing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dongximing/subscriptions", "organizations_url": "https://api.github.com/users/Dongximing/orgs", "repos_url": "https://api.github.com/users/Dongximing/repos", "events_url": "https://api.github.com/users/Dongximing/events{/privacy}", "received_events_url": "https://api.github.com/users/Dongximing/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante who is gonna be more familiar with this! ", "Hey @Dongximing πŸ‘‹ \r\n\r\nThe PR linked above (#25042) should fix it :) In a nutshell, the values were not being piped all the way to the output." ]
1,689
1,690
1,690
NONE
null
### System Info Hi dear officer I find a bug, that is, if I use the 'force_words_id' parameter in generate() function and set output_score = True. then I want to get beam_indices, it will return None. but if I remove force_words_id. it will work. ` prompt = """Tell me some about Canada""" input_tokenized_info = tokenizer(prompt, return_tensors="pt") input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info[ 'attention_mask'] input_ids = input_ids.to('cuda') attention_mask = attention_mask.to('cuda') force_words = ["Canada"] force_words_ids = tokenizer(force_words, add_special_tokens=False).input_ids outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask,num_beams =4,max_new_tokens=10,\ return_dict_in_generate=True,output_scores=True) ` `print(outputs.beam_indices) tensor([[ 0, 0, 0, 0, 1, 0, 0, 3, 1, 0, -1, -1, -1, -1, -1]], device='cuda:0')` But if add 'force_words_id' `outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask,num_beams =4,max_new_tokens=10,\ return_dict_in_generate=True,output_scores=True,force_words_ids=force_words_ids) print(outputs.beam_indices) None` ### Who can help? @ArthurZucker , please give me some guidance, thanks πŸ™ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction it should return indices. ### Expected behavior it should return indices.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25000/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24999/comments
https://api.github.com/repos/huggingface/transformers/issues/24999/events
https://github.com/huggingface/transformers/issues/24999
1,816,027,426
I_kwDOCUB6oc5sPmEi
24,999
dataloading bug after upgrading to 4.31.0
{ "login": "getao", "id": 12735658, "node_id": "MDQ6VXNlcjEyNzM1NjU4", "avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/getao", "html_url": "https://github.com/getao", "followers_url": "https://api.github.com/users/getao/followers", "following_url": "https://api.github.com/users/getao/following{/other_user}", "gists_url": "https://api.github.com/users/getao/gists{/gist_id}", "starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/getao/subscriptions", "organizations_url": "https://api.github.com/users/getao/orgs", "repos_url": "https://api.github.com/users/getao/repos", "events_url": "https://api.github.com/users/getao/events{/privacy}", "received_events_url": "https://api.github.com/users/getao/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @muellerzr but we will need a full reproducer to be able to help.", "Thank you. I list my code as follows:\r\n\r\ndef train_model(model, train_dataset, eval_dataset, epochs=5, batch_size=1):\r\n training_args = TrainingArguments(\r\n output_dir=\"outputs/\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=epochs,\r\n max_steps=100000,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n eval_accumulation_steps=8,\r\n save_strategy=\"steps\",\r\n save_steps=500,\r\n evaluation_strategy=\"steps\",\r\n eval_steps=100,\r\n logging_steps=20,\r\n logging_dir=\"logs\",\r\n learning_rate=8e-5,\r\n gradient_accumulation_steps=8,\r\n fp16=True,\r\n do_train=True,\r\n )\r\n\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n )\r\n\r\n\r\n train_result = trainer.train()\r\n trainer.save_model()\r\n trainer.log_metrics(\"train\", train_result.metrics)\r\n metrics = trainer.evaluate()\r\n trainer.log_metrics(\"eval\", metrics)\r\n trainer.save_metrics(\"eval\", metrics)\r\n\r\n\r\ndef tokenize_function(examples):\r\n max_len = max_txt_len + 128\r\n output = model.tokenizer(examples[\"text\"], truncation=True, max_length=max_len, padding=False)\r\n output[\"labels\"] = [list(e) for e in output[\"input_ids\"]]\r\n return output\r\n\r\ndef main():\r\n train_file = \"train.jsonl\"\r\n eval_file = \"valid.jsonl\"\r\n dataset = load_dataset(\"json\", data_files={\"train\": train_file, \"eval\": eval_file}, streaming=True)\r\n dataset = dataset.with_format(\"torch\")\r\n train_dataset = dataset[\"train\"]\r\n eval_dataset = dataset[\"eval\"]\r\n\r\n train_dataset = train_dataset.map(tokenize_function, batched=True)\r\n eval_dataset = eval_dataset.map(tokenize_function, batched=True)\r\n train_model(model, train_dataset, eval_dataset)\r\n\r\nmain()", "How did this work in 4.29 if you are not providing a data_collator to the Trainer and not padding your texts?", "> How did this work in 4.29 if you are not providing a data_collator to the Trainer and not padding your texts?\r\n\r\nAs per_device_train_batch_size=1 in my code, it runs properly in 4.29.2 and 4.30.2 even if I didn't pad and didn't provide a data_collator.\r\n\r\nIt only fails in 4.31.0.\r\n\r\nBTW, in 4.31.0, even if I provided a data_collator, it still fails. It only works if I pre-pad all the sequences into the same length in the tokenize_function().", "Ah yes, understood. This doesn't work anymore because Accelerate will by default use `dispatch_batches=True` for iterable datasets, which builds the batch on process 0 (with a batch size 4 here since you have 4 processes) then split it to send it to each GPU.\r\n\r\n@muellerzr what we need is to surface the option `dispatch_batches=False` here.\r\n\r\nI think if you add a line `trainer.accelerator.dispatch_batches=False` it will work again @getao ", "> Ah yes, understood. This doesn't work anymore because Accelerate will by default use `dispatch_batches=True` for iterable datasets, which builds the batch on process 0 (with a batch size 4 here since you ahve 4 processes) then split it to send it to each GPU.\r\n> \r\n> @muellerzr what we need is to surface the option `dispatch_batches=False` here.\r\n> \r\n> I think if you add a line `trainer.accelerator.disaptch_batches=False` it will work again @getao\r\n\r\nOh, I see! Thank you very much for your help! ", "Thanks @getao! 
#25038 should solve this, once merged just set `args.dispatch_batches=False` and your code should run just fine" ]
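The workaround suggested in the thread above, written out as a snippet. `model` and `train_dataset` are assumed to be defined as in the reporter's script:

```python
# Workaround from the thread for streaming (iterable) datasets on 4.31.0:
# disable batch dispatching on the Trainer's Accelerator so each process
# builds its own batch instead of process 0 concatenating variable-length
# batches and splitting them.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="outputs/", per_device_train_batch_size=1)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)

trainer.accelerator.dispatch_batches = False  # set before calling train()

trainer.train()
```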
1,689
1,691
1,691
NONE
null
### System Info transformers=4.31.0 pytorch=1.13.1 ### Who can help? Hi @sgugger and @ArthurZucker When I used transformers 4.29.2 and 4.30.2 with the streaming dataset and local batch size=1, I didn't pad the text sequence and everything goes well. However, after I upgrade the transformers to 4.31.0. My previous training pipeline fails. Error messages are: File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 556, in __iter__ next_batch, next_batch_info = self._fetch_batches(main_iterator) File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 520, in _fetch_batches batch = concatenate(batches, dim=0) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in concatenate return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in <dictcomp> return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 444, in concatenate return torch.cat(data, dim=dim) RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 655 but got size 563 for tensor number 1 in the list. I find that in the following function in data_loader.py (from accelerate), the variable "batches" contain examples with different lengths, causing the error. For example, I trained my model on 4 GPUs with local batch size=1. Then, the list batches will have 4 elements (each is a batch of 1 example). But these 4 elements may have different lengths, causing the above error when concatenating. However, as my local batch size=1, there should be no need to make the samples to be in the same length. I think it is a bug introduced in 4.31.0 because in the previous transformers version (e.g., 4.29.2 and 4.30.2), the training script can run smoothly without raising the error. I look forward to your comments and suggestions. Thank you def _fetch_batches(self, iterator): batches, batch = None, None # On process 0, we gather the batch to dispatch. if self.state.process_index == 0: try: if self.split_batches: # One batch of the main iterator is dispatched and split. batch = next(iterator) else: # num_processes batches of the main iterator are concatenated then dispatched and split. # We add the batches one by one so we have the remainder available when drop_last=False. batches = [] for _ in range(self.state.num_processes): batches.append(next(iterator)) batch = concatenate(batches, dim=0) ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction dataset = load_dataset("json", data_files={"train": train_file, "eval": eval_file}, streaming=True)
 dataset = dataset.with_format("torch") 
train_dataset = dataset["train"] 
eval_dataset = dataset["eval"]

 train_dataset = train_dataset.map(tokenize_function, batched=True)
 eval_dataset = eval_dataset.map(tokenize_function, batched=True) 
train_model(model, train_dataset, eval_dataset) ### Expected behavior no error message and training smoothly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24999/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24998
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24998/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24998/comments
https://api.github.com/repos/huggingface/transformers/issues/24998/events
https://github.com/huggingface/transformers/pull/24998
1,816,021,958
PR_kwDOCUB6oc5WHBuQ
24,998
[`Llama`] remove persistent `inv_freq` tensor
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? The tensor should not be persistent: if the model is loaded from a pretrained checkpoint, the buffer will not be resized, whatever the head dimension size.
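A minimal illustration of the change the PR describes: registering the buffer with `persistent=False` keeps it out of the state dict, so checkpoints no longer pin its size. The class name and shapes follow the usual rotary-embedding pattern and are assumptions, not the exact Llama source:

```python
import torch
from torch import nn

class RotaryEmbedding(nn.Module):
    def __init__(self, dim: int, base: int = 10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # persistent=False: the tensor is recomputed from the config on load
        # instead of being restored from (and constrained by) the checkpoint.
        self.register_buffer("inv_freq", inv_freq, persistent=False)
```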
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24998/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24998", "html_url": "https://github.com/huggingface/transformers/pull/24998", "diff_url": "https://github.com/huggingface/transformers/pull/24998.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24998.patch", "merged_at": 1689955868000 }
https://api.github.com/repos/huggingface/transformers/issues/24997
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24997/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24997/comments
https://api.github.com/repos/huggingface/transformers/issues/24997/events
https://github.com/huggingface/transformers/pull/24997
1,816,006,642
PR_kwDOCUB6oc5WG-Zw
24,997
Better handling missing SYS in llama conversation tokenizer
{ "login": "ichernev", "id": 757060, "node_id": "MDQ6VXNlcjc1NzA2MA==", "avatar_url": "https://avatars.githubusercontent.com/u/757060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ichernev", "html_url": "https://github.com/ichernev", "followers_url": "https://api.github.com/users/ichernev/followers", "following_url": "https://api.github.com/users/ichernev/following{/other_user}", "gists_url": "https://api.github.com/users/ichernev/gists{/gist_id}", "starred_url": "https://api.github.com/users/ichernev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ichernev/subscriptions", "organizations_url": "https://api.github.com/users/ichernev/orgs", "repos_url": "https://api.github.com/users/ichernev/repos", "events_url": "https://api.github.com/users/ichernev/events{/privacy}", "received_events_url": "https://api.github.com/users/ichernev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey! Could you elaborate on \r\n> failed to add SYS if the conv has history without SYS? \r\n\r\nThe SYSTEM prompting should always go at the start of the whole conversation, thus we only check whether a system prompt is in the first prompt or not, because if you go through the conversational API, you should add the system prompt at the beginning. \r\n\r\nIs the usecase that you are requesting to be able to add a system prompt in the middle of the conversation ? ", "Lets look at 2 use cases:\r\n- **history (i.e more than 1 user message), no SYS in first message**: the code adds SYS to conversation, but then proceeds to use `dialogue` where SYS is not added. So model doesn't see SYS (incorrect). If the conversation is used again, then there will be SYS present (OK, I guess)\r\n- **single (user) message in conversation (no `past_user_inputs`), no SYS in it**: the code adds SYS to `dialogue`, and the SYS ends up in the model input (correct), but the message/conversation is not modified (i.e if used again there will be no sys)\r\n\r\nSo there is discrepancy between what the model sees at this iteration and what would happen in the next iteration. So I changed the code to 1) first modify the conversation object, before `dialogue` is computed and 2) modify both the `past_user_inputs` and `new_user_input`, so no case will be left unhandled.\r\n\r\nSo now the model would see SYS in both cases, and the conversation is modified (for future use) in both cases.\r\n\r\nAn alternative is to modify the `dialogue` object only (it's just an array so that's even simpler - no difference between history and no history), then the conversation object would stay without SYS, but the model will see SYS every time.", "@ArthurZucker ready for approval", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,690
1,690
CONTRIBUTOR
null
The existing code failed to add SYS to the model input if the conversation has history without SYS, even though it does modify the passed conversation to include it. Rearrange the code so that modifications to the conversation object are taken into account for token id generation.
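A rough illustration of the behaviour the PR aims for: if the first user message carries no system prompt, prepend a default one before building the token ids, so the mutated conversation and the model input agree. The `<<SYS>>` markers and default prompt below follow the common Llama-2 chat convention and should be treated as assumptions, not the PR's exact code:

```python
# Hypothetical helper: make sure the first user turn carries a system prompt
# before any token ids are generated from the conversation.
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."

def ensure_system_prompt(messages):
    """messages: list of turns, where messages[0] is the first user turn."""
    if not messages[0].startswith(B_SYS):
        messages[0] = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS + messages[0]
    return messages
```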
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24997/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24997", "html_url": "https://github.com/huggingface/transformers/pull/24997", "diff_url": "https://github.com/huggingface/transformers/pull/24997.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24997.patch", "merged_at": 1690204870000 }
https://api.github.com/repos/huggingface/transformers/issues/24996
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24996/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24996/comments
https://api.github.com/repos/huggingface/transformers/issues/24996/events
https://github.com/huggingface/transformers/issues/24996
1,816,003,165
I_kwDOCUB6oc5sPgJd
24,996
one of the variables needed for gradient computation has been modified by an inplace operation
{ "login": "levuloihust99", "id": 49064246, "node_id": "MDQ6VXNlcjQ5MDY0MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/49064246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levuloihust99", "html_url": "https://github.com/levuloihust99", "followers_url": "https://api.github.com/users/levuloihust99/followers", "following_url": "https://api.github.com/users/levuloihust99/following{/other_user}", "gists_url": "https://api.github.com/users/levuloihust99/gists{/gist_id}", "starred_url": "https://api.github.com/users/levuloihust99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/levuloihust99/subscriptions", "organizations_url": "https://api.github.com/users/levuloihust99/orgs", "repos_url": "https://api.github.com/users/levuloihust99/repos", "events_url": "https://api.github.com/users/levuloihust99/events{/privacy}", "received_events_url": "https://api.github.com/users/levuloihust99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 ", "Hi @levuloihust99 \r\ndo you face the same issue by setting:\r\n```python\r\nfind_unused_parameters=True\r\n```", "> Hi @levuloihust99 do you face the same issue by setting:\r\n> \r\n> ```python\r\n> find_unused_parameters=True\r\n> ```\r\n\r\nSetting `find_unused_parameters=True` gave me the exact same error. Additionally, in my example code, it is more performant to set `find_unused_parameters=False` since there is no unused parameters.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@levuloihust99 same problem, do you find further reason? thanks.", "The *solution* is to set `broadcast_buffers=False`\r\n\r\n```python\r\nmodel = DDP(model, broadcast_buffers=False, ...)\r\n```" ]
1,689
1,707
1,693
NONE
null
### System Info * Ubuntu 20.04 * Architecture x86_64 * 3 x Tesla P100-PCIE-12GB * Python 3.8.10 * torch==1.12.1+cu116 ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I encountered the error `one of the variables needed for gradient computation has been modified by an inplace operation...` when training my model with DistributedDataParallel (DDP). My code run smoothly when I do not use DDP. I have spent time inspecting the problem and below is the minimal code for reproducing the problem. ```python import torch from torch import nn import argparse class BertEmbeddings(nn.Module): def __init__(self, config): super().__init__() self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) def forward( self, input_ids, past_key_values_length=0 ): seq_length = input_ids.shape[1] position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] return self.position_embeddings(position_ids) def main(): parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=-1) args = parser.parse_args() local_rank = args.local_rank torch.cuda.set_device(local_rank) device = torch.device("cuda", local_rank) torch.distributed.init_process_group(backend="nccl") w = BertEmbeddings(config=argparse.Namespace(max_position_embeddings=10, hidden_size=24)) w.to(device) # setup distributed w = torch.nn.parallel.DistributedDataParallel(w, device_ids=[local_rank], output_device=local_rank, find_unused_parameters=False) input_ids = torch.tensor([[1, 2, 3]]).to(device) x = w(input_ids) y = w(input_ids) M = torch.sum(x) M.backward() if __name__ == "__main__": main() ``` Suppose this code is put in a file named `debug_distributed.py`. I run this code with the command ```shell python -m torch.distributed.launch --nproc_per_node=3 debug_distributed.py ``` , and I got the error <pre> one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 3]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). </pre> If I do not use DDP, there is no such error. 
Specifically, put the following in a file named `debug_normal.py` and run `python debug_normal.py` ```python import torch from torch import nn import argparse class BertEmbeddings(nn.Module): def __init__(self, config): super().__init__() self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) def forward( self, input_ids, past_key_values_length=0 ): seq_length = input_ids.shape[1] position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] return self.position_embeddings(position_ids) def main(): w = BertEmbeddings(config=argparse.Namespace(max_position_embeddings=10, hidden_size=24)) w.to("cuda") input_ids = torch.tensor([[1, 2, 3]]).to("cuda") x = w(input_ids) y = w(input_ids) M = torch.sum(x) M.backward() if __name__ == "__main__": main() ``` This problem prevents me from training my BertModel in distributed mode. I found that the problem lies on the line `position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]`. It seems like an "inplace operation" as the error suggests. If I change that line to `position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length].clone()`, the problem will be gone. I think this problem is much more related to PyTorch. It may be a Pytorch bug. However, the simplest workaround is to add a `.clone()` as I showed above. Currently, `transformers` of version `>=4` uses this "inplace operation" and all `>=4` versions of `transformers` will get this error. So, is there anyway to better fix the problem, so I don't need to change library (`transformers`) code? ### Expected behavior BertModel works in distributed training with DistributedDataParallel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24996/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24995
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24995/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24995/comments
https://api.github.com/repos/huggingface/transformers/issues/24995/events
https://github.com/huggingface/transformers/pull/24995
1,816,002,605
PR_kwDOCUB6oc5WG9iV
24,995
[`bnb`] Add simple check for bnb import
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? as discussed internally @sgugger let's add a GPU check inside `is_bnb_available`
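A sketch of the kind of guard the PR description mentions; the real implementation in `transformers.utils.import_utils` may differ:

```python
# bitsandbytes kernels require a CUDA device, so treat the library as
# unavailable when no GPU is visible, even if the package is installed.
import importlib.util

import torch

def is_bnb_available() -> bool:
    return importlib.util.find_spec("bitsandbytes") is not None and torch.cuda.is_available()
```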
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24995", "html_url": "https://github.com/huggingface/transformers/pull/24995", "diff_url": "https://github.com/huggingface/transformers/pull/24995.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24995.patch", "merged_at": 1689954653000 }
https://api.github.com/repos/huggingface/transformers/issues/24994
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24994/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24994/comments
https://api.github.com/repos/huggingface/transformers/issues/24994/events
https://github.com/huggingface/transformers/issues/24994
1,815,885,939
I_kwDOCUB6oc5sPDhz
24,994
Llama-2-hf non stopping token generation.
{ "login": "francescobodria", "id": 41335942, "node_id": "MDQ6VXNlcjQxMzM1OTQy", "avatar_url": "https://avatars.githubusercontent.com/u/41335942?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francescobodria", "html_url": "https://github.com/francescobodria", "followers_url": "https://api.github.com/users/francescobodria/followers", "following_url": "https://api.github.com/users/francescobodria/following{/other_user}", "gists_url": "https://api.github.com/users/francescobodria/gists{/gist_id}", "starred_url": "https://api.github.com/users/francescobodria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francescobodria/subscriptions", "organizations_url": "https://api.github.com/users/francescobodria/orgs", "repos_url": "https://api.github.com/users/francescobodria/repos", "events_url": "https://api.github.com/users/francescobodria/events{/privacy}", "received_events_url": "https://api.github.com/users/francescobodria/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante and @ArthurZucker ", "Hey! This is expected, the llama model kind of rarely generates the `eos_token`. It was the same with Llama 1, and if you run your script with the original llama, you will get the same output:\r\n```python\r\n# Copyright (c) Meta Platforms, Inc. and affiliates.\r\n# This software may be used and distributed according to the terms of the GNU General Public License version 3.\r\n\r\nimport fire\r\n\r\nfrom llama import Llama\r\n\r\n\r\ndef main(\r\n ckpt_dir: str,\r\n tokenizer_path: str,\r\n temperature: float = 0.6,\r\n top_p: float = 0.9,\r\n max_seq_len: int = 512,\r\n max_gen_len: int = 256,\r\n max_batch_size: int = 8,\r\n):\r\n generator = Llama.build(\r\n ckpt_dir=ckpt_dir,\r\n tokenizer_path=tokenizer_path,\r\n max_seq_len=max_seq_len,\r\n max_batch_size=max_batch_size,\r\n )\r\n\r\n\r\n results = generator.text_completion(\r\n ['Hello there! How are you doing?'],\r\n max_gen_len=max_gen_len,\r\n temperature=temperature,\r\n top_p=top_p,\r\n )\r\n\r\n print('Hello there! How are you doing?')\r\n print(f\"> {results['generation']}\")\r\n print(\"\\n==================================\\n\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n fire.Fire(main)\r\n```\r\nand \r\n```bash\r\ntorchrun --nproc_per_node=1 llama/example_text_completion.py --ckpt_dir Llama-2-7b --tokenizer_path Llama-2-7b/tokenizer.model --temperature 0.1 --top_p 0.95\r\n```\r\nwill produce:\r\n```txt\r\n> I hope you are doing well. I am doing well. I am happy to be back here. I have been away for a while. I have been busy with my studies. I have been busy with my work. I have been busy with my life. I have been busy with my family. I have been busy with my friends. I have been busy with my hobbies. I have been busy with my interests. I have been busy with my passions. I have been busy with my dreams. I have been busy with my goals. I have been busy with my ambitions. I have been busy with my aspirations. I have been busy with my plans. I have been busy with my projects. I have been busy with my ideas. I have been busy with my thoughts. I have been busy with my feelings. I have been busy with my emotions. I have been busy with my mind. I have been busy with my heart. I have been busy with my soul. I have been busy with my spirit. I have been busy with my body. I have been busy with my mind. I have been busy with my soul. I have been busy with my spirit. I have been\r\n```\r\n\r\nNote that the setting you are providing (`temperature = 0.1` etc) can influence on the generation.", "So what are the best practices currently known to reduce this random ramble?", "You should probably ask this question on [the forum](https://discuss.huggingface.co/), using the default parameters should already help (`temperature=0.9`), you can try to use `LogitsProcessor` for length penalty to reduce potential hallucination. You can also change the sampling stategies, use `top_k`, `contrastive search` etc etc. @gante will have better solution!", "@AnishAlapattu-GEP @ArthurZucker \r\n\r\nThis is actually a hard problem to solve, and I have yet to see a solution that generalizes well! A few things that can be tried, ranked by implementation complexity:\r\n1. In the prompt, mention that you want a short output (be specific if you can, like \"Reply in 3 sentences or less\");\r\n2. Add a custom logits processor that increases the score of the eos token according to some rule (e.g. 
scaling with the generated length) *EDIT* it is implemented in [this logits processor](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.EncoderRepetitionPenaltyLogitsProcessor)!\r\n3. Generate text in excess and have a post-processing step to crop unwanted text (e.g. based on the conditional probability of each sequence -- when the model starts rambling, I suspect there is a significant drop in the probability of the sentence given the past sentences)\r\n4. Fine-tune the model πŸ™ƒ ", "Thanks @ArthurZucker and @gante!", "The issue stems from using bare Llama-2 model, instead of `-chat` version, which is fine-tuned to follow instructions. \r\n\r\nBare llama-2 model is trained to complete text, so if you include the beginning of the conversation in the prompt, you should expect the rest of the conversation to be predicted by such model. In contrast, -chat models are trained to be more aligned or to follow the human instructions. This idea is nicely presented by openai in InstructGPT [article](https://openai.com/research/instruction-following)\r\n\r\n![2023-09-01_12-04](https://github.com/huggingface/transformers/assets/21311210/e5f0df11-2557-4f70-b3f7-d08e28401d08)\r\n\r\n", "☝️ Precisely\r\n\r\nA thing I forgot to mention in my answer above (which I've edited in case someone stumbles upon it): we have a way to add a soft cap on the generated text length. e.g. If you want your output to be about 100 tokens unless the remaining tokens are really important for the answer, you can do it through [this logits processor](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.EncoderRepetitionPenaltyLogitsProcessor). " ]
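An illustration of suggestion 2 from the list above: a custom logits processor that raises the EOS score as the generated continuation grows. This is a hand-rolled sketch, not the processor linked in the comment, and the boost value is an arbitrary assumption:

```python
import torch
from transformers import LogitsProcessor

class EosBoostLogitsProcessor(LogitsProcessor):
    def __init__(self, eos_token_id: int, prompt_length: int, boost_per_token: float = 0.05):
        self.eos_token_id = eos_token_id
        self.prompt_length = prompt_length
        self.boost_per_token = boost_per_token

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        generated = input_ids.shape[-1] - self.prompt_length
        # Add a bonus to the EOS score that grows linearly with generated length.
        scores[:, self.eos_token_id] += self.boost_per_token * max(generated, 0)
        return scores
```

It would be passed to `model.generate(...)` wrapped in a `LogitsProcessorList`, e.g. `logits_processor=LogitsProcessorList([EosBoostLogitsProcessor(tokenizer.eos_token_id, input_ids.shape[-1])])`.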
1,689
1,693
1,691
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-1035-azure-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import torch model_id = "/home/modelweights/llama2-hf-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map='auto', load_in_8bit=True, ) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_new_tokens=256, temperature=0.1, ) sequences = pipe( 'Hello there! How are you doing?', ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ### Expected behavior Hi, My Llama 2 model is not generating the stopping tokens. For example the reply of the question Hello there! How are you doing? is: Result: Hello there! How are you doing? I hope you are doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing w The reply only stops when the max token criteria is met What am I doing wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24994/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24993
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24993/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24993/comments
https://api.github.com/repos/huggingface/transformers/issues/24993/events
https://github.com/huggingface/transformers/pull/24993
1,815,873,752
PR_kwDOCUB6oc5WGhpW
24,993
Use main_input_name for include_inputs_for_metrics
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Instead of hard-coding `"input_ids"`, we should use the model main input name when getting the inputs for the metrics. Fixes #24933
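A small sketch of the change described above, assuming `model` and a dict of `inputs` as they appear inside the Trainer loop:

```python
# Look up the model's declared main input name instead of hard-coding
# "input_ids" when collecting inputs for metrics.
main_input_name = getattr(model, "main_input_name", "input_ids")
inputs_for_metrics = inputs[main_input_name]  # e.g. "pixel_values" for vision models
```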
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24993/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24993", "html_url": "https://github.com/huggingface/transformers/pull/24993", "diff_url": "https://github.com/huggingface/transformers/pull/24993.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24993.patch", "merged_at": 1689949817000 }
https://api.github.com/repos/huggingface/transformers/issues/24992
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24992/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24992/comments
https://api.github.com/repos/huggingface/transformers/issues/24992/events
https://github.com/huggingface/transformers/pull/24992
1,815,829,901
PR_kwDOCUB6oc5WGYL2
24,992
Add `OwlViTForObjectDetection` to `MODEL_FOR_OBJECT_DETECTION`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "well", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24992). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,693
1,689
COLLABORATOR
null
# What does this PR do? ~~This seems like a miss, but need @NielsRogge to make sure.~~ well, he told me > it's not a bug, it's intended. OWL-ViT can be loaded using the AutoModelForZeroShotObjectDetection class (phew, that's a long name)
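A minimal example of the intended loading path mentioned above, using the public `google/owlvit-base-patch32` checkpoint:

```python
# Load OWL-ViT through the zero-shot object-detection Auto classes.
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlvit-base-patch32")
```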
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24992/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24992", "html_url": "https://github.com/huggingface/transformers/pull/24992", "diff_url": "https://github.com/huggingface/transformers/pull/24992.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24992.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24991/comments
https://api.github.com/repos/huggingface/transformers/issues/24991/events
https://github.com/huggingface/transformers/issues/24991
1,815,828,071
I_kwDOCUB6oc5sO1Zn
24,991
MarianMTModel model.generate function issue after v4.30.2
{ "login": "bariscankurtkaya", "id": 33360380, "node_id": "MDQ6VXNlcjMzMzYwMzgw", "avatar_url": "https://avatars.githubusercontent.com/u/33360380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bariscankurtkaya", "html_url": "https://github.com/bariscankurtkaya", "followers_url": "https://api.github.com/users/bariscankurtkaya/followers", "following_url": "https://api.github.com/users/bariscankurtkaya/following{/other_user}", "gists_url": "https://api.github.com/users/bariscankurtkaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bariscankurtkaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bariscankurtkaya/subscriptions", "organizations_url": "https://api.github.com/users/bariscankurtkaya/orgs", "repos_url": "https://api.github.com/users/bariscankurtkaya/repos", "events_url": "https://api.github.com/users/bariscankurtkaya/events{/privacy}", "received_events_url": "https://api.github.com/users/bariscankurtkaya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @bariscankurtkaya There is a problem in the weights of that model, I made a PR [here](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-tr/discussions/3) to fix them but it hasn't been merged. You can load the model weights of this PR by adding `revision=\"pr/3\"` in your `from_pretrained` call.", "> Hi @bariscankurtkaya There is a problem in the weights of that model, I made a PR [here](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-tr/discussions/3) to fix them but it hasn't been merged. You can load the model weights of this PR by adding `revision=\"pr/3\"` in your `from_pretrained` call.\r\n\r\nThanks you are the best. πŸš€ πŸ€— " ]
1,689
1,689
1,689
NONE
null
Hello there, I have encountered a problem with the model.generate function after version v4.30.2 on several different systems. Below, you can find the problem description and the corresponding code block: ``` from transformers import AutoTokenizer, MarianMTModel model_name = f"Helsinki-NLP/opus-tatoeba-en-tr" model = MarianMTModel.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) text = 'Once upon a time I met a boy named Hugo Cabret he lived in a train station' # Just random sample text print('Translating text ... ') batch = tokenizer(text, return_tensors="pt", padding=True, truncation=True) print('batch:', batch) generated_ids = model.generate(**batch, max_length=512) print('generated_ids:', generated_ids) translated = tokenizer.batch_decode( generated_ids, skip_special_tokens=True) print('Translated text:',:', translated) ``` Output when I downgrade the transformers library to v4.30.2, which is correct. ``` Translating text ... batch: {'input_ids': tensor([[ 895, 1100, 13, 144, 7, 1227, 13, 660, 3018, 15658, 15952, 32, 15, 53, 2896, 21, 13, 2825, 2865, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} generated_ids: tensor([[59993, 95, 2491, 15658, 15952, 3433, 5240, 14, 12768, 45057, 2, 8530, 25435, 13855, 2, 0]]) Translated text: ['Bir zamanlar Hugo Cabret adında bir çocukla tanışmıştım. Tren istasyonunda yaşıyordu.'] ``` But unfortunately, when I upgraded the transformers library to the newest version v4.31.0, the model.generate function's return becomes corrupted. ``` Translating text ... batch: {'input_ids': tensor([[ 895, 1100, 13, 144, 7, 1227, 13, 660, 3018, 15658, 15952, 32, 15, 53, 2896, 21, 13, 2825, 2865, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} generated_ids: tensor([[59993, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0]]) Translated text: 
.............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. ``` As can be seen, the tokenizer works correctly, but unfortunately, the model.generate function returns corrupted data. By the way, I also tested the 'Helsinki-NLP/opus-mt-tc-big-en-tr' model, and it also generates corrupted data. Despite that, the 'Helsinki-NLP/opus-mt-en-es' model generates correct results.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24991/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24990/comments
https://api.github.com/repos/huggingface/transformers/issues/24990/events
https://github.com/huggingface/transformers/pull/24990
1,815,780,901
PR_kwDOCUB6oc5WGOBi
24,990
Fix `llama` tokenization doctest
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? See comment in the change.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24990/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24990", "html_url": "https://github.com/huggingface/transformers/pull/24990", "diff_url": "https://github.com/huggingface/transformers/pull/24990.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24990.patch", "merged_at": 1689950871000 }
https://api.github.com/repos/huggingface/transformers/issues/24989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24989/comments
https://api.github.com/repos/huggingface/transformers/issues/24989/events
https://github.com/huggingface/transformers/pull/24989
1,815,743,642
PR_kwDOCUB6oc5WGGLr
24,989
Generate: `ImageToTextPipeline` now supports generation kwargs
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ " > This PR borrows code from the other pipelines, to allow things like `pipe(input, min_new_tokens=10)` (as opposed to `pipe(input, generate_kwargs={\"min_new_tokens\":10})`). It also updates the docs, which were slightly outdated.\r\n\r\nActually, other pipelines that accept anything, are the ones that have been done so because of legacy reasons.\r\nAccepting anything causes headaches because `generate` can arbitrarily add new kwargs, some of which could clash with existing pipeline kwargs.\r\nThe first one that comes to mind is `max_length` which already clashed with tokenizer `max_length`.\r\nI really think this is a bad idea, and that whitelisting is better.", "Ah ah, glad I asked πŸ˜… ", "_The documentation is not available anymore as the PR was closed or merged._", "@Narsil @sgugger I see, arg clashing is absolutely undesirable!\r\n\r\nExplicit whitelisting (like `max_new_tokens` in `ImageToTextPipeline`) also seems excessive -- different modalities/tasks benefit from controlling specific parameters, so we would have to pick between additional maintenance burden + large docs and usage capabilities.\r\n\r\nHow about we move all generative pipelines to accept a `generation_config` (with the proper deprecation cycles and doc changes)? It would be an explicitly whitelisted argument from the pipeline perspective while enabling the vast majority of generation modes.\r\n\r\n[Regardless of the specific decision here, I believe we would benefit from making all pipelines consistent with each other.]", "That works for me!", "`generate_config` seems even better than `generate_kwargs` !", "Awesome, I'll close this PR and open a new one reflecting the decision here πŸ‘ " ]
1,689
1,690
1,690
MEMBER
null
# What does this PR do? As the title indicates -- previously the generation kwargs had to be passed as a separate dictionary, which was inconsistent with the other pipelines. This PR borrows code from the other pipelines, to allow things like `pipe(input, min_new_tokens=10)` (as opposed to `pipe(input, generate_kwargs={"min_new_tokens":10})`). It also updates the docs, which were slightly outdated. Fixes #24836
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24989/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24989", "html_url": "https://github.com/huggingface/transformers/pull/24989", "diff_url": "https://github.com/huggingface/transformers/pull/24989.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24989.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24988/comments
https://api.github.com/repos/huggingface/transformers/issues/24988/events
https://github.com/huggingface/transformers/pull/24988
1,815,721,079
PR_kwDOCUB6oc5WGBbm
24,988
Fix type annotation for deepspeed training arg
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? #24550 wanted to put a more exact type annotation for the deepspeed arg, which then makes that training arg fail in CLI commands. This PR reverts that part and adds a comment so we do not break this again. Fixes #24974
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24988/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24988", "html_url": "https://github.com/huggingface/transformers/pull/24988", "diff_url": "https://github.com/huggingface/transformers/pull/24988.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24988.patch", "merged_at": 1689946926000 }
https://api.github.com/repos/huggingface/transformers/issues/24987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24987/comments
https://api.github.com/repos/huggingface/transformers/issues/24987/events
https://github.com/huggingface/transformers/pull/24987
1,815,679,262
PR_kwDOCUB6oc5WF4Vy
24,987
[i18n-KO] Translated docs: ko: pr_checks.md to Korean
{ "login": "sronger", "id": 79131091, "node_id": "MDQ6VXNlcjc5MTMxMDkx", "avatar_url": "https://avatars.githubusercontent.com/u/79131091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sronger", "html_url": "https://github.com/sronger", "followers_url": "https://api.github.com/users/sronger/followers", "following_url": "https://api.github.com/users/sronger/following{/other_user}", "gists_url": "https://api.github.com/users/sronger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sronger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sronger/subscriptions", "organizations_url": "https://api.github.com/users/sronger/orgs", "repos_url": "https://api.github.com/users/sronger/repos", "events_url": "https://api.github.com/users/sronger/events{/privacy}", "received_events_url": "https://api.github.com/users/sronger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24987). All of your documentation changes will be reflected on that endpoint.", "Translation completed!\r\nMay you please review this PR? :)\r\n@sgugger, @ArthurZucker, @eunseojo, @stevhliu " ]
1,689
1,692
1,692
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `pr_checks.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24987/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24987/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24987", "html_url": "https://github.com/huggingface/transformers/pull/24987", "diff_url": "https://github.com/huggingface/transformers/pull/24987.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24987.patch", "merged_at": 1692252197000 }
https://api.github.com/repos/huggingface/transformers/issues/24986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24986/comments
https://api.github.com/repos/huggingface/transformers/issues/24986/events
https://github.com/huggingface/transformers/issues/24986
1,815,669,526
I_kwDOCUB6oc5sOOsW
24,986
All `meta-llama/Llama-2-*-hf` models have incorrect `max_position_embeddings`
{ "login": "zijian-hu", "id": 16883354, "node_id": "MDQ6VXNlcjE2ODgzMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/16883354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijian-hu", "html_url": "https://github.com/zijian-hu", "followers_url": "https://api.github.com/users/zijian-hu/followers", "following_url": "https://api.github.com/users/zijian-hu/following{/other_user}", "gists_url": "https://api.github.com/users/zijian-hu/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijian-hu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijian-hu/subscriptions", "organizations_url": "https://api.github.com/users/zijian-hu/orgs", "repos_url": "https://api.github.com/users/zijian-hu/repos", "events_url": "https://api.github.com/users/zijian-hu/events{/privacy}", "received_events_url": "https://api.github.com/users/zijian-hu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! πŸ‘‹πŸ» \r\nIndeed the paper does mention that the default `max_position_embeddings` is `4096`, and no, the models ~don't have~ should not have to be reconverted: \r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", max_position_embeddings=4096)\r\n```\r\nworked out of the box for me. The config should probably be updated, the previous choice is explained by the fact that in all the demonstrations [`example_chat_completion`](https://github.com/facebookresearch/llama/blob/main/example_chat_completion.py) and [`example_text_completion`](https://github.com/facebookresearch/llama/blob/main/example_text_completion.py) the `max_position_embeddings` was lowered (on purpose it seems?). \r\n\r\nI also noticed [this](https://huggingface.co/daryl149/llama-2-7b-chat-hf/commit/d8654a4f69178a0c9260cf730241ebac2e72b923) which is why I am investigating! Github issue on the original repo: [#359 ](https://github.com/facebookresearch/llama/issues/359#issuecomment-1640876808)\r\n\r\nThe `max_position_embeddings` only affects the ROPE embedding layer, which is computed on the fly. ", "Okay, so currently we have[ a safeguard](https://github.com/ArthurZucker/transformers/blob/050c4a48f77e42b9d5cd87fccaac955950799acc/src/transformers/models/llama/modeling_llama.py#L118-L120): \r\n```python\r\n if seq_len > self.max_seq_len_cached:\r\n self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)\r\n```\r\nwhich is not the best in terms of compute, but produces the correct values for now πŸ˜‰ \r\nThis also explains why changing for the actual value is wrong: the cos sin will not be re-computed! \r\n\r\nI'll open fixes in transformers (make inv_freq non persistent) and will push updated checkpoints", "Thank you so much for the update! I just took a look at the code; this safeguard is already part of the `transformers v4.31.0` release.\r\n\r\nIt would be great if you could let me know the correct way to use Llama 2 if we want to maintain the advertised `4096` context length without degrading the performance. Should we just pass `max_position_embeddings=4096` as mentioned earlier? Or should we use `max_position_embeddings=2048` until the problem is fully resolved?\r\n```python\r\nmodel = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", max_position_embeddings=4096)\r\n```", "I updated the configuration file of all the models to make sure you don't need to do anything! \r\nThe performance degradation was not reported again, so If it comes up, I will make sure to adresse it! ", "Can confirm that just updating the config file works fine.", "Updated all configs online, closing as it is fixed! πŸ€— \r\n" ]
1,689
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.34 - Python version: 3.8.13 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The hugging face version of Llama 2 have `max_position_embeddings` set to `2048` instead of `4096` in the config file. I am unsure if it's just an incorrect setting or if the models need to be converted again. See the below links for detail: - [`meta-llama/Llama-2-7b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/config.json#L13) - [`meta-llama/Llama-2-7b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/config.json#L13) - [`meta-llama/Llama-2-13b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-13b-hf/blob/main/config.json#L13) - [`meta-llama/Llama-2-13b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/config.json#L12) - [`meta-llama/Llama-2-70b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-70b-hf/blob/main/config.json#L13) - [`meta-llama/Llama-2-70b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf/blob/main/config.json#L12) If helpful, someone posted an issue in [Meta's official repo](https://github.com/facebookresearch/llama/issues/359). ### Expected behavior According to Meta, Llama 2 has a context length `4096`. This should be the `max_position_embeddings` value in all the `config.json` files. If the model checkpoints on hugging face are not correctly converted, they should be converted again using the correct configuration.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24986/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24985/comments
https://api.github.com/repos/huggingface/transformers/issues/24985/events
https://github.com/huggingface/transformers/pull/24985
1,815,585,409
PR_kwDOCUB6oc5WFjt-
24,985
[i18n-KO] Translated `big_models.md` to Korean
{ "login": "bolizabeth", "id": 68984363, "node_id": "MDQ6VXNlcjY4OTg0MzYz", "avatar_url": "https://avatars.githubusercontent.com/u/68984363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bolizabeth", "html_url": "https://github.com/bolizabeth", "followers_url": "https://api.github.com/users/bolizabeth/followers", "following_url": "https://api.github.com/users/bolizabeth/following{/other_user}", "gists_url": "https://api.github.com/users/bolizabeth/gists{/gist_id}", "starred_url": "https://api.github.com/users/bolizabeth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bolizabeth/subscriptions", "organizations_url": "https://api.github.com/users/bolizabeth/orgs", "repos_url": "https://api.github.com/users/bolizabeth/repos", "events_url": "https://api.github.com/users/bolizabeth/events{/privacy}", "received_events_url": "https://api.github.com/users/bolizabeth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24985). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `big_models.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> May you please review this PR? @hyunhp @nuatmochoi @heuristicwave @mjk0618 @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24985/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24985", "html_url": "https://github.com/huggingface/transformers/pull/24985", "diff_url": "https://github.com/huggingface/transformers/pull/24985.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24985.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24984
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24984/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24984/comments
https://api.github.com/repos/huggingface/transformers/issues/24984/events
https://github.com/huggingface/transformers/pull/24984
1,815,561,682
PR_kwDOCUB6oc5WFen0
24,984
🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean
{ "login": "keonju2", "id": 54880474, "node_id": "MDQ6VXNlcjU0ODgwNDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54880474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keonju2", "html_url": "https://github.com/keonju2", "followers_url": "https://api.github.com/users/keonju2/followers", "following_url": "https://api.github.com/users/keonju2/following{/other_user}", "gists_url": "https://api.github.com/users/keonju2/gists{/gist_id}", "starred_url": "https://api.github.com/users/keonju2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keonju2/subscriptions", "organizations_url": "https://api.github.com/users/keonju2/orgs", "repos_url": "https://api.github.com/users/keonju2/repos", "events_url": "https://api.github.com/users/keonju2/events{/privacy}", "received_events_url": "https://api.github.com/users/keonju2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `your_file.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> <!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24984/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24984", "html_url": "https://github.com/huggingface/transformers/pull/24984", "diff_url": "https://github.com/huggingface/transformers/pull/24984.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24984.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24983
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24983/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24983/comments
https://api.github.com/repos/huggingface/transformers/issues/24983/events
https://github.com/huggingface/transformers/pull/24983
1,815,525,614
PR_kwDOCUB6oc5WFWpX
24,983
🌐 [i18n-KO] Translated perf_train_gpu_many.md to Korean
{ "login": "hyunhp", "id": 105839613, "node_id": "U_kgDOBk77_Q", "avatar_url": "https://avatars.githubusercontent.com/u/105839613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyunhp", "html_url": "https://github.com/hyunhp", "followers_url": "https://api.github.com/users/hyunhp/followers", "following_url": "https://api.github.com/users/hyunhp/following{/other_user}", "gists_url": "https://api.github.com/users/hyunhp/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyunhp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyunhp/subscriptions", "organizations_url": "https://api.github.com/users/hyunhp/orgs", "repos_url": "https://api.github.com/users/hyunhp/repos", "events_url": "https://api.github.com/users/hyunhp/events{/privacy}", "received_events_url": "https://api.github.com/users/hyunhp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please do not open and close several PRs for the same file. You can update a PR by pushing more commits to it.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24983). All of your documentation changes will be reflected on that endpoint.", "> Please do not open and close several PRs for the same file. You can update a PR by pushing more commits to it.\r\n\r\nDuly noted", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Translated the `perf_train_gpu_many.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) May you please review this PR? @nuatmochoi, @bolizabeth, @heuristicwave, @mjk0618, @jungnerd, @hyunhp ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24983/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24983", "html_url": "https://github.com/huggingface/transformers/pull/24983", "diff_url": "https://github.com/huggingface/transformers/pull/24983.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24983.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24982
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24982/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24982/comments
https://api.github.com/repos/huggingface/transformers/issues/24982/events
https://github.com/huggingface/transformers/pull/24982
1,815,488,533
PR_kwDOCUB6oc5WFOjH
24,982
🌐 [i18n-KO] Translated `<perf_train_gpu_many>.md` to Korean
{ "login": "hyunhp", "id": 105839613, "node_id": "U_kgDOBk77_Q", "avatar_url": "https://avatars.githubusercontent.com/u/105839613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyunhp", "html_url": "https://github.com/hyunhp", "followers_url": "https://api.github.com/users/hyunhp/followers", "following_url": "https://api.github.com/users/hyunhp/following{/other_user}", "gists_url": "https://api.github.com/users/hyunhp/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyunhp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyunhp/subscriptions", "organizations_url": "https://api.github.com/users/hyunhp/orgs", "repos_url": "https://api.github.com/users/hyunhp/repos", "events_url": "https://api.github.com/users/hyunhp/events{/privacy}", "received_events_url": "https://api.github.com/users/hyunhp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "미리보기 λ¬Έμ„œ μ „ ready for request μ§„ν–‰ν•˜μ—¬ μ·¨μ†Œ ν›„, reopen" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Translated the `<your_file>.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) May you please review this PR? @nuatmochoi, @bolizabeth, @heuristicwave, @mjk0618, @jungnerd, @hyunhp ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24982", "html_url": "https://github.com/huggingface/transformers/pull/24982", "diff_url": "https://github.com/huggingface/transformers/pull/24982.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24982.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24981
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24981/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24981/comments
https://api.github.com/repos/huggingface/transformers/issues/24981/events
https://github.com/huggingface/transformers/issues/24981
1,815,457,417
I_kwDOCUB6oc5sNa6J
24,981
trainer throw "Torch not compiled with CUDA enabled"
{ "login": "nemesis00sam", "id": 112406441, "node_id": "U_kgDOBrMvqQ", "avatar_url": "https://avatars.githubusercontent.com/u/112406441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nemesis00sam", "html_url": "https://github.com/nemesis00sam", "followers_url": "https://api.github.com/users/nemesis00sam/followers", "following_url": "https://api.github.com/users/nemesis00sam/following{/other_user}", "gists_url": "https://api.github.com/users/nemesis00sam/gists{/gist_id}", "starred_url": "https://api.github.com/users/nemesis00sam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nemesis00sam/subscriptions", "organizations_url": "https://api.github.com/users/nemesis00sam/orgs", "repos_url": "https://api.github.com/users/nemesis00sam/repos", "events_url": "https://api.github.com/users/nemesis00sam/events{/privacy}", "received_events_url": "https://api.github.com/users/nemesis00sam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nemesis00sam \r\nThanks for the issue, there are two issues with your training setup\r\n1- you are trying to load your model in 8bit, which is only supported on GPU devices (not M1 chips)\r\n2- It seems you are trying to do pure 8bit training, despite I see lora-specific arguments, i don't see where they are used. If you want to train 8bit models, consider converting your model into a PeftModel and train adapters on it for example: https://github.com/huggingface/peft/blob/main/examples/int8_training/Finetune_opt_bnb_peft.ipynb", "Thanks @younesbelkada. I'm a little bit confused. I need more explanation. Sorry for taking your time. load_in_8bit=True will not work for m1? Should I convert it to 8bit with different package?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info transformers version: 4.31.0.dev0 Platform: macOS-13.2.1-arm64-arm-64bit Python version: 3.10.10 Huggingface_hub version: 0.16.4 Safetensors version: 0.3.1 Accelerate version: 0.22.0.dev0 Accelerate config: not found PyTorch version (GPU?): 2.0.0 (False) Tensorflow version (GPU?): 2.10.0 (True) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: Accelerate version: 0.22.0.dev0 ### Who can help? @younesbelkada, @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm running following code in Macbook M2 Pro. I can run this code with T4 GPU but when I try to run it on my Apple Macbook, it is throwing error, "Torch not compiled with CUDA enabled". I can run other models with my MAC. GPU utilization is good. I suspect that I forgot to change some parameters related to MPS. Thanks in advance. ``` import transformers from transformers import LlamaTokenizer, LlamaForCausalLM from typing import List import polars as pl import html import datasets import torch from datasets import load_dataset from datasets import Dataset import pandas as pd DEVICE = "mps" CUTOFF_LEN = 256 safety_param = 10 import pandas as pd df = pd.read_parquet("zkp_training_data.parquet") data_filtered = ( pl.from_pandas(df).with_columns(pl.when(pl.col("description").is_null()).then("").otherwise(pl.col("description") + ". ").alias("description")) .with_columns( (pl.col("description") + pl.col("readme")).alias("final_text") ) .select(["repo_id",pl.col("final_text"), pl.col("label")]) ).to_pandas() data_filtered.loc[data_filtered.label == "other",["label"]] = "not zero-knowledge-proof" dataset = datasets.Dataset.from_pandas(data_filtered) dataset_rev = dataset.shuffle(seed=42) dataset_final = dataset_rev.class_encode_column("label").train_test_split(stratify_by_column="label",train_size=0.8) def map_func(example): mapping=dataset_final["train"].features["label"] val_set = mapping.int2str(example["label"]) del example["label"] example["labels"] = val_set return example dataset_final_v2 = dataset_final.map(map_func,batched=True) BASE_MODEL = "decapoda-research/llama-7b-hf" model = LlamaForCausalLM.from_pretrained( BASE_MODEL, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto", ) tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") tokenizer.pad_token_id = ( 0 # unk. we want this to be different from the eos token ) tokenizer.padding_side = "left" def generate_dummy_prompt_v2(input,output): partial_string = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501 ### Instruction: Does this project include zero-knowledge-proof implementation? 
### Response: {output}""" full_string = html.unescape(partial_string) result = tokenizer( full_string, padding=False) constant_part_token_len = len(result["input_ids"]) input_html = html.unescape(input) result_input = tokenizer( input, padding=False) input_token_len = len(result_input["input_ids"]) allowed_len = (CUTOFF_LEN - constant_part_token_len) - safety_param input = tokenizer.decode(result_input["input_ids"][:allowed_len],skip_special_tokens = True) final_prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501 ### Instruction: Does this project include zero-knowledge-proof implementation? ### Input: {input} ### Response: {output} """ final_prompt = html.unescape(final_prompt) if len(final_prompt.split(" "))<3: print("aaa") return final_prompt dataset_final_v3 = dataset_final_v2.map( lambda x: {"final_prompt": [generate_dummy_prompt_v2(a,b) for a,b in zip(x["final_text"],x["labels"]) ]}, batched=True ) def tokenize(prompt, add_eos_token=True): result = tokenizer( prompt["final_prompt"], truncation=True, max_length=CUTOFF_LEN, padding=False, return_tensors=None, ) if ( result["input_ids"][-1] != tokenizer.eos_token_id and len(result["input_ids"]) < CUTOFF_LEN and add_eos_token ): result["input_ids"].append(tokenizer.eos_token_id) result["attention_mask"].append(1) result["labels"] = result["input_ids"].copy() return result dataset_final_v4 = dataset_final_v3.map(tokenize,batched=False,remove_columns=["repo_id","final_text","final_prompt"]) dataset_final_v4.save_to_disk("dataset_final_v4") train_data = dataset_final_v4["train"] val_data = dataset_final_v4["test"] LORA_R = 8 LORA_ALPHA = 16 LORA_DROPOUT= 0.05 LORA_TARGET_MODULES = [ "q_proj", "v_proj", ] BATCH_SIZE = 128 MICRO_BATCH_SIZE = 4 GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE LEARNING_RATE = 3e-4 TRAIN_STEPS = 300 OUTPUT_DIR = "experiments_rev" training_arguments = transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, warmup_steps=100, max_steps=TRAIN_STEPS, learning_rate=LEARNING_RATE, fp16=False, logging_steps=2, optim="adamw_torch", evaluation_strategy="steps", save_strategy="steps", eval_steps=4, save_steps=24, output_dir=OUTPUT_DIR, use_mps_device = True, save_total_limit=3, overwrite_output_dir=True, report_to="tensorboard" ) data_collator = transformers.DataCollatorForSeq2Seq( tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True ) trainer = transformers.Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=training_arguments, data_collator=data_collator ) model.config.use_cache = False model = torch.compile(model) trainer.train(resume_from_checkpoint = False) model.save_pretrained(OUTPUT_DIR) ``` ### Expected behavior it should start to train data but it throws error when trainer.train is executed. I suspect mps is not supporting some features of model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24981/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24980
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24980/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24980/comments
https://api.github.com/repos/huggingface/transformers/issues/24980/events
https://github.com/huggingface/transformers/pull/24980
1,815,416,722
PR_kwDOCUB6oc5WE-7U
24,980
fsdp fixes and enhancements
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? 1. Fixes #24724. Should be merged after https://github.com/huggingface/accelerate/pull/1753 2. Fixes #24568. Should be merged after https://github.com/huggingface/accelerate/pull/1753
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24980/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24980", "html_url": "https://github.com/huggingface/transformers/pull/24980", "diff_url": "https://github.com/huggingface/transformers/pull/24980.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24980.patch", "merged_at": 1689942169000 }
https://api.github.com/repos/huggingface/transformers/issues/24979
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24979/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24979/comments
https://api.github.com/repos/huggingface/transformers/issues/24979/events
https://github.com/huggingface/transformers/pull/24979
1,815,384,291
PR_kwDOCUB6oc5WE3-p
24,979
[ `ForSequenceClassification`] Support `left` padding
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24979). All of your documentation changes will be reflected on that endpoint.", "Test seems flaky:\r\n```python \r\nFAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_pt_tf_model_equivalence - AssertionError: 0.14537059 not less than or equal to 1e-05 : outputs.logits: Difference between PyTorch and TF is 0.14537058770656586 (>= 1e-05).\r\nFAILED tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_pt_tf_model_equivalence - AssertionError: 0.21959408 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.2195940762758255 (>= 1e-05).\r\n```" ]
1,689
1,690
1,690
COLLABORATOR
null
# What does this PR do? Update the computation of `sequence_lengths` that determines the index of the outputs to be pooled on models that use pooled inputs in such a way. Addresses #24265 - [ ] add a common test to make sure whether you pad left or right, sequence outputs are the same
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24979/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24979", "html_url": "https://github.com/huggingface/transformers/pull/24979", "diff_url": "https://github.com/huggingface/transformers/pull/24979.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24979.patch", "merged_at": 1690294783000 }
https://api.github.com/repos/huggingface/transformers/issues/24978
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24978/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24978/comments
https://api.github.com/repos/huggingface/transformers/issues/24978/events
https://github.com/huggingface/transformers/pull/24978
1,815,354,186
PR_kwDOCUB6oc5WExqZ
24,978
🌐 [i18n-KO] Translated `perf_infer_gpu_one.md` to Korean
{ "login": "eenzeenee", "id": 71638597, "node_id": "MDQ6VXNlcjcxNjM4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eenzeenee", "html_url": "https://github.com/eenzeenee", "followers_url": "https://api.github.com/users/eenzeenee/followers", "following_url": "https://api.github.com/users/eenzeenee/following{/other_user}", "gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}", "starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions", "organizations_url": "https://api.github.com/users/eenzeenee/orgs", "repos_url": "https://api.github.com/users/eenzeenee/repos", "events_url": "https://api.github.com/users/eenzeenee/events{/privacy}", "received_events_url": "https://api.github.com/users/eenzeenee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Could you review this PR? πŸ˜ƒ\r\n@sgugger, @ArthurZucker, @eunseojo", "Thanks for fixing! πŸ”₯ " ]
1,689
1,691
1,691
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `perf_infer_gpu_one.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24978/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24978/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24978", "html_url": "https://github.com/huggingface/transformers/pull/24978", "diff_url": "https://github.com/huggingface/transformers/pull/24978.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24978.patch", "merged_at": 1691390249000 }
https://api.github.com/repos/huggingface/transformers/issues/24977
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24977/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24977/comments
https://api.github.com/repos/huggingface/transformers/issues/24977/events
https://github.com/huggingface/transformers/pull/24977
1,815,311,174
PR_kwDOCUB6oc5WEoWw
24,977
🌐 [i18n-KO] Translated `generation_strategies.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24977). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `generation_strategies.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [ ] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [ ] Grammar Check (λ§žμΆ€λ²• 검사) - [ ] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> <!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24977/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24977", "html_url": "https://github.com/huggingface/transformers/pull/24977", "diff_url": "https://github.com/huggingface/transformers/pull/24977.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24977.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24976
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24976/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24976/comments
https://api.github.com/repos/huggingface/transformers/issues/24976/events
https://github.com/huggingface/transformers/issues/24976
1,815,301,839
I_kwDOCUB6oc5sM07P
24,976
Llama weights are in `bfloat16` but loaded as `float32`
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! πŸ‘‹πŸ» \r\nWithout the path to the checkpoint that you are using, it's gonna be a bit hard for me to debug. The `dtype` can be affectect by the `model.config.torch_dtype` which is stored online. ", "It is expected behavior: https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained.torch_dtype. You can try to pass `torch_dtype=\"auto\"` to `from_pretrained` and if the config.json is well written, it should load in the dtype specified in the config.", "In general you should specify the dtype in which you want the model with `torch_dtype` (using auto is not recommended as configs are not always well written). In PyTorch models are always loaded in float32 by default (even if the state dict has another dtype), this is not something specific to Transformers." ]
1,689
1,689
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0.dev20230719+cpu (False) - Tensorflow version (GPU?): 2.14.0-dev20230719 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model = LlamaForCausalLM.from_pretrained(path) print(model.model.norm.weight.detach().dtype) ``` This prints `torch.float32`. ### Expected behavior This result is `torch.float32`, which indicates that the model is loaded in `float32` format. However, the result should be `torch.bfloat16` because it is the default for Llama models. The model weights are in `bfloat16` format.
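A minimal sketch of the `torch_dtype` workaround discussed in the comments above; the checkpoint path is a hypothetical placeholder, and the behaviour assumes its `config.json` stores `torch_dtype: bfloat16`.

```python
# Hedged sketch, not the reporter's exact setup: request the checkpoint dtype explicitly
# instead of relying on the PyTorch default of float32.
import torch
from transformers import LlamaForCausalLM

path = "path/to/local/llama-checkpoint"  # hypothetical local checkpoint

model = LlamaForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)
# Alternatively, defer to the value written in config.json (if it is well written):
# model = LlamaForCausalLM.from_pretrained(path, torch_dtype="auto")

print(model.model.norm.weight.dtype)  # expected: torch.bfloat16
```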
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24976/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/24975
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24975/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24975/comments
https://api.github.com/repos/huggingface/transformers/issues/24975/events
https://github.com/huggingface/transformers/pull/24975
1,815,296,306
PR_kwDOCUB6oc5WElEG
24,975
🌐 [i18n-KO] Translated `text-to-speech.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24975). All of your documentation changes will be reflected on that endpoint.", "Hello, I'm curious about the reason for closing this PR. I'd like to inquire whether I can continue the translation work of this document.\r\nμ•ˆλ…•ν•˜μ„Έμš”,이 PR을 λ‹«μœΌμ‹  μ΄μœ κ°€ 무엇인지 κΆκΈˆν•©λ‹ˆλ‹€. μ œκ°€ 이 λ¬Έμ„œμ˜ λ²ˆμ—­ μž‘μ—…μ„ μ΄μ–΄λ‚˜κ°ˆ 수 μžˆλŠ”μ§€ 여쭀보고 μ‹ΆμŠ΅λ‹ˆλ‹€. " ]
1,689
1,701
1,689
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `text-to-speech.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [ ] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [ ] Grammar Check (λ§žμΆ€λ²• 검사) - [ ] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ OSSCA νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team OSSCA, may you please review this PR? --> <!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24975/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24975", "html_url": "https://github.com/huggingface/transformers/pull/24975", "diff_url": "https://github.com/huggingface/transformers/pull/24975.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24975.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24974
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24974/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24974/comments
https://api.github.com/repos/huggingface/transformers/issues/24974/events
https://github.com/huggingface/transformers/issues/24974
1,815,212,577
I_kwDOCUB6oc5sMfIh
24,974
deepspeed typing Dict error
{ "login": "cnut1648", "id": 37067883, "node_id": "MDQ6VXNlcjM3MDY3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/37067883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnut1648", "html_url": "https://github.com/cnut1648", "followers_url": "https://api.github.com/users/cnut1648/followers", "following_url": "https://api.github.com/users/cnut1648/following{/other_user}", "gists_url": "https://api.github.com/users/cnut1648/gists{/gist_id}", "starred_url": "https://api.github.com/users/cnut1648/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cnut1648/subscriptions", "organizations_url": "https://api.github.com/users/cnut1648/orgs", "repos_url": "https://api.github.com/users/cnut1648/repos", "events_url": "https://api.github.com/users/cnut1648/events{/privacy}", "received_events_url": "https://api.github.com/users/cnut1648/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, this has been broken by #24550 again..." ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.10.179-171.711.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Right now any command with `--deepspeed /path/to/json` will fail and throw the following error ``` --deepspeed: invalid Dict value ``` This is reported in https://github.com/huggingface/transformers/pull/24549#issuecomment-1613046347 but the merged fix https://github.com/huggingface/transformers/pull/24574 does not resolve this. In fact the `deepspeed: Union[str, Dict]` field in `training_args.py` still raise the error when the `deepspeed` is passed as the string. This seems to be a limitation of python dataclass. ### Expected behavior `deepspeed` flag should support string.
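A hypothetical, minimal illustration (plain `argparse`, not the `transformers` argument parser) of why a `Union[str, Dict]` field is awkward to expose on the command line: `argparse` calls `type(value)` on the raw string, and `dict("ds_config.json")` raises, which surfaces as an "invalid ... value" error. File names below are placeholders.

```python
# Hedged sketch of the underlying limitation, not the transformers implementation.
import argparse

parser = argparse.ArgumentParser()
# parser.add_argument("--deepspeed", type=dict)  # fails: dict() cannot parse a path string
parser.add_argument("--deepspeed", type=str)     # workaround: accept the config path as a string

args = parser.parse_args(["--deepspeed", "ds_config.json"])
print(args.deepspeed)
# The JSON config can then be loaded explicitly, e.g. json.load(open(args.deepspeed)).
```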
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24974/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24973
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24973/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24973/comments
https://api.github.com/repos/huggingface/transformers/issues/24973/events
https://github.com/huggingface/transformers/pull/24973
1,815,210,720
PR_kwDOCUB6oc5WESix
24,973
🌐 [i18n-KO] Translated perf_infer_gpu_one.md to Korean
{ "login": "eenzeenee", "id": 71638597, "node_id": "MDQ6VXNlcjcxNjM4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eenzeenee", "html_url": "https://github.com/eenzeenee", "followers_url": "https://api.github.com/users/eenzeenee/followers", "following_url": "https://api.github.com/users/eenzeenee/following{/other_user}", "gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}", "starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions", "organizations_url": "https://api.github.com/users/eenzeenee/orgs", "repos_url": "https://api.github.com/users/eenzeenee/repos", "events_url": "https://api.github.com/users/eenzeenee/events{/privacy}", "received_events_url": "https://api.github.com/users/eenzeenee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `perf_infer_gpu_one.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24973/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24973", "html_url": "https://github.com/huggingface/transformers/pull/24973", "diff_url": "https://github.com/huggingface/transformers/pull/24973.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24973.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24972
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24972/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24972/comments
https://api.github.com/repos/huggingface/transformers/issues/24972/events
https://github.com/huggingface/transformers/pull/24972
1,815,203,305
PR_kwDOCUB6oc5WEQ7f
24,972
🌐 [i18n-KO] Translated perf_infer_gpu_one.md to Korean
{ "login": "eenzeenee", "id": 71638597, "node_id": "MDQ6VXNlcjcxNjM4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eenzeenee", "html_url": "https://github.com/eenzeenee", "followers_url": "https://api.github.com/users/eenzeenee/followers", "following_url": "https://api.github.com/users/eenzeenee/following{/other_user}", "gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}", "starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions", "organizations_url": "https://api.github.com/users/eenzeenee/orgs", "repos_url": "https://api.github.com/users/eenzeenee/repos", "events_url": "https://api.github.com/users/eenzeenee/events{/privacy}", "received_events_url": "https://api.github.com/users/eenzeenee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `perf_infer_gpu_one.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24972/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24972", "html_url": "https://github.com/huggingface/transformers/pull/24972", "diff_url": "https://github.com/huggingface/transformers/pull/24972.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24972.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24971/comments
https://api.github.com/repos/huggingface/transformers/issues/24971/events
https://github.com/huggingface/transformers/issues/24971
1,815,178,950
I_kwDOCUB6oc5sMW7G
24,971
New mode in `model.generate`
{ "login": "huangjy-pku", "id": 68498923, "node_id": "MDQ6VXNlcjY4NDk4OTIz", "avatar_url": "https://avatars.githubusercontent.com/u/68498923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huangjy-pku", "html_url": "https://github.com/huangjy-pku", "followers_url": "https://api.github.com/users/huangjy-pku/followers", "following_url": "https://api.github.com/users/huangjy-pku/following{/other_user}", "gists_url": "https://api.github.com/users/huangjy-pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/huangjy-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huangjy-pku/subscriptions", "organizations_url": "https://api.github.com/users/huangjy-pku/orgs", "repos_url": "https://api.github.com/users/huangjy-pku/repos", "events_url": "https://api.github.com/users/huangjy-pku/events{/privacy}", "received_events_url": "https://api.github.com/users/huangjy-pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hey @huangjy-pku I believe the authors intended its use to be as an alternative to beam search, but I agree the two could be used together (beam search explores beam diversity, and contrastive search explores repetition avoidance).\r\n\r\nBuilding it takes a significant time, and I haven't seen demand for it outside this issue -- we won't devote resources to it for now, as our bandwidth is limited. I would also oppose a PR, as it would add up to our long-term maintenance budget. If demand increases, I will revisit this decision :)\r\n\r\nHowever, I have a short-term suggestion: have you tried the `repetition_penalty` argument in `generate`? ([docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.repetition_penalty))", "Thanks. By the way, I tried increasing `repetition_penalty` in `generate` and it did help :)" ]
1,689
1,690
1,690
NONE
null
Hello, is there any support for combining `contrastive_search` and `beam_search` in the `model.generate` method? My motivation is that when using `beam_search` I observe that the outputs of the model sometimes tend to produce repetitive text. I learned that `contrastive_search` was proposed to address the repetition issue. But when changing to `contrastive_mode`, the performance drops a lot, probably due to the lack of beams. However, I checked the source code and found that `contrastive_mode` triggers only when `num_beams == 1`. Therefore, I wonder if I can combine the advantages of the two generation modes for a better result?
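A minimal sketch of the `repetition_penalty` workaround mentioned in the comments, keeping beam search enabled; `gpt2` is used purely as a small stand-in checkpoint.

```python
# Hedged sketch: penalize already-generated tokens while still using beams.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=4,             # keep beam search
    repetition_penalty=1.2,  # discourage repeated tokens
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```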
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24971/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24970/comments
https://api.github.com/repos/huggingface/transformers/issues/24970/events
https://github.com/huggingface/transformers/issues/24970
1,815,165,782
I_kwDOCUB6oc5sMTtW
24,970
Parameter ... has been marked as ready twice.
{ "login": "levuloihust99", "id": 49064246, "node_id": "MDQ6VXNlcjQ5MDY0MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/49064246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levuloihust99", "html_url": "https://github.com/levuloihust99", "followers_url": "https://api.github.com/users/levuloihust99/followers", "following_url": "https://api.github.com/users/levuloihust99/following{/other_user}", "gists_url": "https://api.github.com/users/levuloihust99/gists{/gist_id}", "starred_url": "https://api.github.com/users/levuloihust99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/levuloihust99/subscriptions", "organizations_url": "https://api.github.com/users/levuloihust99/orgs", "repos_url": "https://api.github.com/users/levuloihust99/repos", "events_url": "https://api.github.com/users/levuloihust99/events{/privacy}", "received_events_url": "https://api.github.com/users/levuloihust99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
NONE
null
### System Info * Ubuntu 20.04 Desktop * Architecture x86_64 * Python 3.8.10 * torch 1.12.1+cu116 * GPU P100 x 3 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I have the following code in file named `debug.py` ```python import os import torch import argparse class CustomModel(torch.nn.Module): def __init__(self): super(CustomModel, self).__init__() self.w1 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True) self.w2 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True) self.w3 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True) def forward(self, x, order): if order == 0: return self.w1 * x elif order == 1: return self.w2 * x else: return self.w3 * x def main(): parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=-1) args = parser.parse_args() local_rank = args.local_rank torch.cuda.set_device(local_rank) device = torch.device("cuda", local_rank) torch.distributed.init_process_group(backend="nccl") model = CustomModel() model.to(device) # setup distributed model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank, find_unused_parameters=True) x = torch.tensor(1.) y1 = model(x, 0) y2 = model(x, 1) y3 = model(x, 2) y1.backward() if __name__ == "__main__": main() ``` I run this code with the following command ```shell TORCH_DISTRIBUTED_DEBUG=DETAIL python -m torch.distributed.launch --nproc_per_node=3 debug.py ``` Then, I got the error <pre> Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations. Parameter at index 0 with name .w1 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. </pre> If I change the line `y1.backward()` to `y2.backward()`, the parameter that `has been marked as ready twice` change to `.w2` As the error suggested, there might have shared parameters across multiple forward-backward pass or a same set of parameters used by multiple backward passes. However, I find none of these two suggestions match the above provided code. The error disappeared if I set `find_unused_parameters=False`. However, in my actual code, this setting caused another error, which was `Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel...`. 
I am able to change my code to fix the problem. But the focus of my question is why such a simple piece of code as the one above produced the error. Why was there a parameter that `has been marked as ready twice`? ### Expected behavior The code runs without errors
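One hedged way to sidestep the double "ready" marking: with `find_unused_parameters=True`, DDP scans for unused parameters after every forward pass, so running several forwards before a single backward marks the same parameters more than once. Folding the branches into a single forward (one forward per backward) avoids this; the sketch below is illustrative, not a drop-in fix for every setup.

```python
# Hedged sketch: return every branch from a single forward and back-propagate once.
import torch


class CombinedModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w1 = torch.nn.Parameter(torch.tensor(1.5))
        self.w2 = torch.nn.Parameter(torch.tensor(1.5))
        self.w3 = torch.nn.Parameter(torch.tensor(1.5))

    def forward(self, x):
        # One forward that touches every parameter needed in this iteration.
        return self.w1 * x, self.w2 * x, self.w3 * x


# After wrapping with DistributedDataParallel exactly as in the report:
# y1, y2, y3 = model(x)
# (y1 + y2 + y3).backward()  # a single backward per forward
```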
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24970/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24969/comments
https://api.github.com/repos/huggingface/transformers/issues/24969/events
https://github.com/huggingface/transformers/pull/24969
1,815,161,340
PR_kwDOCUB6oc5WEH_n
24,969
Fix beam search when using model parallel
{ "login": "pfldy2850", "id": 9526337, "node_id": "MDQ6VXNlcjk1MjYzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9526337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfldy2850", "html_url": "https://github.com/pfldy2850", "followers_url": "https://api.github.com/users/pfldy2850/followers", "following_url": "https://api.github.com/users/pfldy2850/following{/other_user}", "gists_url": "https://api.github.com/users/pfldy2850/gists{/gist_id}", "starred_url": "https://api.github.com/users/pfldy2850/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pfldy2850/subscriptions", "organizations_url": "https://api.github.com/users/pfldy2850/orgs", "repos_url": "https://api.github.com/users/pfldy2850/repos", "events_url": "https://api.github.com/users/pfldy2850/events{/privacy}", "received_events_url": "https://api.github.com/users/pfldy2850/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24969). All of your documentation changes will be reflected on that endpoint.", "I agree with you!\r\nWhich do you think is better, fixing the rest of the model in this PR or creating a new PR fixing the rest?\r\n\r\nI am willing to do further work based on your comments.", "I think it will be easier to have everything in one PR, given how small and repetitive of a change it is! ", "@ArthurZucker \r\n\r\nI have added fixed commits for every model that required correction, and I have also made modifications to the cookiecutter template.\r\n\r\nAnd I have updated the PR title and content to align with the task.\r\n\r\nAs there were numerous models that needed correction, there is a possibility that some parts might have been overlooked. Therefore, I would appreciate it if you could review the changes again. Thank you for your attention to this matter.", "There is no `device` in tensorflow, let's limit the changes to pytorch! ", "Oh! Thank you for correcting the mistake.\r\nAs you suggested, I have dropped the modifications for the tf base model.", "@ArthurZucker \r\n\r\nIs there anything else you'd like me to fix?\r\nI want to use the merged main branch for my work.", "Not at all sorry, let me have a final look and I'll merge this! ", "Hmm, by the way.\r\nIt seems like there's already a test in the script you provided to test beam search on multi GPUs.\r\n\r\nhttps://github.com/pfldy2850/transformers/blob/e07126aac6840568b0db0b369d199f3a0cefa28f/tests/test_modeling_common.py#L2468-L2494\r\n\r\nWhy was this test not conducted for this issue beforehand?", "Your are right! Might be the `test_model_parallel`, it is set `False` by default ", "@ArthurZucker \r\n\r\nWhat do you think about setting the `test_model_parallel=True` in the existing modeling test file instead of creating a new test?\r\n", "Great idea, the problem is that this might also trigger other tests, and there might be a reason why don't test them (maybe too slow / model doesn't need these test as it is not used that much). Pinging @amyeroberts for a final answer πŸ€— ", "OK, I think this is hitting on some areas of our testing suite which are non-obvious and / or need updating. \r\n\r\nAFAICT, there are only two models which have `test_model_parallel=True`, [GPT2 and T5](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+%22test_model_parallel+%3D+%22&type=code). The tests which check this flag, both use a [deprecated method](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/src/transformers/models/t5/modeling_t5.py#L929) `model.parallelize` - [1](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/tests/test_modeling_common.py#L2413), [2](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/tests/test_modeling_common.py#L2454)- and so this flag is to control testing for backwards compatibility features. \r\n\r\nWe have another test which [tests for parallelism](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/tests/test_modeling_common.py#L2584) which doesn't check `test_model_parallel`: test_model_parallelism, which is an accelerate test and checks [if the model has `_no_split_modules` implemented](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/tests/test_modeling_common.py#L2588). 
\r\n\r\nIn addition, generate specific tests should be added to [GenerationTesterMixin](https://github.com/huggingface/transformers/blob/5744482abc472e472874b632d23c726affed8650/tests/generation/test_utils.py#L77), rather than ModelTesterMixin, as only models with `.generate` methods should be tested. \r\n\r\nWhat I would suggest is: \r\n* Moving `test_model_parallel_beam_search` to GenerationTesterMixin. \r\n* Update the modelings tests so that each of the model's updated in this PR have `all_generative_model_classes` added as attributes to their model tester.\r\n* Update `test_model_parallel_beam_search` to check if the model class has `_no_split_modules` implemented. If not, skip. If it does, then load using `device_map=\"auto\"` instead of `test_model_parallel`. \r\n* Make sure to mark that the test requires multi gpus\r\n", "@amyeroberts @ArthurZucker \r\n\r\nI'm sorry, it has been delayed due to busy work. \r\n\r\nI've done the fix as you said.\r\n\r\n1. `test_model_parallel_beam_search` function has been moved from ModelTextMixin to GenerationMixin.\r\n2. Based on whether `_no_split_modules` is implemented, skip logic has been written.\r\n3. Parallelized to multiple devices using `device_map=\"auto\"`.\r\n4. Errors caused by the new test have been fixed.", "Is there any update?\r\nI need this feature for my academic project.", "Friendly ping @ArthurZucker :) ", "@dev-cotyledon If you need this for your project, it's possible to work from this branch until it's been merged into main: \r\n\r\n* Clone the repo: `git clone [email protected]:huggingface/transformers.git`\r\n* Create a local environment e.g. `python -m venv my-env`\r\n* Install packages from source in dev mode `cd transformers && pip install -e .`\r\n* Add fork to remote `git remote add pfldy2850 https://github.com/pfldy2850/transformers.git`\r\n* Fetch this branch `git fetch pfldy2850 fix-gpt-neox-parallelize-beam`\r\n* Checkout this branch `git checkout fix-gpt-neox-parallelize-beam`\r\n\r\nYour environment will now be running the version of transformers of this branch. ", "How kind of you! Thanks a lot sir πŸ™", "I had to modify the author of the commit, so I amend and then force push.", "@ArthurZucker Are you happy for us to merge? ", "Yep! Sorry must have missed the ping 😒 ", "Thanks for the fix @pfldy2850, tests are all green so merged the PR πŸ˜‰ " ]
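A hedged, illustrative sketch of the test shape suggested in the review discussion above (names and the checkpoint are placeholders, not the code that was merged): skip unless at least two GPUs are visible and the model declares `_no_split_modules`, then shard with `device_map="auto"` and run a small beam search.

```python
# Illustrative only; assumes a multi-GPU machine and a generative checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def check_model_parallel_beam_search(checkpoint: str = "gpt2") -> None:
    if torch.cuda.device_count() < 2:
        print("skipped: needs at least two GPUs")
        return
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")
    if getattr(model, "_no_split_modules", None) is None:
        print("skipped: model does not declare _no_split_modules")
        return
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    inputs = tokenizer("Hello", return_tensors="pt").to(0)
    model.generate(**inputs, num_beams=2, max_new_tokens=5)
```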
1,689
1,694
1,694
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes a crash when running beam search on multiple GPUs. Similar issue is also observed and fixed on T5 https://github.com/huggingface/transformers/pull/11717 and LLama https://github.com/huggingface/transformers/pull/24224 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
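A hedged sketch of the kind of per-device change the referenced T5 and LLaMA fixes make (not the exact diff in this PR): when layers are sharded across GPUs, the beam indices must be moved onto each past state's device before `index_select`, otherwise beam search crashes under model parallelism.

```python
# Illustrative _reorder_cache shape; the real models implement this per architecture.
import torch


def _reorder_cache(past_key_values, beam_idx):
    reordered = ()
    for layer_past in past_key_values:
        reordered += (
            tuple(
                past_state.index_select(0, beam_idx.to(past_state.device))
                for past_state in layer_past
            ),
        )
    return reordered
```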
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24969/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24969", "html_url": "https://github.com/huggingface/transformers/pull/24969", "diff_url": "https://github.com/huggingface/transformers/pull/24969.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24969.patch", "merged_at": 1694703653000 }
https://api.github.com/repos/huggingface/transformers/issues/24968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24968/comments
https://api.github.com/repos/huggingface/transformers/issues/24968/events
https://github.com/huggingface/transformers/pull/24968
1,815,138,631
PR_kwDOCUB6oc5WEDJo
24,968
🌐 [i18n-KO] Translated `hpo_train.md` to Korean
{ "login": "harheem", "id": 49297157, "node_id": "MDQ6VXNlcjQ5Mjk3MTU3", "avatar_url": "https://avatars.githubusercontent.com/u/49297157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harheem", "html_url": "https://github.com/harheem", "followers_url": "https://api.github.com/users/harheem/followers", "following_url": "https://api.github.com/users/harheem/following{/other_user}", "gists_url": "https://api.github.com/users/harheem/gists{/gist_id}", "starred_url": "https://api.github.com/users/harheem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harheem/subscriptions", "organizations_url": "https://api.github.com/users/harheem/orgs", "repos_url": "https://api.github.com/users/harheem/repos", "events_url": "https://api.github.com/users/harheem/events{/privacy}", "received_events_url": "https://api.github.com/users/harheem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Can you just solve the conflict so we can merge this PR?" ]
1,689
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Translated the `hpo_train.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24968/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24968", "html_url": "https://github.com/huggingface/transformers/pull/24968", "diff_url": "https://github.com/huggingface/transformers/pull/24968.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24968.patch", "merged_at": 1690288101000 }
https://api.github.com/repos/huggingface/transformers/issues/24967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24967/comments
https://api.github.com/repos/huggingface/transformers/issues/24967/events
https://github.com/huggingface/transformers/issues/24967
1,815,036,946
I_kwDOCUB6oc5sL0QS
24,967
run summarization
{ "login": "tigernandita", "id": 124251696, "node_id": "U_kgDOB2fuMA", "avatar_url": "https://avatars.githubusercontent.com/u/124251696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tigernandita", "html_url": "https://github.com/tigernandita", "followers_url": "https://api.github.com/users/tigernandita/followers", "following_url": "https://api.github.com/users/tigernandita/following{/other_user}", "gists_url": "https://api.github.com/users/tigernandita/gists{/gist_id}", "starred_url": "https://api.github.com/users/tigernandita/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tigernandita/subscriptions", "organizations_url": "https://api.github.com/users/tigernandita/orgs", "repos_url": "https://api.github.com/users/tigernandita/repos", "events_url": "https://api.github.com/users/tigernandita/events{/privacy}", "received_events_url": "https://api.github.com/users/tigernandita/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@tigernandita \r\n\r\nPlease report the issue with providing your env. info.\r\n\r\nYou can run the command `transformers-cli env` and copy-paste its output.\r\n\r\nAlso, please provide the full traceback.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ydshieh I also got this problem when training LED. Here my env\r\n- `transformers` version: 4.36.0.dev0\r\n- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.5\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.23.0\r\n- Accelerate config: - compute_environment: LOCAL_MACHINE\r\n - distributed_type: NO\r\n - mixed_precision: fp16\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 1\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - gpu_ids: all\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n- PyTorch version (GPU?): 2.1.0+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nHere is the full traceback \r\nTraceback (most recent call last):\r\n File \"/home/ntq/nhanv/NLP/hf_trainer/NER/run_summarization_2.py\", line 788, in <module>\r\n main()\r\n File \"/home/ntq/nhanv/NLP/hf_trainer/NER/run_summarization_2.py\", line 760, in main\r\n writer.write(\"\\n\".join(predictions))\r\nUnicodeEncodeError: 'ascii' codec can't encode character '\\u2019' in position 292: ordinal not in range(128)", "Hi @huyhuyvu01 \r\n\r\nwe need a minimal (but self complete) reproducible code snippet to help.", "@ydshieh I train LED using the run_summarization.py examples using the below lines\r\nCUDA_VISIBLE_DEVICES=0 python3 run_summarization.py \\\r\n --model_name_or_path allenai/led-base-16384 \\\r\n --train_file train.json \\\r\n --validation_file test.json \\\r\n --test_file test.json \\\r\n --output_dir /hdd-6tb/llm_model/ouputs \\\r\n --do_train \\\r\n --do_predict \\\r\n --num_train_epochs 10 \\\r\n --source_prefix \"summarize: \" \\\r\n --text_column \"prompt\" \\\r\n --summary_column \"completion\" \\\r\n --per_device_train_batch_size=8 \\\r\n --per_device_eval_batch_size=8 \\\r\n --logging_strategy steps\\\r\n --evaluation_strategy epoch \\\r\n --save_strategy epoch \\\r\n --logging_steps 100 \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate\r\n ", "Could you provide the json files (even the fake, tiny versions), or replace them with some public datasets on the Hub?" ]
1,689
1,700
1,693
NONE
null
### System Info with open(output_prediction_file, "w", encoding="utf-8") as writer:: The script opens the output file in write mode with UTF-8 encoding to handle non-ASCII characters ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction with open(output_prediction_file, "w" as writer) UnicodeEncodeError: 'ascii' codec can't encode character '\xe2' in position 1710: ordinal not in range(128) ### Expected behavior It should generate predictions
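A minimal sketch of the fix the report describes: open the prediction file with an explicit UTF-8 encoding so non-ASCII characters (such as `\u2019`) can be written. The file name and predictions below are placeholders.

```python
# Hedged sketch of the one-line change; paths and data are illustrative.
output_prediction_file = "generated_predictions.txt"
predictions = ["example summary with a curly apostrophe \u2019"]

with open(output_prediction_file, "w", encoding="utf-8") as writer:
    writer.write("\n".join(predictions))
```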
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24967/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24966/comments
https://api.github.com/repos/huggingface/transformers/issues/24966/events
https://github.com/huggingface/transformers/pull/24966
1,814,983,062
PR_kwDOCUB6oc5WDi6z
24,966
🌐 [i18n-KO] Translated `perf_hardware.md` to Korean
{ "login": "augustinLib", "id": 74291999, "node_id": "MDQ6VXNlcjc0MjkxOTk5", "avatar_url": "https://avatars.githubusercontent.com/u/74291999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/augustinLib", "html_url": "https://github.com/augustinLib", "followers_url": "https://api.github.com/users/augustinLib/followers", "following_url": "https://api.github.com/users/augustinLib/following{/other_user}", "gists_url": "https://api.github.com/users/augustinLib/gists{/gist_id}", "starred_url": "https://api.github.com/users/augustinLib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/augustinLib/subscriptions", "organizations_url": "https://api.github.com/users/augustinLib/orgs", "repos_url": "https://api.github.com/users/augustinLib/repos", "events_url": "https://api.github.com/users/augustinLib/events{/privacy}", "received_events_url": "https://api.github.com/users/augustinLib/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "μœ„μ—μ„œ @0525hhgus λ‹˜μ˜ λŒ“κΈ€μ„ μ œμ™Έν•˜κ³ λŠ” λ³„λ„μ˜ 리뷰사항 μ—†μŠ΅λ‹ˆλ‹€ :) ", "> Looks good! Could you update the ## GPU [[gpu]] in the english version as well? Seems like it is not rendering properly, let's kill two birds with one stone!\r\n> \r\n> <img alt=\"image\" width=\"959\" src=\"https://user-images.githubusercontent.com/48595927/255879037-d22d9839-7263-47d6-988b-408ed3bec0aa.png\">\r\n\r\n@ArthurZucker \r\nI updated that issue! and also i could confirm [here](https://huggingface.co/docs/transformers/main/en/perf_hardware) that it worked successfully. \r\nThis part didn't work well in previous versions, but it works well in the current `main`. It was a problem that appeared because the newline was omitted in the markdown file\r\n\r\nThank you for your hard work and dedication, it is greatly appreciated." ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `perf_hardware.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) @0525hhgus, @Sunmin0520, @54data, @seank021, @kihoon71 <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @member1 @member2 ... --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24966/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24966/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24966", "html_url": "https://github.com/huggingface/transformers/pull/24966", "diff_url": "https://github.com/huggingface/transformers/pull/24966.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24966.patch", "merged_at": 1690285464000 }
https://api.github.com/repos/huggingface/transformers/issues/24965
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24965/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24965/comments
https://api.github.com/repos/huggingface/transformers/issues/24965/events
https://github.com/huggingface/transformers/issues/24965
1,814,934,313
I_kwDOCUB6oc5sLbMp
24,965
device_map='sequential' does not utilize gpu devices other than the first when running in 8bit and 4bit
{ "login": "Daryl149", "id": 6736668, "node_id": "MDQ6VXNlcjY3MzY2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6736668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daryl149", "html_url": "https://github.com/Daryl149", "followers_url": "https://api.github.com/users/Daryl149/followers", "following_url": "https://api.github.com/users/Daryl149/following{/other_user}", "gists_url": "https://api.github.com/users/Daryl149/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daryl149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daryl149/subscriptions", "organizations_url": "https://api.github.com/users/Daryl149/orgs", "repos_url": "https://api.github.com/users/Daryl149/repos", "events_url": "https://api.github.com/users/Daryl149/events{/privacy}", "received_events_url": "https://api.github.com/users/Daryl149/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Daryl149 \r\nThanks for your issue, per the documentation it says:\r\n\r\n> \"sequential\" will fit what it can on GPU 0, then move on GPU 1 and so forth (so won’t use the last GPUs if it doesn’t need to).\r\n\r\nMaybe there are some corner cases when using 8bit quantization, hence the OOM error. What is the GPU hardware you are using ? (with gpu vram)\r\nIndeed the canonical way to load a model and split it across all GPU evenly is to use device_map=auto", "Could be 8 bit related, also happens in 4bit. Will have to try unquantized.\r\nI have unequal memory on this particular setup, so `device_map=auto` is\r\nsuboptimal:\r\nFirst gpu is an A6000 (non Ada), 48GB.\r\nSecond gpu is an RTX 3090, 24GB. With `auto`, it only uses 24GB on both\r\ncards as expected (48GB combined) . Whereas the expected `sequential`\r\nbehavior would be perfect for this situation by filling up to the combined\r\n72GB.\r\n\r\nOn Fri, Jul 21, 2023, 08:46 Younes Belkada ***@***.***> wrote:\r\n\r\n> Hi @Daryl149 <https://github.com/Daryl149>\r\n> Thanks for your issue, per the documentation it says:\r\n>\r\n> \"sequential\" will fit what it can on GPU 0, then move on GPU 1 and so\r\n> forth (so won’t use the last GPUs if it doesn’t need to).\r\n>\r\n> Maybe there are some corner cases when using 8bit quantization, hence the\r\n> OOM error. What is the GPU hardware you are using ? (with gpu vram)\r\n> Indeed the canonical way to load a model and split it across all GPU\r\n> evenly is to use device_map=auto\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24965#issuecomment-1645064989>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABTMWHBJYDKPTVEFXZCMLLDXRIQV3ANCNFSM6AAAAAA2SC443A>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n\r\n\r\nUpdate:\r\nIt is definitely caused by setting the flag `load_in_8bit=True` (also occurs for `load_in_4bit=True`). When loading the model as such:\r\n```\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer\r\ntokenizer = LlamaTokenizer.from_pretrained(\"meta-llama/Llama-2-70b-chat-hf\", use_safetensors=True)\r\nmodel = LlamaForCausalLM.from_pretrained(\"meta-llama/Llama-2-70b-chat-hf\", device_map=\"sequential\", use_safetensors=True) \r\n\r\n```\r\nIt shows the expected behaviour for `sequential`. Unfortunately, this means that: \r\n- when running in 4 and 8 bit it goes OOM because it only utilizes the first gpu.\r\n- when running the model in full, it goes OOM because it is too big anyway for 2 GPUs.\r\n\r\nHmmm, I'll report this with bitsandbytes as well. I would not consider the 8bit and 4bit flags as corner cases anymore for transformers though.\r\n", "Any updates on this issue?", "cc @muellerzr, you might have an idea as to what is going on here. When porting llama2 I had also a similar bug, using `device_map = 'auto'` just produces random outputs. Reproducer is [here](https://github.com/ArthurZucker/transformers/blob/98c869f16df2f23a2fffed102864c901fd3b2702/tests/models/llama/test_modeling_llama.py#L440)", "cc @SunMarc for big model inference.", "I think the key is that 'auto' works but 'sequential' does not. This means it is definitely capable of spreading weights across multiple gpus in 8 and 4-bit, but the part that handles that for 'auto' is probably missing from the logic for 'sequential'.", "I was able to reproduce and I will work on fixing this. 
In the meantime, please use `max_memory` if you have a particular setup. " ]
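For readers hitting the same limitation, here is a minimal sketch of the `max_memory` workaround mentioned above, assuming the hardware described in the thread (a 48 GB GPU 0 and a 24 GB GPU 1); the exact budgets are placeholders you would tune to your own setup, and a `token=` argument may be needed for the gated checkpoint:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

model_name = "meta-llama/Llama-2-70b-chat-hf"  # same checkpoint as in the report

tokenizer = LlamaTokenizer.from_pretrained(model_name, use_safetensors=True)

# Cap how much of each device may be filled before spilling to the next one.
# Leaving some headroom below the physical VRAM avoids OOM during inference.
model = LlamaForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="sequential",
    max_memory={0: "46GiB", 1: "22GiB", "cpu": "64GiB"},
    use_safetensors=True,
)
```

The keys of `max_memory` are device indices (plus `"cpu"`), and the values are upper bounds rather than targets, so this reproduces the intended "fill GPU 0, then GPU 1" behaviour without relying on the buggy default.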
1,689
1,693
1,693
NONE
null
### System Info transformers==4.31.0 python==3.10.6 bitsandbytes==0.40.2 torch==2.0.1 Whenever I set the parameter `device_map='sequential'`, only the first gpu device is taken into account. For models that do not fit on the first gpu, the model returns a cuda OOM, as if only running on the first gpu, instead of spilling over to the second gpu. This is contrary to the expected behaviour as described in https://huggingface.co/docs/accelerate/usage_guides/big_modeling. It has happened for as long as I have been using it (last 3 months), so it is not just for the most recent version of transformers or torch, or bitsandbytes. I have both a cuda:0 and cuda:1 gpu correctly installed and recognized by torch and transformers. For example, when setting `device_map= 'auto'`, the model is split on both gpus equally, as expected. ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction example code: ``` from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf", use_safetensors=True) model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-70b-chat-hf", device_map="sequential", load_in_8bit=True, use_safetensors=True) ``` ### Expected behavior The model should be loaded correctly by spilling over the remaining layers to the second gpu, instead of OOM on just the first gpu.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24965/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24964/comments
https://api.github.com/repos/huggingface/transformers/issues/24964/events
https://github.com/huggingface/transformers/pull/24964
1,814,767,987
PR_kwDOCUB6oc5WCz7g
24,964
improve from_pretrained for zero3 multi gpus mode
{ "login": "1ytic", "id": 27285181, "node_id": "MDQ6VXNlcjI3Mjg1MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/27285181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1ytic", "html_url": "https://github.com/1ytic", "followers_url": "https://api.github.com/users/1ytic/followers", "following_url": "https://api.github.com/users/1ytic/following{/other_user}", "gists_url": "https://api.github.com/users/1ytic/gists{/gist_id}", "starred_url": "https://api.github.com/users/1ytic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/1ytic/subscriptions", "organizations_url": "https://api.github.com/users/1ytic/orgs", "repos_url": "https://api.github.com/users/1ytic/repos", "events_url": "https://api.github.com/users/1ytic/events{/privacy}", "received_events_url": "https://api.github.com/users/1ytic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for the PR, @1ytic \r\n\r\nThe idea is excellent, but we need to think about multiple use-cases here.\r\n\r\nwhat happens if zero3 is used, but not `zero.Init` - won't this code fail if you try to access any of the meta weights before `deepspeed.initialize` gets called at which point deepspeed will partition weights from rank 0 to other ranks. I'm flagging the issue of timing of when they are sharded\r\n\r\nUnfortunately, when I initially designed this I didn't think anybody would want to use zero-3 w/o `zero.Init` - so I lumped them together - @pacman100 improved upon it in Accelerate to have 2 separate possibilities - is zero3 and is zero.init enabled, so it gives a more refined control for such optimizations.\r\n\r\nSo let's wait for Sourab to weigh in.\r\n\r\nMeanwhile please test \r\n1. that this code works with pytorch-1.9 (I don't remember when `meta` was made to work)\r\n2. use `USE_SLOW=1 pytest tests/deepspeed` to do coverage testing, since the PR CI doesn't run deepspeed tests. ", "Thank you for feedback, @stas00 \r\n\r\nJust to clarify a little bit.\r\n\r\nMy changes effect only state_dict from checkpoints. The function `deepspeed.zero.Init()` doesn't care about state_dict. The real magic happened [here](https://github.com/huggingface/transformers/blob/9ef5256dfb28f5419649af8ea94e82573a6b2e77/src/transformers/modeling_utils.py#L539), when we exit from `deepspeed.zero.GatheredParameters()` context.\r\n\r\nI know `modelling_utils.py` is 4k lines monster and maybe I missed something, but seems like I effect only one scenario when we load checkpoints for already partitioned zero3 model. At least, I tested this scenario with 10GB checkpoint and 4 GPUs. I was able to decrease RAM consumption from 45GB to 17GB on single node.", "Exactly, the model instantiation is super complex, that's why I wrote the above.\r\n\r\nThe deepspeed integration test suite has a very high coverage so if you try running it and it succeeds then most likely it's all good. The size of the checkpoint doesn't matter for the purpose of accepting the PR, what matters is to ensure it doesn't break things.\r\n\r\nand btw, you actually don't need to even load the checkpoint if you're resuming from a deepspeed zero checkpoint. In another project I hacked to have the model created w/o loading the model and then just used deepspeed checkpoint loading directly, which should already be doing that efficiently, since each gpu will only read its own shard of weights.\r\n\r\nBut, alas, making it generic enough so that it'd satisfy everybody is very difficult, that's why the general case is to ensure ease of use out of the box often at the cost of slow startup and more memory consumption.\r\n\r\nIdeally the protocol should be like this:\r\n\r\n1. create a model on meta (~0 secs)\r\n2. load each shard into the gpu it belongs to (a few secs)\r\n\r\nthis should be extremely fast even for a huge model like BLOOM-176B\r\n\r\nIn the case of new training, there should be a way to pre-shard the model before loading it, so resume and new training will be identical model loading-wise. This is eventually will be done when universal checkpoint will be implemented for ZeRO (currently it's only available in Megatron-Deepspeed) https://github.com/microsoft/DeepSpeed/issues/2921\r\n\r\nSo lots and lots of things to improve there.\r\n\r\nAnd more things to fix on the deepspeed side, e.g. 
this is very wasteful https://github.com/microsoft/DeepSpeed/issues/1971", "so practically please run the integration tests I described in the first reply of mine and if possible with pytorch-1.9 (minimal supported pytorch version).", "Just to quickly chime int, the minimal version is actually 1.10 now ;-) The meta device is in 1.9+ so that shouldn't be an issue.", "Thank you for this insight, Sylvain.\r\n\r\nSo then any recent pt version should be ok to test with, @1ytic ", "Hello,\r\n\r\nThe trainer's behaviour isn't changed at all because the DeepSpeed config is still created using `HfTrainerDeepSpeedConfig` which sets the weakref `_hf_deepspeed_config_weak_ref` which is used in `is_deepspeed_zero3_enabled` to check if it is Stage-3. So, from trainer's perspective, this should work fine.\r\n\r\nFrom Accelerate's perspective, when user specifies `zero3_init_flag=False`, the weakref `_hf_deepspeed_config_weak_ref` isn't created and as such the `is_deepspeed_zero3_enabled` will return `False` even if it is using Stage-3 because the user doesn't want to use `deepspeed.zero.Init` context manager. So, in this case too, this PR should work fine as `map_location = \"cpu\"` due to absence of weakref.\r\n\r\nSo, the changes of this PR look good if all the slow tests pass.", "_The documentation is not available anymore as the PR was closed or merged._", "@stas00 you were right, I caught an uninitialized error while testing. After fixing the tests passed:\r\n\r\n`RUN_SLOW=1 pytest -rs tests/deepspeed/`\r\n\r\n```\r\n================================================= short test summary info ==================================================\r\nSKIPPED [1] tests/deepspeed/test_deepspeed.py:949: test requires bfloat16 hardware support\r\n================================= 108 passed, 1 skipped, 98 warnings in 2273.67s (0:37:53) =================================\r\n```\r\n\r\nI also added one more import. If it's too much, I can revert it.", "@tjruwase, please kindly have a look - do you see any problems to this approach of loading weights only on rank 0 and relying on partitioning to distribute the weights to the rest of the ranks under zero3? Could this somehow cause problem in the future?\r\n\r\nThe idea is to skip loading weights on all ranks but rank 0, since they will be discarded anyway.\r\n\r\nThank you!", "@1ytic, this is pretty neat. LGTM. Thanks! ", "> Thanks for your PR! Can we just leave `torch.distributed` as it was? `dist` is way less obvious as a name.\r\n\r\nI thought `dist` quite common name for `torch.distributed`, but up to you. I will rename it back.", "It is quite common and I would have no problem if this was in the Trainer file, but this file is not a distributed script and can be read by people less used to this. That's why it's better to spell it out IMO.", "Thanks for bearing with me!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24964). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# Decrease RAM consumption during Deepspeed Zero 3 model initialisation with multiple GPUs. This simple PR saves a lot of RAM in the multi-GPU ZeRO-3 DeepSpeed scenario. The idea is simple: we do not need to load the checkpoint on every rank, because `deepspeed.zero.GatheredParameters` will copy the weights from rank 0. ## Issues Related issue #12273 ## Who can review? - @stas00 - @pacman100
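A rough sketch of the pattern this PR relies on (an illustration, not the actual diff; the function and variable names are made up for the example): under ZeRO-3, only rank 0 needs to materialise the checkpoint tensors, because `deepspeed.zero.GatheredParameters(..., modifier_rank=0)` re-partitions and propagates the value written by rank 0 when the context exits, while the other ranks can skip reading the state dict entirely.

```python
import torch
import torch.distributed as dist
import deepspeed


def load_weight_zero3(param: torch.nn.Parameter, state_dict: dict, key: str) -> None:
    # Gather the ZeRO-3 partitioned parameter on all ranks. Only rank 0 writes
    # into it; the updated value is re-partitioned to every rank on exit.
    with deepspeed.zero.GatheredParameters([param], modifier_rank=0):
        if dist.get_rank() == 0:
            param.data.copy_(state_dict[key])
```

With this pattern, ranks other than 0 never need the full `state_dict` in host RAM, which is where the reported memory savings come from.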
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24964/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24964/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24964", "html_url": "https://github.com/huggingface/transformers/pull/24964", "diff_url": "https://github.com/huggingface/transformers/pull/24964.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24964.patch", "merged_at": 1689968368000 }
https://api.github.com/repos/huggingface/transformers/issues/24963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24963/comments
https://api.github.com/repos/huggingface/transformers/issues/24963/events
https://github.com/huggingface/transformers/pull/24963
1,814,744,305
PR_kwDOCUB6oc5WCurq
24,963
Remove tokenizers from the doc table
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? As discussed offline with @LysandreJik and @stas00, it doesn't really make sense to have the info on tokenizers in this table in the index, which: - makes it look like something is missing for vision models or speech models - is not accurate when a model reuses another model's tokenizer. This PR removes it and only leaves the frameworks supported by each model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24963/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24963", "html_url": "https://github.com/huggingface/transformers/pull/24963", "diff_url": "https://github.com/huggingface/transformers/pull/24963.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24963.patch", "merged_at": 1689946896000 }
https://api.github.com/repos/huggingface/transformers/issues/24962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24962/comments
https://api.github.com/repos/huggingface/transformers/issues/24962/events
https://github.com/huggingface/transformers/issues/24962
1,814,736,352
I_kwDOCUB6oc5sKq3g
24,962
Trainer bug > using dict (with single or multiple datasets) for eval_dataset arg throws error
{ "login": "lenn-arts", "id": 25841810, "node_id": "MDQ6VXNlcjI1ODQxODEw", "avatar_url": "https://avatars.githubusercontent.com/u/25841810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lenn-arts", "html_url": "https://github.com/lenn-arts", "followers_url": "https://api.github.com/users/lenn-arts/followers", "following_url": "https://api.github.com/users/lenn-arts/following{/other_user}", "gists_url": "https://api.github.com/users/lenn-arts/gists{/gist_id}", "starred_url": "https://api.github.com/users/lenn-arts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lenn-arts/subscriptions", "organizations_url": "https://api.github.com/users/lenn-arts/orgs", "repos_url": "https://api.github.com/users/lenn-arts/repos", "events_url": "https://api.github.com/users/lenn-arts/events{/privacy}", "received_events_url": "https://api.github.com/users/lenn-arts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like one of your datasets might not have labels, so the metrics returned are None, maybe?", "Thanks for the prompt answer! I don't think this is the problem for two reasons:\r\n1st) running \r\n```\r\neval_datasets[\"val\"].unique(\"labels\")\r\neval_datasets[\"test\"].unique(\"labels\")\r\n```\r\nreturns `[0,1]` for both eval datasets.\r\n\r\n2nd) When passing the datasets directly as dataset instances, I get metrics that are fine, so it seems it has to do with the fact of them together or individually being passed as dict type.", "Could you share a small reproducer of the issue I can execute?", "Hi @sgugger, please find a minimum version of the code below.\r\n\r\n```python\r\n# by Lennart Schulze, July 2023\r\n\r\n############## SETUP\r\nimport random\r\nimport numpy as np\r\nimport os\r\n\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.optim\r\n\r\nCACHE_ROOT = \"/.cache/hf/\"\r\n\r\nos.environ['TRANSFORMERS_CACHE'] = CACHE_ROOT\r\nos.environ['HF_DATASETS_CACHE'] = CACHE_ROOT\r\nimport transformers as hft\r\nimport datasets as hfds\r\nimport evaluate as hfev\r\n\r\ng = torch.Generator()\r\nSEED = 3\r\ndef seed(seed_num=SEED):\r\n torch.manual_seed(seed_num)\r\n random.seed(seed_num)\r\n np.random.seed(seed_num)\r\n g.manual_seed(seed_num)\r\nseed()\r\nDEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nprint(DEVICE)\r\n\r\n############## MODEL & TOKENIZER\r\ndef get_model():\r\n model_name = \"microsoft/deberta-v3-large\"\r\n\r\n tokenizer = hft.AutoTokenizer.from_pretrained(model_name)\r\n print(tokenizer)\r\n\r\n num_labels = 1\r\n model = hft.DebertaV2ForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)\r\n print(type(model))\r\n print(model.config.id2label)\r\n print(model)\r\n return model, tokenizer\r\n\r\n############## DATASET\r\n\r\ndef get_datasets(): \r\n cache_dir = os.path.join(CACHE_ROOT)\r\n datasets = hfds.load_dataset(\"rotten_tomatoes\", cache_dir=cache_dir)\r\n datasets[\"test\"] = hfds.load_dataset(\"rotten_tomatoes\", split=\"test\", cache_dir=cache_dir)\r\n \r\n print(datasets)\r\n print(type(datasets))\r\n print(datasets.keys())\r\n for key, ds in datasets.items():\r\n print(key, \":\", len(ds))\r\n return datasets\r\n\r\n# tokenize dataset & prepare for torch transformer\r\ndef tokenize_function(tokenizer, sample):\r\n return tokenizer(sample[\"text\"], padding=\"max_length\", truncation=True,\r\n return_tensors=\"pt\", max_length=128)\r\n\r\ndef preprocess_datasets(tokenizer, datasets):\r\n tokenize_fn = lambda sample: tokenize_function(tokenizer, sample)\r\n tokenized_datasets = datasets.map(tokenize_fn, batched=True) \r\n\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n new_features = tokenized_datasets[\"train\"].features.copy()\r\n new_features[\"labels\"] = hfds.ClassLabel(num_classes=2, names=[\"No\", \"Yes\"])\r\n tokenized_datasets = tokenized_datasets.cast(new_features)\r\n tokenized_datasets = tokenized_datasets.with_format(\"torch\")\r\n\r\n print(\"preprocess_datasets > train: \\t\", tokenized_datasets[\"train\"].features)\r\n print(\"preprocess_datasets > val: \\t\", tokenized_datasets[\"validation\"].features)\r\n print(\"preprocess_datasets > test: \\t\", tokenized_datasets[\"test\"].features)\r\n print(\"preprocess_datasets > val:\", tokenized_datasets[\"validation\"][0])\r\n print(\"preprocess_datasets > test:\", tokenized_datasets[\"test\"][0])\r\n\r\n return tokenized_datasets\r\n\r\n############## TRAINING\r\n\r\n# metric for trainer\r\ndef 
get_metric_fn():\r\n metric = hfev.load(\"accuracy\", cache_dir=CACHE_ROOT)\r\n def compute_metrics(eval_prediction):\r\n out = {}\r\n logits, gt_labels = eval_prediction\r\n predicted_labels = (torch.sigmoid(torch.tensor(logits))>=0.5).float().numpy()\r\n key = 0\r\n out[f\"ACC_{key}\"] = metric.compute(predictions=predicted_labels, references=gt_labels)[\"accuracy\"]\r\n return out\r\n return compute_metrics\r\n\r\n\r\nclass CustomTrainer(hft.Trainer):\r\n def __init__(self, **kwargs):\r\n super().__init__(**kwargs)\r\n\r\n def evaluate(self, **kwargs):\r\n print(\"\\n> EVALUATING >\\n\")\r\n super().evaluate(**kwargs)\r\n print(\"\\n< EVALUATING DONE <\\n\")\r\n\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n inputs[\"labels\"] = inputs[\"labels\"].float()\r\n labels = inputs.get(\"labels\")\r\n\r\n outputs = model(**inputs)\r\n logits = outputs.get(\"logits\")\r\n threshold = 0.5\r\n preds = (torch.sigmoid(logits) >= threshold).float()\r\n\r\n loss_kwargs = {}\r\n loss_fn = nn.BCEWithLogitsLoss(**loss_kwargs)\r\n loss = loss_fn(logits.view(-1, 1), labels.view(-1, 1))\r\n \r\n return (loss, outputs) if return_outputs==True else loss\r\n\r\n\r\ndef get_trainer(t_args, model, datasets, \r\n eval_ds=\"test\", train_size=5000, eval_size=50):\r\n metric_fn = get_metric_fn()\r\n\r\n sample_size_train = train_size if train_size!=\"all\" else len(datasets[\"train\"])\r\n if eval_size!=\"all\": \r\n sample_size_eval_val = eval_size\r\n sample_size_eval_test = sample_size_eval_val\r\n else:\r\n sample_size_eval_val = len(datasets[\"validation\"]) \r\n sample_size_eval_test = len(datasets[\"test\"])\r\n \r\n eval_datasets = {}\r\n if eval_ds!=None:\r\n eval_datasets = {\"test\": datasets[\"test\"].shuffle(seed=SEED).select(range(sample_size_eval_test)),\r\n \"val\": datasets[\"validation\"].shuffle(seed=SEED).select(range(sample_size_eval_val)), \r\n }\r\n print(\"get trainer > eval_datasets val \",eval_datasets[\"val\"][0])\r\n print(\"get trainer > eval_datasets test\",eval_datasets[\"test\"][0])\r\n print(\"get trainer > eval_datasets val \",eval_datasets[\"val\"].unique(\"labels\"))\r\n print(\"get trainer > eval_datasets test\",eval_datasets[\"test\"].unique(\"labels\"))\r\n if eval_ds==\"val\":\r\n ###del eval_datasets[\"test\"] # this won't work!\r\n eval_datasets = eval_datasets[\"val\"] # this will work!\r\n elif eval_ds==\"test\":\r\n del eval_datasets[\"val\"] # this won't work!\r\n #eval_datasets = eval_datasets[\"test\"] # this will work!\r\n \r\n trainer_constructor_kwargs = {\r\n \"model\":model,\r\n \"args\":t_args,\r\n \"train_dataset\":datasets[\"train\"].shuffle(seed=SEED).select(range(sample_size_train)),\r\n \"eval_dataset\":eval_datasets, \r\n \"compute_metrics\": metric_fn\r\n }\r\n trainer = CustomTrainer(\r\n **trainer_constructor_kwargs,\r\n )\r\n return trainer\r\n\r\n# training arguments for trainer\r\ndef get_trainer_args():\r\n output_dirs_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"../\")\r\n output_dir = \"chkpts\"\r\n\r\n t_args = hft.TrainingArguments(\r\n output_dir=os.path.join(output_dirs_root, output_dir),\r\n evaluation_strategy=\"steps\",\r\n logging_strategy=\"steps\",\r\n per_device_train_batch_size=4, \r\n per_device_eval_batch_size=4,\r\n gradient_accumulation_steps=1,\r\n eval_steps=1,\r\n logging_steps=1,\r\n num_train_epochs=1,\r\n learning_rate=1e-6,\r\n save_steps=1000,\r\n report_to=\"tensorboard\",\r\n )\r\n return t_args\r\n\r\n############## AUTOMATION\r\n\r\ndef run_experiment_(model=None, 
tokenizer=None, datasets=None):\r\n t_args = get_trainer_args()\r\n trainer = get_trainer(t_args, model, datasets)\r\n trainer.train()\r\n\r\ndef start_notebook():\r\n model, tokenizer = get_model()\r\n\r\n datasets = get_datasets()\r\n datasets = preprocess_datasets(tokenizer, datasets)\r\n\r\n return model, tokenizer, datasets\r\n\r\n\r\n############## EXPERIMENTS\r\nif __name__==\"__main__\":\r\n print(torch.version.cuda)\r\n print(torch.cuda.is_available())\r\n\r\n model, tokenizer, datasets = start_notebook() \r\n run_experiment_(model=model, datasets=datasets)\r\n``` \r\n\r\nThe key is the `get_trainer` function, where the key argument is `eval_ds` (try \"all\"/\"test\"/\"val\").\r\n\r\nPlease let me know in case of questions.", "In your code sample, your orverloaded `evaluate` function does not return anything, this is what causes the problem.\r\n```py \r\ndef evaluate(self, **kwargs):\r\n print(\"\\n> EVALUATING >\\n\")\r\n result = super().evaluate(**kwargs)\r\n print(\"\\n< EVALUATING DONE <\\n\")\r\n return result\r\n``` \r\nthis the issue on my side.", "This works, thank you very much! It did not cross my mind that the results need to be returned at this level because they were still printed. " ]
1,689
1,690
1,690
NONE
null
### System Info Using python 3.10, transformers 4.28.1 Dear team @sgugger , When I tried passing multiple datasets to the eval_dataset argument of the trainer using a dict structure as per the documentation, I experienced that only metrics on the first of the passed datasets get computed at the evaluation step in the training process. For the subsequent dataset, I receive the following error: Code: ``` eval_datasets = {"test": datasets["test"].shuffle(seed=SEED), "val": datasets["val"].shuffle(seed=SEED), } trainer = Trainer(model=model, args=t_args, train_dataset=datasets["train"], eval_dataset=eval_datasets, compute_metrics=metric_fn) ``` Error: ``` Traceback (most recent call last): ... File "src/code_main.py", line 767, in run_experiment_ trainer.train(resume_from_checkpoint=resume) File "lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train return inner_training_loop( File "lib/python3.10/site-packages/transformers/trainer.py", line 2006, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "lib/python3.10/site-packages/transformers/trainer.py", line 2285, in _maybe_log_save_evaluate metrics.update(dataset_metrics) TypeError: 'NoneType' object is not iterable ``` Notably, even if I only pass one dataset in the form of a dictionary as opposed to directly passing its dataset instance, the same error appears, without any metrics being reported back. I triple checked that all the datasets I pass are correct dataset instances with >0 samples. Any suggestions? Thanks! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction eval_datasets = {"test": datasets["test"].shuffle(seed=SEED), "val": datasets["val"].shuffle(seed=SEED), } trainer = Trainer(model=model, args=t_args, train_dataset=datasets["train"], eval_dataset=eval_datasets, compute_metrics=metric_fn) ### Expected behavior Metric function runs on both dataset instance values of the passed dict, one after the other. All metrics with the corresponding prefixes are reported back. No error is thrown.
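For reference, a minimal sketch of how a dict of evaluation datasets is meant to be wired into `Trainer`, using keyword arguments; `datasets`, `model` and `metric_fn` are assumed to be the objects built in the reproducer above, and the training arguments shown are placeholders:

```python
from transformers import Trainer, TrainingArguments

# One evaluation pass is run per dict key; metrics are reported with the key
# in the prefix (e.g. eval_val_* and eval_test_*).
eval_datasets = {
    "val": datasets["validation"],
    "test": datasets["test"],
}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", evaluation_strategy="steps", eval_steps=100),
    train_dataset=datasets["train"],
    eval_dataset=eval_datasets,
    compute_metrics=metric_fn,
)
trainer.train()
```

As the maintainer's answer below the report points out, any subclass that overrides `evaluate` must still return the metrics dict, otherwise the per-dataset loop in `_maybe_log_save_evaluate` receives `None` and fails with the `TypeError` shown above.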
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24962/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24961/comments
https://api.github.com/repos/huggingface/transformers/issues/24961/events
https://github.com/huggingface/transformers/issues/24961
1,814,710,715
I_kwDOCUB6oc5sKkm7
24,961
RuntimeError: mat1 and mat2 shapes cannot be multiplied - Llama-2-13b-chat-hf - v4.31.0
{ "login": "xhluca", "id": 21180505, "node_id": "MDQ6VXNlcjIxMTgwNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xhluca", "html_url": "https://github.com/xhluca", "followers_url": "https://api.github.com/users/xhluca/followers", "following_url": "https://api.github.com/users/xhluca/following{/other_user}", "gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}", "starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xhluca/subscriptions", "organizations_url": "https://api.github.com/users/xhluca/orgs", "repos_url": "https://api.github.com/users/xhluca/repos", "events_url": "https://api.github.com/users/xhluca/events{/privacy}", "received_events_url": "https://api.github.com/users/xhluca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Demo on kaggle: https://www.kaggle.com/xhlulu/error-transformers-v4-31-0-llama-2-13b-chat-hf", "Thanks for reporting! We'll take a look cc @ArthurZucker ", "@ArthurZucker The corresponding model still has `pretraining_tp=2`, this is why there is this issue.\r\n\r\n@xhluca While waiting for the model to be fixed, you can add `pretraining_tp=1` in your call to `LlamaForCausalLM.from_pretrained`, this should fix the issue.", "@sgugger this solution worked!", "Yes! Fixed in https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/commit/3a989db99aa8d9ef4cfd55f87521fc4c04891d3d ! This one slipped through the cracks", "What is the best way to approach this for the inference space? We dont have this issue on 4.30.1 but upgrading to 4.31 it begins causing issues on for example the Puffin model. Is it safe to always override this like the workaround suggests for any model or should I consider the models this happens on broken / will HF fix this in an update?", "You should just set `pretraining_tp = 1` for every model that you are using, it is totally safe! \r\nThe fix should probably come from the `bitsandbytes` library πŸ˜‰ an issue was opened here. ", "Good to know since I will ship that to thousands of users in production who can run any HF model in the future. So I wanted to be sure future models won't be broken. Ill begin shipping this workaround along with transformerd 4.31.", "I would like to ask, if pretraining_tp = 1, will the score be affected?", "Generation scores were not affected no, the logit precision can be ", "> Generation scores were not affected no, the logit precision can be\r\n\r\nThanks!\r\n" ]
1,689
1,694
1,689
CONTRIBUTOR
null
### System Info ``` Name: transformers Version: 4.31.0 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors) Author-email: [email protected] License: Apache 2.0 License Location: /opt/conda/lib/python3.10/site-packages Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm Required-by: --- Name: bitsandbytes Version: 0.40.2 Summary: k-bit optimizers and matrix multiplication routines. Home-page: https://github.com/TimDettmers/bitsandbytes Author: Tim Dettmers Author-email: [email protected] License: MIT Location: /opt/conda/lib/python3.10/site-packages Requires: Required-by: --- Name: accelerate Version: 0.21.0 Summary: Accelerate Home-page: https://github.com/huggingface/accelerate Author: The HuggingFace team Author-email: [email protected] License: Apache Location: /opt/conda/lib/python3.10/site-packages Requires: numpy, packaging, psutil, pyyaml, torch Required-by: catalyst --- Name: torch Version: 2.0.0 Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration Home-page: https://pytorch.org/ Author: PyTorch Team Author-email: [email protected] License: BSD-3 Location: /opt/conda/lib/python3.10/site-packages Requires: filelock, jinja2, networkx, sympy, typing-extensions Required-by: accelerate, catalyst, easyocr, fastai, kornia, pytorch-ignite, pytorch-lightning, timm, torchaudio, torchdata, torchmetrics, torchtext, torchvision ``` ### Who can help? @ArthurZucker @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` wget https://raw.githubusercontent.com/xhluca/llama-2-local-ui/main/requirements.txt pip install -r requirements.txt -q pip show transformers bitsandbytes accelerate torch ``` in python: ```python from transformers import LlamaForCausalLM, LlamaTokenizer import textwrap def format_prompt(history, message, system_prompt): B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" prompt = f"{B_INST} {B_SYS}{system_prompt}{E_SYS} " for user_msg, asst_msg in history: user_msg = str(user_msg).strip() asst_msg = str(asst_msg).strip() prompt += f"{user_msg} {E_INST} {asst_msg} </s><s> {B_INST} " message = str(message).strip() prompt += f"{message} {E_INST} " return prompt SYSTEM_PROMPT = """\ You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
If you don't know the answer to a question, please don't share false information.""" SYSTEM_PROMPT = textwrap.dedent(SYSTEM_PROMPT).strip() model_name = "meta-llama/Llama-2-13b-chat-hf" model = LlamaForCausalLM.from_pretrained( model_name, token=auth_token, load_in_4bit=True, device_map="auto" ).eval() tokenizer = LlamaTokenizer.from_pretrained(model_name, token=auth_token) prompt = format_prompt(history=[], message="What is a llama?", system_prompt=SYSTEM_PROMPT) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) max_gen_len = 4096 temperature = 0.6 top_p = 0.9 out = model.generate( **inputs, max_new_tokens=max_gen_len, temperature=temperature, top_p=top_p, ) ``` ### Expected behavior Should output something, instead getting this error: ``` /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1270: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation ) warnings.warn( --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[11], line 1 ----> 1 out = model.generate( 2 **inputs, 3 max_new_tokens=max_gen_len, 4 temperature=temperature, 5 top_p=top_p, 6 ) File /opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1538, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1532 raise ValueError( 1533 "num_return_sequences has to be 1 when doing greedy search, " 1534 f"but is {generation_config.num_return_sequences}." 1535 ) 1537 # 11. 
run greedy search -> 1538 return self.greedy_search( 1539 input_ids, 1540 logits_processor=logits_processor, 1541 stopping_criteria=stopping_criteria, 1542 pad_token_id=generation_config.pad_token_id, 1543 eos_token_id=generation_config.eos_token_id, 1544 output_scores=generation_config.output_scores, 1545 return_dict_in_generate=generation_config.return_dict_in_generate, 1546 synced_gpus=synced_gpus, 1547 streamer=streamer, 1548 **model_kwargs, 1549 ) 1551 elif is_contrastive_search_gen_mode: 1552 if generation_config.num_return_sequences > 1: File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:2362, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2359 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2361 # forward pass to get next token -> 2362 outputs = self( 2363 **model_inputs, 2364 return_dict=True, 2365 output_attentions=output_attentions, 2366 output_hidden_states=output_hidden_states, 2367 ) 2369 if synced_gpus and this_peer_finished: 2370 continue # don't waste resources running the code we don't need File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:806, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 803 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 805 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) --> 806 outputs = self.model( 807 input_ids=input_ids, 808 attention_mask=attention_mask, 809 position_ids=position_ids, 810 past_key_values=past_key_values, 811 inputs_embeds=inputs_embeds, 812 use_cache=use_cache, 813 output_attentions=output_attentions, 814 output_hidden_states=output_hidden_states, 815 return_dict=return_dict, 816 ) 818 hidden_states = outputs[0] 819 if self.pretraining_tp > 1: File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:693, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 685 layer_outputs = torch.utils.checkpoint.checkpoint( 686 create_custom_forward(decoder_layer), 687 hidden_states, (...) 690 None, 691 ) 692 else: --> 693 layer_outputs = decoder_layer( 694 hidden_states, 695 attention_mask=attention_mask, 696 position_ids=position_ids, 697 past_key_value=past_key_value, 698 output_attentions=output_attentions, 699 use_cache=use_cache, 700 ) 702 hidden_states = layer_outputs[0] 704 if use_cache: File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:408, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 405 hidden_states = self.input_layernorm(hidden_states) 407 # Self Attention --> 408 hidden_states, self_attn_weights, present_key_value = self.self_attn( 409 hidden_states=hidden_states, 410 attention_mask=attention_mask, 411 position_ids=position_ids, 412 past_key_value=past_key_value, 413 output_attentions=output_attentions, 414 use_cache=use_cache, 415 ) 416 hidden_states = residual + hidden_states 418 # Fully Connected File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:295, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache) 292 key_slices = self.k_proj.weight.split(key_value_slicing, dim=0) 293 value_slices = self.v_proj.weight.split(key_value_slicing, dim=0) --> 295 query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)] 296 query_states = torch.cat(query_states, dim=-1) 298 key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.pretraining_tp)] File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:295, in <listcomp>(.0) 292 key_slices = self.k_proj.weight.split(key_value_slicing, dim=0) 293 value_slices = self.v_proj.weight.split(key_value_slicing, dim=0) --> 295 query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)] 296 query_states = torch.cat(query_states, dim=-1) 298 key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.pretraining_tp)] RuntimeError: mat1 and mat2 shapes cannot be multiplied (145x5120 and 1x2560) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24961/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24960
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24960/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24960/comments
https://api.github.com/repos/huggingface/transformers/issues/24960/events
https://github.com/huggingface/transformers/pull/24960
1,814,702,988
PR_kwDOCUB6oc5WClYa
24,960
Avoid importing all models when instantiating a pipeline
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "It works too, but I think with the design selected for the PEFT integration in `from_pretrained` (which will supercede #24827), we won't need to remove them. I think it's a useful warning to have when users try to instantiate a pipeline with a model not suitable for it." ]
1,689
1,692
1,689
COLLABORATOR
null
# What does this PR do? The check on the model being in the right mapping for the pipeline requires the import of all models associated with that pipeline. This is because it uses the `MODEL_XXX_MAPPING` instead of the `MODEL_XXX_MAPPING_NAMES`. This PR suggests switching to the latter, since the check is done on the model name anyway, to avoid weird warnings irrelevant to the user (for instance #24903 gives one example; I also end up having some logs of a CUDA kernel being built on my side). The only feature of the `MODEL_XXX_MAPPING` we need to keep compared to `MODEL_XXX_MAPPING_NAMES` is the extra content (coming from models on the Hub), so this PR adds a reference from the mapping names to the mapping. Fixes #24903
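To illustrate the difference the PR description refers to (a rough sketch, not the actual diff): the pipeline check can compare plain class-name strings from a `*_MAPPING_NAMES` dict, which avoids importing every model class the way the instantiated `*_MAPPING` does. The helper name below is made up, and note that a few auto mappings store tuples of names rather than single strings, so a general version would flatten the values first.

```python
from transformers.models.auto.modeling_auto import (
    MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES,
)


def supports_sequence_classification(model) -> bool:
    # Only strings are compared here, so no model class gets imported
    # as a side effect of the membership check.
    supported = set(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES.values())
    return type(model).__name__ in supported
```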
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24960/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24960", "html_url": "https://github.com/huggingface/transformers/pull/24960", "diff_url": "https://github.com/huggingface/transformers/pull/24960.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24960.patch", "merged_at": 1689946917000 }
https://api.github.com/repos/huggingface/transformers/issues/24959
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24959/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24959/comments
https://api.github.com/repos/huggingface/transformers/issues/24959/events
https://github.com/huggingface/transformers/issues/24959
1,814,491,051
I_kwDOCUB6oc5sJu-r
24,959
Whisper not returning last phrase from audio >25s
{ "login": "pli66", "id": 26721632, "node_id": "MDQ6VXNlcjI2NzIxNjMy", "avatar_url": "https://avatars.githubusercontent.com/u/26721632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pli66", "html_url": "https://github.com/pli66", "followers_url": "https://api.github.com/users/pli66/followers", "following_url": "https://api.github.com/users/pli66/following{/other_user}", "gists_url": "https://api.github.com/users/pli66/gists{/gist_id}", "starred_url": "https://api.github.com/users/pli66/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pli66/subscriptions", "organizations_url": "https://api.github.com/users/pli66/orgs", "repos_url": "https://api.github.com/users/pli66/repos", "events_url": "https://api.github.com/users/pli66/events{/privacy}", "received_events_url": "https://api.github.com/users/pli66/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the examples - they suggest that the model is predicting the EOS token too early within the 30s segment. Having listened to the audio, I think this is because the last sentence does not finish completely within the 30s window, so the Whisper model decides to ignore it.\r\n\r\nWhat we can do is feed a larger context length to the model, so it can handle audios of arbitrary length. The last sentence now finishes within a context window, so Whisper now predicts the EOS token after this sentence. Let's also use the English variant of the medium checkpoint if working with the English language (better performance vs the multilingual one for English):\r\n```python\r\nfrom transformers import pipeline\r\nimport torch\r\n\r\nurl = \"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3\"\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\npipe = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-medium.en\", device=device)\r\n\r\nprint(pipe(url, return_timestamps=True, chunk_length_s=30.0))\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Should be fixed by the above comment. Feel free to re-open @pli66 if you're still experiencing issues!" ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.226-129.415.amzn2.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes, don't know why pytorch was not detected as it is using it for sure - Using distributed or parallel set-up in script?: no ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Whisper consistently does not return the last phrase in a transcription for clips >25s. "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3" is a 30s clip "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed7.mp3" is the first half of above clip, 15s. "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed8.mp3" is the second half of above clip, 15s. ``` import whisper import torch import tempfile import requests import os from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer from datasets import load_dataset, Dataset, Audio VERSION = "medium" def download_file(url): """ Download a file frome a url and save it to a named temporary file """ response = requests.get(url) temp = tempfile.NamedTemporaryFile(delete=True, dir=os.getcwd(), mode='w+b') temp.write(response.content) temp.seek(0) return temp urls = ["https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3", "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed7.mp3", "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed8.mp3"] processor = WhisperProcessor.from_pretrained(f"openai/whisper-{VERSION}") generator = WhisperForConditionalGeneration.from_pretrained(f"openai/whisper-{VERSION}").to("cuda") generator.eval() tokenizer = WhisperTokenizer.from_pretrained(f"openai/whisper-{VERSION}") for url in urls: with download_file(url) as f: audio = whisper.load_audio(f.name) processed = processor(audio, sampling_rate=16000, return_tensors="pt", return_attention_mask=True).to("cuda") generated = generator.generate(processed.input_features, attention_mask=processed.attention_mask, return_timestamps=True, output_scores=True, return_dict_in_generate=True, temperature=0, repetition_penalty=8.0, num_beams=2) predicted_ids = generated[0].to("cpu") batch_transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False, decode_with_timestamps=True, output_offsets=True) print(batch_transcription) ``` [{'text': '<|startoftranscript|><|en|><|transcribe|><|0.00|> A Popular Personage at Home by Thomas Hardy.<|12.36|><|12.36|> I live here, Wessex is my nameβ€”I am a dog known rather well.<|19.12|><|19.12|> I guard the house, but how that climb to be my whim I cannot tell.<|25.40|><|25.40|><|endoftext|>', 'offsets': [{'text': ' A Popular Personage at Home by Thomas Hardy.', 'timestamp': (0.0, 12.36)}, {'text': ' I live here, Wessex is my nameβ€”I am a dog known rather well.', 'timestamp': (12.36, 19.12)}, {'text': ' I guard the house, but how that climb to be my whim I cannot tell.', 'timestamp': (19.12, 25.400000000000002)}]}] [{'text': 
'<|startoftranscript|><|en|><|transcribe|><|0.00|> A Popular Personage at Home by Thomas Hardy.<|4.76|><|4.76|> Read for LibriVox.org by Anita Hibbard, January 23rd, 2023.<|11.84|><|11.84|> I live here.<|14.04|><|14.04|> Essex is my name.<|15.04|><|endoftext|>', 'offsets': [{'text': ' A Popular Personage at Home by Thomas Hardy.', 'timestamp': (0.0, 4.76)}, {'text': ' Read for LibriVox.org by Anita Hibbard, January 23rd, 2023.', 'timestamp': (4.76, 11.84)}, {'text': ' I live here.', 'timestamp': (11.84, 14.040000000000001)}, {'text': ' Essex is my name.', 'timestamp': (14.040000000000001, 15.040000000000001)}]}] [{'text': '<|startoftranscript|><|en|><|transcribe|><|0.00|> name. I am a dog known rather well. I guard the house, but how that climb to be my whim<|7.76|>**<|7.76|> I cannot tell. With a leap and a heart elate I go at the end of an hour\'s expectancy."<|15.04|><|endoftext|>**', 'offsets': [{'text': ' name. I am a dog known rather well. I guard the house, but how that climb to be my whim', 'timestamp': (0.0, 7.76)}, {'text': ' I cannot tell. With a leap and a heart elate I go at the end of an hour\'s expectancy."', 'timestamp': (7.76, 15.040000000000001)}]}] Observe that the last phrase is caught by the second half 15s clip but not the 30s one. This behavior is also observed in pipeline. (The first half 15s clip is repeating oddly without the beam search but that is a different issue). ``` from transformers import pipeline pipe = pipeline(task="automatic-speech-recognition", model="openai/whisper-medium") for url in urls: print(pipe(url, return_timestamps=True)) ``` {'text': ' A Popular Personage at Home by Thomas Hardy. I live here. Wessex is my name. I am a dog known rather well. I guard the house, but how that climb to be my whim I cannot tell.', 'chunks': [{'timestamp': (0.0, 12.36), 'text': ' A Popular Personage at Home by Thomas Hardy.'}, {'timestamp': (12.36, 14.08), 'text': ' I live here.'}, {'timestamp': (14.08, 16.2), 'text': ' Wessex is my name.'}, {'timestamp': (16.2, 19.12), 'text': ' I am a dog known rather well.'}, {'timestamp': (19.12, 25.4), 'text': ' I guard the house, but how that climb to be my whim I cannot tell.'}]} {'text': ' A Popular Personage at Home by Thomas Hardy. Read for LibriVox.org by Anita Hibbard. January 23, 2023. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. 
Essex is my name.', 'chunks': [{'timestamp': (0.0, 4.76), 'text': ' A Popular Personage at Home by Thomas Hardy.'}, {'timestamp': (4.76, 8.68), 'text': ' Read for LibriVox.org by Anita Hibbard.'}, {'timestamp': (8.68, 11.24), 'text': ' January 23, 2023.'}, {'timestamp': (11.24, 14.04), 'text': ' I live here.'}, {'timestamp': (14.04, 15.04), 'text': ' Essex is my name.'}, {'timestamp': (15.04, 16.04), 'text': ' I live here.'}, {'timestamp': (16.04, 17.04), 'text': ' Essex is my name.'}, {'timestamp': (17.04, 18.04), 'text': ' I live here.'}, {'timestamp': (18.04, 19.04), 'text': ' Essex is my name.'}, {'timestamp': (19.04, 20.04), 'text': ' I live here.'}, {'timestamp': (20.04, 21.04), 'text': ' Essex is my name.'}, {'timestamp': (21.04, 22.04), 'text': ' I live here.'}, {'timestamp': (22.04, 23.04), 'text': ' Essex is my name.'}, {'timestamp': (23.04, 24.04), 'text': ' I live here.'}, {'timestamp': (24.04, 25.04), 'text': ' Essex is my name.'}, {'timestamp': (25.04, 26.04), 'text': ' I live here.'}, {'timestamp': (26.04, 27.04), 'text': ' Essex is my name.'}, {'timestamp': (27.04, 28.04), 'text': ' I live here.'}, {'timestamp': (28.04, 29.04), 'text': ' Essex is my name.'}]} {'text': " name. I am a dog known rather well. I guard the house, but how that climb to be my whim I cannot tell. With a leap and a heart elate I go, at the end of an hour's expectancy.", 'chunks': [{'timestamp': (0.0, 7.76), 'text': ' name. I am a dog known rather well. I guard the house, but how that climb to be my whim'}, {'timestamp': (7.76, 15.04), 'text': " I cannot tell. With a leap and a heart elate I go, at the end of an hour's expectancy."}]} None of the configuration settings I change seem to affect this, including setting early_stopping to "never". This also seems to be a different issue to https://github.com/huggingface/transformers/issues/23231 as the last segment does not show up in the regular decoding output or the offsets. ### Expected behavior The last segment should be returned for 30s clips. A workaround would be splitting >25s clips in half but that is not so ideal since it impacts both performance and accuracy.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24959/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24958
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24958/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24958/comments
https://api.github.com/repos/huggingface/transformers/issues/24958/events
https://github.com/huggingface/transformers/pull/24958
1,814,444,504
PR_kwDOCUB6oc5WBslb
24,958
[`LlamaConfig`] Nit: pad token should be None by default
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Does not set the padding token, as `0` is reserved for the `unk_token`. Though Llama has byte-fallback support, meaning the unknown token will rarely come from the input, the model can still generate the `<unk>` token (probably very rarely). The pad token is used in the embedding, and the unk embedding is not None. This should warn users who are doing `SequenceClassification` that the padding token is not set.
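A minimal sketch of the behaviour this PR targets, assuming a transformers version that includes the change (the comment about the previous default is an assumption based on the PR description, not something verified here):

```python
from transformers import LlamaConfig

# Token id 0 belongs to <unk> in the Llama tokenizer, so it should not
# silently double as the padding token.
config = LlamaConfig()

# With this change the padding token is simply left unset, so downstream
# SequenceClassification users get an explicit warning instead of padding
# with the <unk> embedding.
print(config.pad_token_id)  # expected: None (previously defaulted to 0)
```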
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24958/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24958", "html_url": "https://github.com/huggingface/transformers/pull/24958", "diff_url": "https://github.com/huggingface/transformers/pull/24958.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24958.patch", "merged_at": 1689942754000 }
https://api.github.com/repos/huggingface/transformers/issues/24957
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24957/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24957/comments
https://api.github.com/repos/huggingface/transformers/issues/24957/events
https://github.com/huggingface/transformers/pull/24957
1,814,434,802
PR_kwDOCUB6oc5WBqeB
24,957
🌐 [i18n-KO] Translated `add_new_model.md` to Korean
{ "login": "mjk0618", "id": 39152134, "node_id": "MDQ6VXNlcjM5MTUyMTM0", "avatar_url": "https://avatars.githubusercontent.com/u/39152134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mjk0618", "html_url": "https://github.com/mjk0618", "followers_url": "https://api.github.com/users/mjk0618/followers", "following_url": "https://api.github.com/users/mjk0618/following{/other_user}", "gists_url": "https://api.github.com/users/mjk0618/gists{/gist_id}", "starred_url": "https://api.github.com/users/mjk0618/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjk0618/subscriptions", "organizations_url": "https://api.github.com/users/mjk0618/orgs", "repos_url": "https://api.github.com/users/mjk0618/repos", "events_url": "https://api.github.com/users/mjk0618/events{/privacy}", "received_events_url": "https://api.github.com/users/mjk0618/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @sgugger , may you please approve the workflow for this PR?\n\nI ran doc-builder locally, and it looks OK. We just want to see the live-preview to double check.\n\nThank you so much for your support.\nHope you have a great day! πŸ‘ ", "_The documentation is not available anymore as the PR was closed or merged._", "May you please review this PR? @sgugger, @ArthurZucker, @eunseojo" ]
1,689
1,691
1,691
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€! --> # What does this PR do? Translated the `add_new_model.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ 뒀에, 이 μ•„λž˜μ— 리뷰λ₯Ό μš”μ²­ν•  νŒ€μ›λ“€μ„ λ©˜μ…˜ν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? @member1 @member2 ... --> May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @mjk0618, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- 2. νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24957/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24957", "html_url": "https://github.com/huggingface/transformers/pull/24957", "diff_url": "https://github.com/huggingface/transformers/pull/24957.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24957.patch", "merged_at": 1691598270000 }
https://api.github.com/repos/huggingface/transformers/issues/24956
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24956/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24956/comments
https://api.github.com/repos/huggingface/transformers/issues/24956/events
https://github.com/huggingface/transformers/pull/24956
1,814,366,838
PR_kwDOCUB6oc5WBbZ2
24,956
Change logic for logging in the examples
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24956). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Changes logic to check if we are doing distributed training by reading the `parallel_mode` instead, since `local_rank` will always (for the most part) be `>-1` with Accelerate now Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24924 Verified that it now shows as "distributed" when using multi-gpu, and `False` if not. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
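A rough sketch of the distinction this PR relies on; the variable names are mine and this is not the exact diff, only an illustration of checking `parallel_mode` instead of `local_rank`:

```python
from transformers import TrainingArguments
from transformers.training_args import ParallelMode

training_args = TrainingArguments(output_dir="tmp_out")

# Old-style check used by the example scripts: with the Accelerate-backed
# Trainer, local_rank tends to be set even for single-process runs, so this
# can report "distributed" when it should not.
old_is_distributed = training_args.local_rank != -1

# New-style check: ask TrainingArguments which parallel mode it resolved to.
new_is_distributed = training_args.parallel_mode == ParallelMode.DISTRIBUTED

print(old_is_distributed, new_is_distributed)
```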
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24956/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24956", "html_url": "https://github.com/huggingface/transformers/pull/24956", "diff_url": "https://github.com/huggingface/transformers/pull/24956.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24956.patch", "merged_at": 1689870611000 }
https://api.github.com/repos/huggingface/transformers/issues/24955
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24955/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24955/comments
https://api.github.com/repos/huggingface/transformers/issues/24955/events
https://github.com/huggingface/transformers/pull/24955
1,814,306,233
PR_kwDOCUB6oc5WBOI4
24,955
[`RWKV`] Add Gradient Checkpointing support for RWKV
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This snippet worked on my end:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import RwkvForCausalLM\r\n\r\nmodel = RwkvForCausalLM.from_pretrained(\"RWKV/rwkv-4-169m-pile\").to(0)\r\nmodel.train()\r\nmodel.gradient_checkpointing_enable()\r\n\r\ndummy_input = torch.LongTensor([[1, 2, 3, 4]]).to(0)\r\n\r\noutputs = model(dummy_input)\r\nlogits = outputs.logits\r\n\r\nloss = logits.mean()\r\nloss.backward()\r\n```\r\nI will assume things are working (+ the CI should be triggered once the model has `supports_gradient_checkpointing = True`) , merging! " ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/24831 As stated by the issue above, technically there is no reason to not support GC for RWKV model cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24955/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24955", "html_url": "https://github.com/huggingface/transformers/pull/24955", "diff_url": "https://github.com/huggingface/transformers/pull/24955.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24955.patch", "merged_at": 1689870563000 }
https://api.github.com/repos/huggingface/transformers/issues/24954
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24954/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24954/comments
https://api.github.com/repos/huggingface/transformers/issues/24954/events
https://github.com/huggingface/transformers/pull/24954
1,814,279,161
PR_kwDOCUB6oc5WBIC8
24,954
Bump aiohttp from 3.8.1 to 3.8.5 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.1 to 3.8.5. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p> <blockquote> <h2>3.8.5</h2> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code> and :user:<code>Dreamsorcerer</code>.</p> <p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with comprehensive reproducer, workarounds and fixing details! For more information, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p> <p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>)</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>)</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li> <p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/3355">#3355</a>)</p> </li> </ul> <hr /> <h2>3.8.4</h2> <h2>Bugfixes</h2> <ul> <li>Fixed incorrectly overwriting cookies with the same name and domain, but different path. (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/6638">#6638</a>)</li> <li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments. (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7180">#7180</a>)</li> </ul> <hr /> <h2>3.8.3</h2> <p>.. attention::</p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst">aiohttp's changelog</a>.</em></p> <blockquote> <h1>3.8.5 (2023-07-19)</h1> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code> and :user:<code>Dreamsorcerer</code>.</p> <p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with comprehensive reproducer, workarounds and fixing details! For more information, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p> <p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p><code>[#7346](https://github.com/aio-libs/aiohttp/issues/7346) &lt;https://github.com/aio-libs/aiohttp/issues/7346&gt;</code>_</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Added information to C parser exceptions to show which character caused the error. 
-- by :user:<code>Dreamsorcerer</code></p> <p><code>[#7366](https://github.com/aio-libs/aiohttp/issues/7366) &lt;https://github.com/aio-libs/aiohttp/issues/7366&gt;</code>_</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li> <p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p> <p><code>[#3355](https://github.com/aio-libs/aiohttp/issues/3355) &lt;https://github.com/aio-libs/aiohttp/issues/3355&gt;</code>_</p> </li> </ul> <hr /> <h1>3.8.4 (2023-02-12)</h1> <h2>Bugfixes</h2> <ul> <li>Fixed incorrectly overwriting cookies with the same name and domain, but different path. <code>[#6638](https://github.com/aio-libs/aiohttp/issues/6638) &lt;https://github.com/aio-libs/aiohttp/issues/6638&gt;</code>_</li> <li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments. <code>[#7180](https://github.com/aio-libs/aiohttp/issues/7180) &lt;https://github.com/aio-libs/aiohttp/issues/7180&gt;</code>_</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/aio-libs/aiohttp/commit/9c13a52c21c23dfdb49ed89418d28a5b116d0681"><code>9c13a52</code></a> Bump aiohttp to v3.8.5 a security release</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/7c02129567bc4ec59be467b70fc937c82920948c"><code>7c02129</code></a> ο£” Bump pypa/cibuildwheel to v2.14.1</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/135a45e9d655d56e4ebad78abe84f1cb7b5c62dc"><code>135a45e</code></a> Improve error messages from C parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7380">#7380</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40"><code>9337fb3</code></a> Fix bump llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7367">#7367</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7377">#7377</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/f07e9b44b5cb909054a697c8dd447b30dbf8073e"><code>f07e9b4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7373">#7373</a>/66e261a5 backport][3.8] Drop azure mention (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7374">#7374</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/01d9b70e5477cd746561b52225992d8a2ebde953"><code>01d9b70</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7370">#7370</a>/22c264ce backport][3.8] fix: Spelling error fixed (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7371">#7371</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/3577b1e3719d4648fa973dbdec927f78f9df34dd"><code>3577b1e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7359">#7359</a>/7911f1e9 backport][3.8] ο£” Set up secretless publishing to PyPI (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7360">#7360</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/8d45f9c99511cd80140d6658bd9c11002c697f1c"><code>8d45f9c</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7333">#7333</a>/3a54d378 backport][3.8] Fix TLS transport is <code>None</code> error (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7357">#7357</a>)</li> <li><a 
href="https://github.com/aio-libs/aiohttp/commit/dd8e24e77351df9c0f029be49d3c6d7862706e79"><code>dd8e24e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7343">#7343</a>/18057581 backport][3.8] Mention encoding in <code>yarl.URL</code> (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7355">#7355</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/40874103ebfaa1007d47c25ecc4288af873a07cf"><code>4087410</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>/346fd202 backport][3.8] ο£” Bump vendored llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7352">#7352</a>)</li> <li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.1...v3.8.5">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.8.1&new-version=3.8.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24954/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24954", "html_url": "https://github.com/huggingface/transformers/pull/24954", "diff_url": "https://github.com/huggingface/transformers/pull/24954.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24954.patch", "merged_at": 1689869859000 }
https://api.github.com/repos/huggingface/transformers/issues/24953
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24953/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24953/comments
https://api.github.com/repos/huggingface/transformers/issues/24953/events
https://github.com/huggingface/transformers/pull/24953
1,814,156,868
PR_kwDOCUB6oc5WAswF
24,953
🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean
{ "login": "keonju2", "id": 54880474, "node_id": "MDQ6VXNlcjU0ODgwNDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54880474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keonju2", "html_url": "https://github.com/keonju2", "followers_url": "https://api.github.com/users/keonju2/followers", "following_url": "https://api.github.com/users/keonju2/following{/other_user}", "gists_url": "https://api.github.com/users/keonju2/gists{/gist_id}", "starred_url": "https://api.github.com/users/keonju2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keonju2/subscriptions", "organizations_url": "https://api.github.com/users/keonju2/orgs", "repos_url": "https://api.github.com/users/keonju2/repos", "events_url": "https://api.github.com/users/keonju2/events{/privacy}", "received_events_url": "https://api.github.com/users/keonju2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please stop opening and closing the same PR like this." ]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹€ --> # What does this PR do? Translated the `add_tensorflow_model.md` file of the documentation to Korean πŸ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) <!-- 1. μœ„ 체크가 λͺ¨λ‘ μ™„λ£Œλœ λ’€μ—λ§Œ κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- Team PseudoLab, may you please review this PR? --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- May you please review this PR? -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24953/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24953", "html_url": "https://github.com/huggingface/transformers/pull/24953", "diff_url": "https://github.com/huggingface/transformers/pull/24953.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24953.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24952
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24952/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24952/comments
https://api.github.com/repos/huggingface/transformers/issues/24952/events
https://github.com/huggingface/transformers/pull/24952
1,814,103,275
PR_kwDOCUB6oc5WAg8t
24,952
Add Text-To-Speech pipeline
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @Narsil , thanks for your fast review!\r\nBasically, I will refactor my code to meet your expectations !\r\nThere are still 2 things I'd like to discuss before and that I talked about in the comments:\r\n\r\n1. **`speechT5` specific case:** `speechT5` was introduced 5 months ago, and has two issues - it uses a `.generate_speech` method instead of a `.generate`, and it needs an additional vocoder on top in order to actually produce audio signals instead of a spectrogram. What's the best way to stay consistent with it and with the pipeline logic? Should I still introduce model specific code or should I work on modifying `speechT5` instead? Modying `speechT5` might be problematic since it was the first TTS model introduced so users might be used to its API and because it would leave `BarkModel` has the only TTS model supported in the pipeline for a short time\r\n2. **`speaker_embeddings`** and other `Processor`-related utilities: how to stay consistent with the library and continue to use some of the benefits of the Processor or continue to use speaker embeddings in an easy way? I fear that it might add unnecessary difficulties for the users to forward `speaker_embeddings` arguments, WDYT?\r\n\r\nAnyways, many thanks again for the review!\r\n\r\n ", "> Hi @Narsil , thanks for your fast review! Basically, I will refactor my code to meet your expectations ! There are still 2 things I'd like to discuss before and that I talked about in the comments:\r\n> \r\n> 1. **`speechT5` specific case:** `speechT5` was introduced 5 months ago, and has two issues - it uses a `.generate_speech` method instead of a `.generate`, and it needs an additional vocoder on top in order to actually produce audio signals instead of a spectrogram. What's the best way to stay consistent with it and with the pipeline logic? Should I still introduce model specific code or should I work on modifying `speechT5` instead? Modying `speechT5` might be problematic since it was the first TTS model introduced so users might be used to its API and because it would leave `BarkModel` has the only TTS model supported in the pipeline for a short time\r\n\r\nI will let maintainers focusing on audio answer to that @sanchit-gandhi I think.\r\nBut what I do know is that not relying on invariants within `transformers` makes pipelines play the never ending game of `catch-up` for every model thrown into the mix. pipelines see `AutoModelFor` which should have consistent API which we can rely on.\r\n\r\nI remember talks about splitting `generate` and `generate_speech` to allow differentation between the 2.\r\n\r\nFor the `vocoder`, I don't know how, but it should be invinsible to users.\r\nIn ASR we've had ngram being added to the configuration for instance, which makes it loadable automatically.\r\n\r\n> \r\n> 2. **`speaker_embeddings`** and other `Processor`-related utilities: how to stay consistent with the library and continue to use some of the benefits of the Processor or continue to use speaker embeddings in an easy way? I fear that it might add unnecessary difficulties for the users to forward `speaker_embeddings` arguments, WDYT?\r\n\r\nAgain, there might already be solutions. 
\r\nBut loading from a random dataset some random data within `preprocess` is not really sustainable.\r\n\r\nMy suggestion to put this in usercode alleviates that contraint.\r\n\r\nBut in general having speaker_embedding for TTS should always be purely optional imo.\r\n\r\n> \r\n> \r\n> Anyways, many thanks again for the review!\r\n\r\n", "Thanks @Narsil , I will wait for @sanchit-gandhi opinion on it then!\r\nWhat about [this comment](https://github.com/huggingface/transformers/pull/24952#discussion_r1270368324) ?", "I haven't generated the tiny models for `bark`. I will do it today πŸ™ (I can't guarantee it will be able to be generated smoothly - usually they should be already on the Hub, and if not, it means the creation process has some issue for this model)", "Hi @ylacombe . There are a few issues that blocks the creation of tiny model (of `bark` for pipeline testing).\r\n\r\nThe first one is `tests/models/bark/test_modeling_bark.py` has no `BarkModelTest` and `BarkModelTester`. Only the component models (fine, coarse, semantic).\r\n\r\nAre those component models also used as standalone models? Or they are really just components for `BarkModel` and we expect the users to use `BarkModel` rather than those components?\r\n\r\nMore importantly, for the pipeline implemented in this PR, which model types do it needs.\r\n\r\nThanks in advance.", "Hi @ydshieh, thanks for your help on the matter!\r\n\r\nThere's only `BarkModelIntegrationTests` for now. The other sub-models are used as components for `BarkModel`. Users are expected to use `BarkModel`, which will ultimately be used in the pipeline.\r\n\r\nLet me know if I can help you with anything!", "> Users are expected to use BarkModel\r\n\r\nIn this case, it means we need to create a tiny model for `BarkModel` or models with head (if any) on top of it. This implies we need a `BarkModelTest` and `BarkModelTester`. For the creation, we don't really need to implement test methods in `BarkModelTest`, but there should be later (I would not say it's urgent though).\r\n\r\nI will open a PR to quickly add something necessary so we can create tiny model for `bark`. I will ping you for a review so you know what's necessary (in the future for new models you will add πŸ€— ).\r\n", "LGTM ! Thanks for the rehaul ", "Hey @ylacombe \r\n\r\nIf you don't mind, let's have the tiny model ready and try with it first before merge.\r\nI am just able to create it.", "Just uploaded\r\n\r\nhttps://huggingface.co/hf-internal-testing/tiny-random-BarkModel/\r\n\r\nBut I haven't tried it with the tests implemented in this PR." ]
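For reference, a hedged sketch of the user-code route discussed in these comments (speaker embeddings loaded outside the pipeline and passed in explicitly). It assumes the finished pipeline exposes a `forward_params` argument and returns `audio`/`sampling_rate` keys, as the released text-to-speech pipeline does; at the time of this discussion the API was not final:

```python
import torch
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("text-to-speech", model="microsoft/speecht5_tts")

# Speaker embeddings are fetched in user code rather than inside preprocess().
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

out = pipe(
    "Hello, this sentence uses a user-supplied speaker embedding.",
    forward_params={"speaker_embeddings": speaker_embedding},
)
print(out["sampling_rate"], out["audio"].shape)
```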
1,689
1,692
1,692
COLLABORATOR
null
# What does this PR do? Until recently, there was only one TTS model in Transformers. Recent ([Bark](https://huggingface.co/docs/transformers/model_doc/bark)) and future ([FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439)) additions have and will further enrich the number of TTS models in Transformers. This may be the best time to add a text-to-speech pipeline to Transformers. This PR tentatively proposes: - The addition of a text-to-speech pipeline whose design could be modified in line with future TTS additions. - Add a class AutoModelForTextToSpeech - Add a `processor` task to the pipeline code to facilitate use of the `processor`. My conception of the architecture for now: - Backward compatibility with [FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439), retaining the ability to use its hacked `generate_speech` method. - Future compatibility with future TTS models, counting on the fact that these models will use a `generate` method to generate audio. - Possible compatibility with other TTA (text-to-audio) models such as [MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen). What I'm counting on: - future models should have a `generate` method, even if they are not AR models per se (for the moment, [FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439) is not AR and has no such method) or counts on an additional head model ([FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439) needs a vocoder on top to pass from a spectrogram to an audio - see [discussion here](https://github.com/huggingface/transformers/pull/23439#discussion_r1258660411)). - future models will use a `Processor` even if they only use a tokenizer, to allow easy use of other conditional inputs such as audio or speaker embeddings. And the processor must be added to `PROCESSOR_MAPPING` (not the case of [MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen) atm). I'm open to further discuss the architecture and to make some changes! EDIT: for reference, I've made another design choice following internal discussions. It is discussed [here](https://github.com/huggingface/transformers/pull/24952/#pullrequestreview-1556507240). Fixes #22487 *Note:* I was inspired by @LysandreJik draft of a [TTS pipeline](https://huggingface.co/lysandre/text-to-speech-pipeline/blob/main/tts.py). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [LINK](https://github.com/huggingface/transformers/issues/22487#issuecomment-1496312713) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Hey @sanchit-gandhi and @Narsil, I think you're the right people to talk to before the core review!
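A short usage sketch of the pipeline this PR proposes. The task name, checkpoint, and output keys are assumptions based on the eventual design and may differ from intermediate revisions of the PR:

```python
from transformers import pipeline

# Bark goes through the model's .generate() method under the hood.
pipe = pipeline("text-to-speech", model="suno/bark-small")
output = pipe("Hello, this is a test of the text-to-speech pipeline.")

# The output is expected to hold the waveform and its sampling rate.
audio, sampling_rate = output["audio"], output["sampling_rate"]
print(audio.shape, sampling_rate)
```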
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24952/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24952/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24952", "html_url": "https://github.com/huggingface/transformers/pull/24952", "diff_url": "https://github.com/huggingface/transformers/pull/24952.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24952.patch", "merged_at": 1692290087000 }
https://api.github.com/repos/huggingface/transformers/issues/24951
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24951/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24951/comments
https://api.github.com/repos/huggingface/transformers/issues/24951/events
https://github.com/huggingface/transformers/issues/24951
1,814,060,845
I_kwDOCUB6oc5sIF8t
24,951
T5 Tokenizer Legacy behaviour warning
{ "login": "pointonjoel", "id": 45101698, "node_id": "MDQ6VXNlcjQ1MTAxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/45101698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pointonjoel", "html_url": "https://github.com/pointonjoel", "followers_url": "https://api.github.com/users/pointonjoel/followers", "following_url": "https://api.github.com/users/pointonjoel/following{/other_user}", "gists_url": "https://api.github.com/users/pointonjoel/gists{/gist_id}", "starred_url": "https://api.github.com/users/pointonjoel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pointonjoel/subscriptions", "organizations_url": "https://api.github.com/users/pointonjoel/orgs", "repos_url": "https://api.github.com/users/pointonjoel/repos", "events_url": "https://api.github.com/users/pointonjoel/events{/privacy}", "received_events_url": "https://api.github.com/users/pointonjoel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Will just answer for potentially confused users: \r\n- the warning is triggered if `legacy=True` which is the default for backward compatibility\r\n- to use the latest behaviour, use `tokeniser = AutoTokenizer.from_pretrained(\"google/mt5-small\", legacy=False)`\r\n", "It might be nice to add the fix to the error message -- it's a bit hard to find :)" ]
1,689
1,690
1,689
NONE
null
### System Info I am running on Google Collab (Python 3.10.6) using tokenizers-0.13.3, transformers-4.31.0, huggingface-hub-0.16.4 ,safetensors-0.3.1 ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm getting a legacy behaviour warning come up when simply loading a T5 tokenizer - it appears even before using the tokenizer. Is there an updated way to load the tokenizer? The warning appears when running the following lines of code: _from transformers import AutoTokenizer tokeniser = AutoTokenizer.from_pretrained("google/mt5-small")_ The warning is: _You are using the legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565 /usr/local/lib/python3.10/dist-packages/transformers/convert_slow_tokenizer.py:470: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text. warnings.warn(_ ### Expected behavior No warning
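The repro from the issue body, reformatted as a fenced block together with the `legacy=False` switch suggested in the linked pull request (the flag comes from that PR, not from this issue body):

```python
from transformers import AutoTokenizer

# Default load: emits the legacy-behaviour warning quoted above.
tokeniser = AutoTokenizer.from_pretrained("google/mt5-small")

# Opting in to the fixed behaviour (per PR #24565) silences the warning.
tokeniser = AutoTokenizer.from_pretrained("google/mt5-small", legacy=False)
```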
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24951/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24950
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24950/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24950/comments
https://api.github.com/repos/huggingface/transformers/issues/24950/events
https://github.com/huggingface/transformers/pull/24950
1,813,817,380
PR_kwDOCUB6oc5V_iKD
24,950
Update processing_vision_text_dual_encoder.py
{ "login": "premsa", "id": 38909445, "node_id": "MDQ6VXNlcjM4OTA5NDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/38909445?v=4", "gravatar_id": "", "url": "https://api.github.com/users/premsa", "html_url": "https://github.com/premsa", "followers_url": "https://api.github.com/users/premsa/followers", "following_url": "https://api.github.com/users/premsa/following{/other_user}", "gists_url": "https://api.github.com/users/premsa/gists{/gist_id}", "starred_url": "https://api.github.com/users/premsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/premsa/subscriptions", "organizations_url": "https://api.github.com/users/premsa/orgs", "repos_url": "https://api.github.com/users/premsa/repos", "events_url": "https://api.github.com/users/premsa/events{/privacy}", "received_events_url": "https://api.github.com/users/premsa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24950). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes: small typo: kwrags -> kwargs ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24950/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24950", "html_url": "https://github.com/huggingface/transformers/pull/24950", "diff_url": "https://github.com/huggingface/transformers/pull/24950.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24950.patch", "merged_at": 1689855939000 }
https://api.github.com/repos/huggingface/transformers/issues/24949
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24949/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24949/comments
https://api.github.com/repos/huggingface/transformers/issues/24949/events
https://github.com/huggingface/transformers/pull/24949
1,813,770,848
PR_kwDOCUB6oc5V_XpY
24,949
Bump pygments from 2.11.2 to 2.15.0 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24949). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
Bumps [pygments](https://github.com/pygments/pygments) from 2.11.2 to 2.15.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pygments/pygments/releases">pygments's releases</a>.</em></p> <blockquote> <h2>2.15.0</h2> <ul> <li> <p>Added lexers:</p> <ul> <li>Carbon (<a href="https://redirect.github.com/pygments/pygments/issues/2362">#2362</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2365">#2365</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2366">#2366</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2367">#2367</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2368">#2368</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2369">#2369</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2370">#2370</a>)</li> <li>Dax (<a href="https://redirect.github.com/pygments/pygments/issues/2335">#2335</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2345">#2345</a>)</li> <li>MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>, <a href="https://redirect.github.com/pygments/pygments/issues/827">#827</a>)</li> <li>PostgreSQL Explain (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li> <li>WGSL (WebGPU Shading Language) (<a href="https://redirect.github.com/pygments/pygments/issues/2386">#2386</a>)</li> <li>X++ (<a href="https://redirect.github.com/pygments/pygments/issues/2339">#2339</a>)</li> </ul> </li> <li> <p>Updated lexers:</p> <ul> <li> <p>AMDGPU: Add support for <code>scratch_</code> instructions, the <code>attr*.*</code> argument, as well as the <code>off</code> modifier (<a href="https://redirect.github.com/pygments/pygments/issues/2327">#2327</a>).</p> </li> <li> <p>APDL: Miscellaneous improvements (<a href="https://redirect.github.com/pygments/pygments/issues/2314">#2314</a>)</p> </li> <li> <p>bash/tcsh:</p> <ul> <li>Move <code>break</code> to keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2377">#2377</a>)</li> <li>Improve bash math expansion lexing (<a href="https://redirect.github.com/pygments/pygments/issues/2255">#2255</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2353">#2353</a>)</li> </ul> </li> <li> <p>Chapel: Support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2376">#2376</a>)</p> </li> <li> <p>CMake: Implement bracket style comments (<a href="https://redirect.github.com/pygments/pygments/issues/2338">#2338</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2354">#2354</a>)</p> </li> <li> <p>CSS: Improve lexing of numbers inside function calls (<a href="https://redirect.github.com/pygments/pygments/issues/2382">#2382</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2383">#2383</a>)</p> </li> <li> <p>diff: Support normal diff syntax, as opposed to unified diff syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2321">#2321</a>)</p> </li> <li> <p>GLSL, HLSL:</p> <ul> <li>Support line continuations in preprocessor code (<a href="https://redirect.github.com/pygments/pygments/issues/2350">#2350</a>)</li> <li>Improve preprocessor directive handling (<a href="https://redirect.github.com/pygments/pygments/issues/2357">#2357</a>)</li> </ul> </li> <li> <p>LilyPond: minor update of builtins</p> </li> <li> <p>PHP: support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2055">#2055</a>, <a 
href="https://redirect.github.com/pygments/pygments/issues/2347">#2347</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2360">#2360</a>), fix anonymous classes without parameters (<a href="https://redirect.github.com/pygments/pygments/issues/2359">#2359</a>), improve lexing of variable variable syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2358">#2358</a>)</p> </li> <li> <p>Python:</p> <ul> <li>Add missing builtins (<a href="https://redirect.github.com/pygments/pygments/issues/2334">#2334</a>)</li> <li>Fix inconsistent lexing of <code>None</code> (<a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a>)</li> </ul> </li> <li> <p>Rebol/Red: Don't require script headers (<a href="https://redirect.github.com/pygments/pygments/issues/2348">#2348</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2349">#2349</a>)</p> </li> <li> <p>Spice: Update keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2336">#2336</a>)</p> </li> <li> <p>SQL+Jinja (<code>analyse_text</code> method): Fix catastrophic backtracking (<a href="https://redirect.github.com/pygments/pygments/issues/2355">#2355</a>)</p> </li> <li> <p>Terraform: Add <code>hcl</code> alias (<a href="https://redirect.github.com/pygments/pygments/issues/2375">#2375</a>)</p> </li> </ul> </li> <li> <p>Declare support for Python 3.11 and drop support for Python 3.6 (<a href="https://redirect.github.com/pygments/pygments/issues/2324">#2324</a>).</p> </li> <li> <p>Update <code>native</code> style to improve contrast (<a href="https://redirect.github.com/pygments/pygments/issues/2325">#2325</a>).</p> </li> <li> <p>Update `github-dark`` style to match latest Primer style (<a href="https://redirect.github.com/pygments/pygments/issues/2401">#2401</a>)</p> </li> <li> <p>Revert a change that made guessing lexers based on file names slower on Python 3.10 and older (<a href="https://redirect.github.com/pygments/pygments/issues/2328">#2328</a>).</p> </li> <li> <p>Fix some places where a locale-dependent encoding could unintentionally be used instead of UTF-8 (<a href="https://redirect.github.com/pygments/pygments/issues/2326">#2326</a>).</p> </li> <li> <p>Fix Python traceback handling (<a href="https://redirect.github.com/pygments/pygments/issues/2226">#2226</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2329">#2329</a>).</p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pygments/pygments/blob/master/CHANGES">pygments's changelog</a>.</em></p> <blockquote> <h2>Version 2.15.0</h2> <p>(released April 10th, 2023)</p> <ul> <li> <p>Added lexers:</p> <ul> <li>Carbon (<a href="https://redirect.github.com/pygments/pygments/issues/2362">#2362</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2365">#2365</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2366">#2366</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2367">#2367</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2368">#2368</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2369">#2369</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2370">#2370</a>)</li> <li>Dax (<a href="https://redirect.github.com/pygments/pygments/issues/2335">#2335</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2345">#2345</a>)</li> <li>MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>, <a href="https://redirect.github.com/pygments/pygments/issues/827">#827</a>)</li> <li>PostgreSQL Explain (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li> <li>WGSL (WebGPU Shading Language) (<a href="https://redirect.github.com/pygments/pygments/issues/2386">#2386</a>)</li> <li>X++ (<a href="https://redirect.github.com/pygments/pygments/issues/2339">#2339</a>)</li> </ul> </li> <li> <p>Updated lexers:</p> <ul> <li> <p>AMDGPU: Add support for <code>scratch_</code> instructions, the <code>attr*.*</code> argument, as well as the <code>off</code> modifier (<a href="https://redirect.github.com/pygments/pygments/issues/2327">#2327</a>).</p> </li> <li> <p>APDL: Miscellaneous improvements (<a href="https://redirect.github.com/pygments/pygments/issues/2314">#2314</a>)</p> </li> <li> <p>bash/tcsh:</p> <ul> <li>Move <code>break</code> to keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2377">#2377</a>)</li> <li>Improve bash math expansion lexing (<a href="https://redirect.github.com/pygments/pygments/issues/2255">#2255</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2353">#2353</a>)</li> </ul> </li> <li> <p>Chapel: Support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2376">#2376</a>)</p> </li> <li> <p>CMake: Implement bracket style comments (<a href="https://redirect.github.com/pygments/pygments/issues/2338">#2338</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2354">#2354</a>)</p> </li> <li> <p>CSS: Improve lexing of numbers inside function calls (<a href="https://redirect.github.com/pygments/pygments/issues/2382">#2382</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2383">#2383</a>)</p> </li> <li> <p>diff: Support normal diff syntax, as opposed to unified diff syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2321">#2321</a>)</p> </li> <li> <p>GLSL, HLSL:</p> <ul> <li>Support line continuations in preprocessor code (<a href="https://redirect.github.com/pygments/pygments/issues/2350">#2350</a>)</li> <li>Improve preprocessor directive handling (<a href="https://redirect.github.com/pygments/pygments/issues/2357">#2357</a>)</li> </ul> </li> <li> <p>LilyPond: minor update of builtins</p> </li> <li> <p>PHP: support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2055">#2055</a>, <a 
href="https://redirect.github.com/pygments/pygments/issues/2347">#2347</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2360">#2360</a>), fix anonymous classes without parameters (<a href="https://redirect.github.com/pygments/pygments/issues/2359">#2359</a>), improve lexing of variable variable syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2358">#2358</a>)</p> </li> <li> <p>Python:</p> <ul> <li>Add missing builtins (<a href="https://redirect.github.com/pygments/pygments/issues/2334">#2334</a>)</li> <li>Fix inconsistent lexing of <code>None</code> (<a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a>)</li> </ul> </li> <li> <p>Rebol/Red: Don't require script headers (<a href="https://redirect.github.com/pygments/pygments/issues/2348">#2348</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2349">#2349</a>)</p> </li> <li> <p>Spice: Update keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2336">#2336</a>)</p> </li> <li> <p>SQL+Jinja (<code>analyse_text</code> method): Fix catastrophic backtracking (<a href="https://redirect.github.com/pygments/pygments/issues/2355">#2355</a>)</p> </li> <li> <p>Terraform: Add <code>hcl</code> alias (<a href="https://redirect.github.com/pygments/pygments/issues/2375">#2375</a>)</p> </li> </ul> </li> <li> <p>Declare support for Python 3.11 and drop support for Python 3.6 (<a href="https://redirect.github.com/pygments/pygments/issues/2324">#2324</a>).</p> </li> <li> <p>Update <code>native</code> style to improve contrast (<a href="https://redirect.github.com/pygments/pygments/issues/2325">#2325</a>).</p> </li> <li> <p>Update `github-dark`` style to match latest Primer style (<a href="https://redirect.github.com/pygments/pygments/issues/2401">#2401</a>)</p> </li> <li> <p>Revert a change that made guessing lexers based on file names slower on Python 3.10 and older (<a href="https://redirect.github.com/pygments/pygments/issues/2328">#2328</a>).</p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pygments/pygments/commit/6c187ad83267be9ce142af3fd5c9e670339dc7aa"><code>6c187ad</code></a> Prepare 2.15 release.</li> <li><a href="https://github.com/pygments/pygments/commit/00b9cb022cc9c05784c43c11bd7f73e64008b347"><code>00b9cb0</code></a> Prepare for release.</li> <li><a href="https://github.com/pygments/pygments/commit/a0824a45f0bd6c45528fa16132f09dd3570a8234"><code>a0824a4</code></a> Update CHANGES</li> <li><a href="https://github.com/pygments/pygments/commit/26f9f6c852846fe579c37fe936a872b68fa686ba"><code>26f9f6c</code></a> Merge pull request <a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a> from rdbende/fix-fromimport-none</li> <li><a href="https://github.com/pygments/pygments/commit/62b1bbbe6e329268eaa4c68f0e3eb8867c450acc"><code>62b1bbb</code></a> Change token of None after from keyword</li> <li><a href="https://github.com/pygments/pygments/commit/acee60e4e8dde9ea99fc494740e20b06188791ac"><code>acee60e</code></a> Update CHANGES</li> <li><a href="https://github.com/pygments/pygments/commit/eaca69091119e0ac5c97e626ba9e3b21b688c5ed"><code>eaca690</code></a> Add lexer for MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>)</li> <li><a href="https://github.com/pygments/pygments/commit/0e9c87bcf096908956e031f15a4e589e83be1691"><code>0e9c87b</code></a> Update CHANGES</li> <li><a href="https://github.com/pygments/pygments/commit/ef0abbaece522732031d61391567c017d48d87b7"><code>ef0abba</code></a> Add PostgreSQL Explain lexer (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li> <li><a href="https://github.com/pygments/pygments/commit/3c6e2af8fbc44bb1ef77389d09118c37faea8746"><code>3c6e2af</code></a> Update CHANGES</li> <li>Additional commits viewable in <a href="https://github.com/pygments/pygments/compare/2.11.2...2.15.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pygments&package-manager=pip&previous-version=2.11.2&new-version=2.15.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24949/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24949", "html_url": "https://github.com/huggingface/transformers/pull/24949", "diff_url": "https://github.com/huggingface/transformers/pull/24949.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24949.patch", "merged_at": 1689853429000 }
https://api.github.com/repos/huggingface/transformers/issues/24948
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24948/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24948/comments
https://api.github.com/repos/huggingface/transformers/issues/24948/events
https://github.com/huggingface/transformers/issues/24948
1,813,768,095
I_kwDOCUB6oc5sG-ef
24,948
Converting Llama2 to HF weight on Windows 10 PC failed
{ "login": "CinderZhang", "id": 72751738, "node_id": "MDQ6VXNlcjcyNzUxNzM4", "avatar_url": "https://avatars.githubusercontent.com/u/72751738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CinderZhang", "html_url": "https://github.com/CinderZhang", "followers_url": "https://api.github.com/users/CinderZhang/followers", "following_url": "https://api.github.com/users/CinderZhang/following{/other_user}", "gists_url": "https://api.github.com/users/CinderZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/CinderZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CinderZhang/subscriptions", "organizations_url": "https://api.github.com/users/CinderZhang/orgs", "repos_url": "https://api.github.com/users/CinderZhang/repos", "events_url": "https://api.github.com/users/CinderZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/CinderZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like there is a problem with your installation of `bitsandbytes`. You should fix it or uninstall it.", "Did both. Neither ways worked. I left a ticket on bitsandbytes.", "It works for me to convert 70b-chat. I don't even install the bitsandbytes library.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Did both. Neither ways worked. I left a ticket on bitsandbytes.\r\n\r\nConfirmed that bitsandbytes should only work on Linux, NOT Windows.", "> \r\n\r\nHave you get the 70b-chat running? I still cannot make it run on 4X40G vRams. Thanks.", "> > \r\n> \r\n> Have you get the 70b-chat running? I still cannot make it run on 4X40G vRams. Thanks.\r\n\r\nI use 8 a100 to load it. 160G seems not enough.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
NONE
null
### System Info Setup: RTX A2000 12G; CUDA 12.2 Command: python r:/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir Llama-2-7b --model_size 7B --output_dir hf_wght_7b ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Setup: RTX A2000 12G; CUDA 12.2 Command: python r:/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir Llama-2-7b --model_size 7B --output_dir hf_wght_7b Error: ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ bin C:\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so False CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths... C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')} warn(msg) CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine! C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)! warn(msg) C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library... warn(msg) CUDA SETUP: Loading binary C:\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so... argument of type 'WindowsPath' is not iterable CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected. CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig. CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following: CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc Traceback (most recent call last): File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1099, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1206, in _gcd_import File "<frozen importlib._bootstrap>", line 1178, in _find_and_load File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "C:\Python311\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 32, in <module> from ...modeling_utils import PreTrainedModel File "C:\Python311\Lib\site-packages\transformers\modeling_utils.py", line 86, in <module> from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights File "C:\Python311\Lib\site-packages\accelerate\__init__.py", line 3, in <module> from .accelerator import Accelerator File "C:\Python311\Lib\site-packages\accelerate\accelerator.py", line 35, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "C:\Python311\Lib\site-packages\accelerate\checkpointing.py", line 24, in <module> from .utils import ( File "C:\Python311\Lib\site-packages\accelerate\utils\__init__.py", line 131, in <module> from .bnb import has_4bit_bnb_layers, load_and_quantize_model File "C:\Python311\Lib\site-packages\accelerate\utils\bnb.py", line 42, in <module> import bitsandbytes as bnb File "C:\Python311\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module> from . import cuda_setup, utils, research File "C:\Python311\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module> from . import nn File "C:\Python311\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module> from .modules import LinearFP8Mixed, LinearFP8Global File "C:\Python311\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module> from bitsandbytes.optim import GlobalOptimManager File "C:\Python311\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module> from bitsandbytes.cextension import COMPILED_WITH_CUDA File "C:\Python311\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module> raise RuntimeError(''' RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. 
If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues The above exception was the direct cause of the following exception: Traceback (most recent call last): File "r:\transformers\src\transformers\models\llama\convert_llama_weights_to_hf.py", line 23, in <module> from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer File "<frozen importlib._bootstrap>", line 1231, in _handle_fromlist File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1090, in __getattr__ value = getattr(module, name) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1089, in __getattr__ module = self._get_module(self._class_to_module[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1101, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues ### Expected behavior Convert Llama2 to HF weights
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24948/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24947
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24947/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24947/comments
https://api.github.com/repos/huggingface/transformers/issues/24947/events
https://github.com/huggingface/transformers/pull/24947
1,813,759,444
PR_kwDOCUB6oc5V_VGV
24,947
fix: cast input pixels to appropriate dtype for image_to_text pipelines
{ "login": "JimAllanson", "id": 1419473, "node_id": "MDQ6VXNlcjE0MTk0NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1419473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimAllanson", "html_url": "https://github.com/JimAllanson", "followers_url": "https://api.github.com/users/JimAllanson/followers", "following_url": "https://api.github.com/users/JimAllanson/following{/other_user}", "gists_url": "https://api.github.com/users/JimAllanson/gists{/gist_id}", "starred_url": "https://api.github.com/users/JimAllanson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimAllanson/subscriptions", "organizations_url": "https://api.github.com/users/JimAllanson/orgs", "repos_url": "https://api.github.com/users/JimAllanson/repos", "events_url": "https://api.github.com/users/JimAllanson/events{/privacy}", "received_events_url": "https://api.github.com/users/JimAllanson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, I've run the styling and copy checks locally, which identified some copy checks that were failing. If I understand correctly, these checks are ensuring that the implementations of derived model classes match their sources? On that basis, I've gone ahead and added the same type casting to the classes identified, and pushed a commit with those in. It looks like all of the CI checks are passing now.\r\n\r\n> Usually we advise users to cast the input to the desired dtype manually by calling .to() to the processor's output\r\n\r\nIn my case, I'm currently using transformers via the AWS Deep Learning Containers, using Sagemaker. I've made a small tweak to the entrypoint to allow passing the `torch_dtype` into the pipeline kwargs, but otherwise I was trying to keep my modified container as generic as possible to aid maintainability." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? ### _Automatically converts input pixels to the correct type when running image_to_text pipelines_ --- Currently, when using image_to_text pipelines with half precision, I encounter an error in the forward function when passing in the pixel data. I found that casting to the target_dtype within the forward function fixes this issue in my case. This is my first time working with the transformers library, so I'm not sure if this is the "correct" place to fix this kind of issue, or if perhaps there's another step in the pipeline code that I've missed that should be responsible for casting the input data to the correct type. I assume there are other similar models that may benefit from the same type of fix. However, I've constrained my fix to the models that I've been working with already, as I didn't want to contribute unvalidated code. I'm happy to make similar changes to further models if required, or if someone with more experience with this library wants to rework these changes and approach the fix differently, I'd be happy with that, and can test with the subset of models I've been using if required. - _Possibly_ Fixes #24834 _(This issue looked similar to the issue I was encountering, however I had different additional issues using 8bit optimizations, so I've only tested my fix under float16. But I think given that issue seems to relate to the same root cause of missing casting for input data within pipelines, I think there's a good chance it may be fixed by this issue.)_ PR Checklist Notes - I haven't been able to run tests as I ran into dependency issues on my local environment, and the CI workflows appear to use self hosted runners, which I assume won't be easy for me to set up. Given the small scope of my changes, perhaps someone can approve running my PR against CircleCI if necessary? - I haven't added new tests for this change, I had a read through of the existing tests and I wasn't sure if this sort of low level change would usually warrant explicit test coverage. Since I don't currently have a good way of running the tests myself, I figured it was best to omit new tests for now. Tagging @younesbelkada and @NielsRogge as the last authors of the lines I edited.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24947/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24947", "html_url": "https://github.com/huggingface/transformers/pull/24947", "diff_url": "https://github.com/huggingface/transformers/pull/24947.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24947.patch", "merged_at": 1689941817000 }
https://api.github.com/repos/huggingface/transformers/issues/24946
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24946/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24946/comments
https://api.github.com/repos/huggingface/transformers/issues/24946/events
https://github.com/huggingface/transformers/issues/24946
1,813,612,492
I_kwDOCUB6oc5sGYfM
24,946
convertion to hf format of llama2 70b get kill
{ "login": "zixiliuUSC", "id": 49173327, "node_id": "MDQ6VXNlcjQ5MTczMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zixiliuUSC", "html_url": "https://github.com/zixiliuUSC", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That is because you do not have enough CPU RAM to do the conversion. It needs 140GB of memory. If you have access to the weights, you can get them on the Hugging Face Hub (as long as you email address for HF matches the one with which you got the weights).", "@sgugger The files on https://huggingface.co/meta-llama/Llama-2-70b are not converted. Could you point to the right one on the Hub?", "The repo suffixed with hf: https://huggingface.co/meta-llama/Llama-2-70b-hf", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info transformers 4.31.0 RAM: 20G, 8Core A100-80GB ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run the checkpoint conversion python script provided by transformers and the program will get kill. I run it successfully to 7B and 13B model. ### Expected behavior Convert checkpoint properly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24946/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24945
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24945/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24945/comments
https://api.github.com/repos/huggingface/transformers/issues/24945/events
https://github.com/huggingface/transformers/issues/24945
1,813,573,783
I_kwDOCUB6oc5sGPCX
24,945
BatchSampler
{ "login": "Neptune-Trojans", "id": 68503564, "node_id": "MDQ6VXNlcjY4NTAzNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/68503564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Neptune-Trojans", "html_url": "https://github.com/Neptune-Trojans", "followers_url": "https://api.github.com/users/Neptune-Trojans/followers", "following_url": "https://api.github.com/users/Neptune-Trojans/following{/other_user}", "gists_url": "https://api.github.com/users/Neptune-Trojans/gists{/gist_id}", "starred_url": "https://api.github.com/users/Neptune-Trojans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Neptune-Trojans/subscriptions", "organizations_url": "https://api.github.com/users/Neptune-Trojans/orgs", "repos_url": "https://api.github.com/users/Neptune-Trojans/repos", "events_url": "https://api.github.com/users/Neptune-Trojans/events{/privacy}", "received_events_url": "https://api.github.com/users/Neptune-Trojans/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only." ]
1,689
1,689
1,689
NONE
null
### Feature request I am trying to use Transformers Trainer and I want to generate batches and not single items. ### Motivation I need that in order to generate batches of different sizes without doing padding. Is that possible to do with the implemented trainer ? ### Your contribution ..
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24945/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24944
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24944/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24944/comments
https://api.github.com/repos/huggingface/transformers/issues/24944/events
https://github.com/huggingface/transformers/pull/24944
1,813,231,178
PR_kwDOCUB6oc5V9gep
24,944
replace no_cuda with use_cpu in test_pytorch_examples
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,694
1,689
CONTRIBUTOR
null
### What does this PR do? This PR replace `no_cuda` with `use_cpu` in `test_pytorch_examples.py`, as `no_cuda` training argument is deprecated([see](https://github.com/huggingface/transformers/pull/24863)) . By the way delete a piece of code that will never be used.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24944/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24944", "html_url": "https://github.com/huggingface/transformers/pull/24944", "diff_url": "https://github.com/huggingface/transformers/pull/24944.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24944.patch", "merged_at": 1689851345000 }
https://api.github.com/repos/huggingface/transformers/issues/24943
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24943/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24943/comments
https://api.github.com/repos/huggingface/transformers/issues/24943/events
https://github.com/huggingface/transformers/pull/24943
1,813,189,872
PR_kwDOCUB6oc5V9Xh2
24,943
🌐 [i18n-KO] Translated `perf_infer_gpu_many.md` to Korean
{ "login": "heuristicwave", "id": 31366038, "node_id": "MDQ6VXNlcjMxMzY2MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/31366038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/heuristicwave", "html_url": "https://github.com/heuristicwave", "followers_url": "https://api.github.com/users/heuristicwave/followers", "following_url": "https://api.github.com/users/heuristicwave/following{/other_user}", "gists_url": "https://api.github.com/users/heuristicwave/gists{/gist_id}", "starred_url": "https://api.github.com/users/heuristicwave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/heuristicwave/subscriptions", "organizations_url": "https://api.github.com/users/heuristicwave/orgs", "repos_url": "https://api.github.com/users/heuristicwave/repos", "events_url": "https://api.github.com/users/heuristicwave/events{/privacy}", "received_events_url": "https://api.github.com/users/heuristicwave/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Translated the `perf_infer_gpu_many.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [X] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [X] Grammar Check (λ§žμΆ€λ²• 검사) - [X] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [X] Check Inline TOC (e.g. `[[lowercased-header]]`) - [X] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @mjk0618, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24943/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24943", "html_url": "https://github.com/huggingface/transformers/pull/24943", "diff_url": "https://github.com/huggingface/transformers/pull/24943.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24943.patch", "merged_at": 1690985197000 }
https://api.github.com/repos/huggingface/transformers/issues/24942
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24942/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24942/comments
https://api.github.com/repos/huggingface/transformers/issues/24942/events
https://github.com/huggingface/transformers/pull/24942
1,813,183,021
PR_kwDOCUB6oc5V9WCV
24,942
Fallback for missing attribute `Parameter.ds_numel`
{ "login": "apoorvkh", "id": 7005565, "node_id": "MDQ6VXNlcjcwMDU1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7005565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apoorvkh", "html_url": "https://github.com/apoorvkh", "followers_url": "https://api.github.com/users/apoorvkh/followers", "following_url": "https://api.github.com/users/apoorvkh/following{/other_user}", "gists_url": "https://api.github.com/users/apoorvkh/gists{/gist_id}", "starred_url": "https://api.github.com/users/apoorvkh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apoorvkh/subscriptions", "organizations_url": "https://api.github.com/users/apoorvkh/orgs", "repos_url": "https://api.github.com/users/apoorvkh/repos", "events_url": "https://api.github.com/users/apoorvkh/events{/privacy}", "received_events_url": "https://api.github.com/users/apoorvkh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure -- imo my rewrite was more readable but I guess that's subjective πŸ‘ \r\n\r\nAnyway, I've reverted the other changes and added the fix. Lmk, thanks!", "> Sure -- imo my rewrite was more readable but I guess that's subjective +1\r\n> \r\n> Anyway, I've reverted the other changes and added the fix. Lmk, thanks!\r\n\r\nIf I may share why I wrote it this way:\r\n\r\nFrom the perspective of a user who doesn't know anything about deepspeed the current version is easier to understand since they don't need to go into that branch. \r\n\r\nFrom the perspective of a deepspeed user your original version is easier to read.\r\n\r\nAs there are many more non-deepspeed users the former version is preferred.\r\n\r\nWe could actually rewrite all of this code into a single line of \r\n\r\n```\r\np.ds_numel if hasattr(p, \"ds_numel\") else p.numel()\r\n```\r\nand not even need to check if it's running under deepspeed zero-3, but then the reader will wonder what in the world `ds_numel` is ;)\r\n\r\nThank you again for your contribution, @apoorvkh \r\n", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
#22193 added a parameter count for Deepspeed sharded models (i.e. using the `Parameter` attribute `ds_numel`). However, `Parameter` Tensors don't always have the `ds_numel` attribute (even when the model is sharded with Zero stage 3). We can see how this is [alternatively handled in Deepspeed](https://github.com/microsoft/DeepSpeed/blob/ceccfa3ef68182384c6db1349fab43b9af3ed7f3/deepspeed/runtime/engine.py#L3220), by falling back to `Parameter.numel()` if `ds_numel` is not an attribute. I've added this fix to the function in question. Fixes #24792 @stas00 @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24942/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24942", "html_url": "https://github.com/huggingface/transformers/pull/24942", "diff_url": "https://github.com/huggingface/transformers/pull/24942.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24942.patch", "merged_at": 1689880775000 }
https://api.github.com/repos/huggingface/transformers/issues/24941
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24941/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24941/comments
https://api.github.com/repos/huggingface/transformers/issues/24941/events
https://github.com/huggingface/transformers/pull/24941
1,812,945,088
PR_kwDOCUB6oc5V8i9d
24,941
Prevent Dynamo graph fragmentation in GPTNeoX with torch.baddbmm fix
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm struggling to see what's wrong with my code style since when I run `black` locally it says it's fine.", "> Hey! Thanks for submitting a PR! You should be using `make style` to properly format the code.\r\n\r\nThis doesn't seem to work for me. `make style` makes a ton of changes to _other_ files that I didn't modify, but doesn't change my code at all.", "> My only concern here is that the dtype is not set. A small test like this shows that this will have some effect for some head size\r\n\r\nI suppose I can manually cast `norm_factor` to the appropriate dtype and then convert back to a Python scalar. On the other hand, inverse square root isn't going to have _any_ numerical error as long as the head size is a power of 2, which it almost always is (in fact, it's almost always 64). So in practice this shouldn't matter at all.\r\n\r\nAlso, presumably the ground truth should be the Eleuther `gpt-neox` implementation. That implementation seems to already encode `norm_factor` as a Python scalar and uses `math.sqrt`, see [here](https://github.com/EleutherAI/gpt-neox/blob/408e29d9c746a02d842917bb7447c5c4be0b42d4/megatron/model/transformer.py#L298). It passes a Python scalar to `torch.baddbmm` [here](https://github.com/EleutherAI/gpt-neox/blob/408e29d9c746a02d842917bb7447c5c4be0b42d4/megatron/model/transformer.py#L420).\r\n\r\nSo in fact, our current implementation is already \"wrong\" with respect to the original, and this PR would correct the discrepancy. Although, as I said, in almost all cases this will make no difference.", "Ok! Got what you mean. Part of the code was added in #22888 in order to have proper float16 casting. Yes we should match the original results (which is currently the case) but we have also other functionalities on which a lot of users rely! \r\nLet's make sure that the value is casted to the correct type for the next computations. \r\n\r\n(make style is probably behaving wrongly because of the black / ruff versioning) ", "> Ok! Got what you mean. Part of the code was added in #22888 in order to have proper float16 casting. Yes we should match the original results (which is currently the case) but we have also other functionalities on which a lot of users rely! Let's make sure that the value is casted to the correct type for the next computations.\r\n> \r\n> (make style is probably behaving wrongly because of the black / ruff versioning)\r\n\r\nHmm okay, it looks like prior to #22888 we were just using `float32` all the time which is clearly wrong. This PR shouldn't cause any regressions because the Python scalar will work with any parameter dtype.\r\n\r\nIs there anything I can do to get the codestyle check to pass? That seems to be the only thing preventing merging right now.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24941). All of your documentation changes will be reflected on that endpoint.", "Sorry but no, in transformers we autocast the weights to the required dtype. You can try the following:\r\n```python \r\n>>> from transformers import GPTNeoXModel\r\n>>> import torch \r\n\r\n>>> model = GPTNeoXModel.from_pretrained(\"EleutherAI/gpt-neox-20b\", torch_dtype = torch.float16)\r\n>>> print(model.layers[1].attention.norm_factor)\r\ntensor(9.7969, dtype=torch.float16)\r\n```\r\nSo the dtype was properly handle. We initialized it with float32 but then it can be casted to any dtype. \r\n\r\nFor the red ci run `make style`! ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Sorry but no, in transformers we autocast the weights to the required dtype.\r\n\r\nBut this isn't a _weight_, though. It's a constant value deterministically computed from the config.\r\n\r\n> For the red ci run `make style`!\r\n\r\nI'm sorry, but this simply does not work! It changes over 200 files that I haven't changed, but doesn't do anything to the files I did change. Please help me here, can you just clone my fork and somehow fix it or figure out what's wrong?", "Just pushed the changes! \r\nYou are right it is not a weight, but it was πŸ˜“ \r\nWhich mean that this fix would be breaking previous outputs. Since the model is pretty old, would like to just cast the norm factor to the correct dtype (since I am not sure we can be 100% sure all previous outputs will be the sameWDYT?) ", "Thanks a lot for that!\r\n\r\nI'm not sure what you mean by \"cast the norm factor to the correct dtype.\" At the very least, we must pass a Python scalar to `torch.baddbmm` in order to fix the Dynamo graph fragmentation issue. Python scalars will work with any model dtype.\r\n\r\nIf you're saying we cast it to the model's dtype and then call `.item()` or somethingβ€” I _suppose_ we could do that, but I would rather not. Right now there is _technically_ a discrepancy between our implementation and the original NeoX implementation. If we merge this PR as-is, the discrepancy would be fixed.\r\n\r\nThat said, this discrepancy will virtually never matter since `head_size` is always a power of 2 in all the pretrained checkpoints. They will behave identically.", "Thanks @norabelrose for bearing with me and merging this! " ]
1,689
1,692
1,692
CONTRIBUTOR
null
# What does this PR do? Fixes #24940. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24941/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24941/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24941", "html_url": "https://github.com/huggingface/transformers/pull/24941", "diff_url": "https://github.com/huggingface/transformers/pull/24941.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24941.patch", "merged_at": 1692792466000 }
https://api.github.com/repos/huggingface/transformers/issues/24940
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24940/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24940/comments
https://api.github.com/repos/huggingface/transformers/issues/24940/events
https://github.com/huggingface/transformers/issues/24940
1,812,942,941
I_kwDOCUB6oc5sD1Bd
24,940
TorchDynamo graph needlessly fragmented for GPTNeoX due to baddbmm type mistake
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@fxmarty I think you are more familiar with this topic? If so, could you take a look, thanks!", "Hi @norabelrose, would you like to submit a PR?", "> Hi @norabelrose, would you like to submit a PR?\n\nI already did! 😊 See #24941.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
CONTRIBUTOR
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.27 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```py from transformers import AutoModelForCausalLM import torch def debug_backend(gm: torch.fx.GraphModule, example_inputs: list[torch.Tensor]): print("debug_backend() called with FX graph:") gm.graph.print_tabular() return gm.forward # return a python callable model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m") jitted = torch.compile(model, backend=debug_backend) jitted(**model.dummy_inputs) ``` The output is too long to fit in a comment, so you'll have to run the code yourself. It features `"debug_backend() called with FX graph:"` being printed several times, each time followed with a fragment of the whole computation graph. This is not expected since NeoX has no data-dependent control flow. ### Expected behavior The `torch.compile` backend should only be called once, and therefore `"debug_backend() called with FX graph:"` should only appear once, because GPT NeoX does not actually require any data-dependent control flow. I've already checked that this can be fixed by turning `GPTNeoXAttention.norm_factor` into a Python scalar instead of a tensor. This is actually what `torch.baddbmm` expects for its `alpha` parameter; it's [supposed to be](https://pytorch.org/docs/stable/generated/torch.baddbmm.html) a scalar. But it seems to silently convert tensors into scalars, so this doesn't cause a crash in normal use. <img width="577" alt="Captura de pantalla 2023-07-19 a la(s) 5 27 42 p m" src="https://github.com/huggingface/transformers/assets/39116809/24274bdb-2599-4ab6-896b-dd77ff98461e"> The exact fix is, in `modeling_gpt_neox.py`, replace lines 103-107 with: ```py self.norm_factor = self.head_size ** -0.5 ``` and replace the `baddbmm` call inside `_attn` with: ```py attn_scores = torch.baddbmm( attn_scores, query, key.transpose(1, 2), beta=1.0, alpha=self.norm_factor, ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24940/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24939
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24939/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24939/comments
https://api.github.com/repos/huggingface/transformers/issues/24939/events
https://github.com/huggingface/transformers/issues/24939
1,812,827,119
I_kwDOCUB6oc5sDYvv
24,939
#24028 seems to break the last epoch for a logging integration
{ "login": "franz101", "id": 18228395, "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franz101", "html_url": "https://github.com/franz101", "followers_url": "https://api.github.com/users/franz101/followers", "following_url": "https://api.github.com/users/franz101/following{/other_user}", "gists_url": "https://api.github.com/users/franz101/gists{/gist_id}", "starred_url": "https://api.github.com/users/franz101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franz101/subscriptions", "organizations_url": "https://api.github.com/users/franz101/orgs", "repos_url": "https://api.github.com/users/franz101/repos", "events_url": "https://api.github.com/users/franz101/events{/privacy}", "received_events_url": "https://api.github.com/users/franz101/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Reproducible in [this colab example](https://colab.research.google.com/drive/1NObtjxY37VEx2u3SBFDDa_zwTILbX3B2?usp=sharing) currently", "Found the issue, it seems like the on step end is called only after two data points have been collated." ]
1,689
1,692
1,692
NONE
null
### System Info Hey @muellerzr, thanks for your lightning fast (accelerated) reply ;) regarding #24028, I'm currently debugging what's causing the issue Setup: - A custom callback to log embeddings, the data collator in the Trainer is wrapped to extract ids of each sample in a batch Error: - The wrapped data collation works fine except in the last step How to reproduce? See reproduction tab Currently this is the example I can show for reproduction. My first guess, it's related to multiprocessing. It seems like the custom collator is not called in the last step. But will give more details or a possible solution soon. ### Who can help? @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` git clone https://github.com/rungalileo/dataquality.git cd dataquality python -m venv .venv source .venv/bin/activate pip install invoke inv all pip install --upgrade transformers pytest tests/integrations/hf/test_text_classification_hf.py -s -k test_remove_unused_columns ``` ### Expected behavior Test should finish collating each step
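A rough sketch (not taken from the linked repository) of what "wrapping the data collator to extract ids of each sample in a batch" can look like; the `id` column name and the `logged_ids` bookkeeping are assumptions for illustration only.

```python
# Hypothetical wrapper: records which sample ids were collated into each batch,
# then strips the extra column before delegating to the real collator.
class IdLoggingCollator:
    def __init__(self, base_collator, id_field="id"):
        self.base_collator = base_collator   # e.g. DataCollatorWithPadding(tokenizer)
        self.id_field = id_field             # assumed name of the id column
        self.logged_ids = []                 # one entry appended per collated batch

    def __call__(self, features):
        # record ids before removing them, so the model never sees the extra column
        self.logged_ids.append([f.get(self.id_field) for f in features])
        cleaned = [{k: v for k, v in f.items() if k != self.id_field} for f in features]
        return self.base_collator(cleaned)
```

With a wrapper like this, comparing `len(collator.logged_ids)` against the number of training steps makes it easy to spot whether the final batch ever reached the collator.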
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24939/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24938
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24938/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24938/comments
https://api.github.com/repos/huggingface/transformers/issues/24938/events
https://github.com/huggingface/transformers/issues/24938
1,812,804,204
I_kwDOCUB6oc5sDTJs
24,938
Serious issue with `device_map='balanced'` on GPT-2
{ "login": "eric-mitchell", "id": 56408839, "node_id": "MDQ6VXNlcjU2NDA4ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eric-mitchell", "html_url": "https://github.com/eric-mitchell", "followers_url": "https://api.github.com/users/eric-mitchell/followers", "following_url": "https://api.github.com/users/eric-mitchell/following{/other_user}", "gists_url": "https://api.github.com/users/eric-mitchell/gists{/gist_id}", "starred_url": "https://api.github.com/users/eric-mitchell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eric-mitchell/subscriptions", "organizations_url": "https://api.github.com/users/eric-mitchell/orgs", "repos_url": "https://api.github.com/users/eric-mitchell/repos", "events_url": "https://api.github.com/users/eric-mitchell/events{/privacy}", "received_events_url": "https://api.github.com/users/eric-mitchell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger (especially the user says `Simply removing the device_map='balanced' argument produces the correct output in all cases`)", "cc @SunMarc who was investigating the same thing on another (or maybe the same?) model.", "Hi @eric-mitchell , thanks for reporting. Concerning the warning in `4.31.0`, this is solved in the latest version `4.32.0` with this [PR](https://github.com/huggingface/transformers/pull/25101).\r\nFor the hardware issue, can you try with the latest version `4.32.0` ? I don't have a A6000 right now but I will try to reproduce this issue asap. To summarize, on a A6000 + transformers `4.29.0` , you get `tensor(10.8342, grad_fn=<ToCopyBackward0>) `with `device_map=\"auto\"` and `tensor(4.0143, grad_fn=<ToCopyBackward0>) `without `device_map`, is that right ? ", "@SunMarc That's right, removing `device_map='balanced'` fixes the problem.\r\n\r\nI re-ran my repro snippet with `4.32.0` on a dual-A6000 machine, and the problem seems resolved (I get the expected output, same as A100).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
NONE
null
### System Info - `transformers` version: 4.29.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Loading GPT-2 with `device_map='balanced'` silently fails to load the pre-trained parameters for the LM head. On transformers `4.29.1`, there is no warning; if I upgrade to `4.31.0`, there is a warning that the LM head is not using pre-trained weights: ``` Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2-xl and are newly initialized: ['lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` The problem seems to be hardware-specific (a dual-A100 machine does **not** show this problem, but a dual-A6000 machine does). Repro: ``` import transformers m = transformers.GPT2LMHeadModel.from_pretrained("gpt2-xl", device_map='balanced', cache_dir='/scr/em7') tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2-xl", cache_dir='/scr/em7') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") inputs['labels'] = inputs.input_ids.clone() outputs = m(**inputs) loss = outputs.loss print(loss) ``` Running this twice on a dual-A6000 machine with transformers 4.29.1, I get ``` tensor(10.8342, grad_fn=<ToCopyBackward0>) ``` both times. On a dual-A100 machine, I get the expected value of ``` tensor(4.0143, grad_fn=<ToCopyBackward0>) ``` Simply removing the `device_map='balanced'` argument produces the correct output in all cases. If I had to guess, I'd say there is a bug in `PretrainedModel.from_pretrained()` and/or `PretrainedModel._load_pretrained_model`. ### Expected behavior The loss should be the same (`tensor(4.0143, grad_fn=<ToCopyBackward0>)`) on all machines and model configurations.
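A possible follow-up diagnostic (not part of the original report): when a model is loaded with a `device_map`, it exposes where each submodule was placed, which can help narrow down whether the `lm_head` was dispatched or initialized unexpectedly on a given machine.

```python
# Illustrative inspection of the dispatched model `m` from the repro above;
# the example device-map keys in the comment are assumptions, not exact output.
print(m.hf_device_map)              # e.g. {'transformer.wte': 0, ..., 'lm_head': 1}
print(m.lm_head.weight.device)      # device that actually holds the head weights
print(m.lm_head.weight.abs().mean())  # freshly initialized weights tend to look very
                                       # different in scale from pretrained ones
```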
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24938/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24937
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24937/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24937/comments
https://api.github.com/repos/huggingface/transformers/issues/24937/events
https://github.com/huggingface/transformers/issues/24937
1,812,800,546
I_kwDOCUB6oc5sDSQi
24,937
Support symbolic tracing for NeoX models
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @norabelrose \r\n\r\nThat would be very nice if you are able to enable this πŸ€— . Thanks in advance.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I'd love to contribute if possible!", "> I'd love to contribute if possible!\r\n\r\nSorry I haven't gotten around to doing this, I was sort of hoping my related PR #24941 could get merged first. It's been stuck on some silly style issue in the CI. You can actually extract a computational graph easily by writing a [simple custom backend](https://pytorch.org/docs/stable/dynamo/custom-backends.html) for Torch Dynamo, and it'd be nice to be able to use either Dynamo or symbolic tracing." ]
1,689
1,692
null
CONTRIBUTOR
null
### Feature request Currently `transformers.utils.fx.symbolic_trace` fails when passed any NeoX model, and I'd like to fix that: ``` NotImplementedError: Model GPTNeoXForCausalLM is not supported yet, supported models: AlbertForMaskedLM, AlbertForMultipleChoice, AlbertForPreTraining, AlbertForQuestionAnswering, AlbertForSequenceClassification, AlbertForTokenClassification, AlbertModel, AltCLIPModel, AltCLIPTextModel, AltCLIPVisionModel, BartForCausalLM, BartForConditionalGeneration, BartForQuestionAnswering, BartForSequenceClassification, BartModel, BertForMaskedLM, BertForMultipleChoice, BertForNextSentencePrediction, BertForPreTraining, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification, BertLMHeadModel, BertModel, BlenderbotForCausalLM, BlenderbotForConditionalGeneration, BlenderbotModel, BlenderbotSmallForCausalLM, BlenderbotSmallForConditionalGeneration, BlenderbotSmallModel, BloomForCausalLM, BloomForQuestionAnswering, BloomForSequenceClassification, BloomForTokenClassification, BloomModel, CLIPModel, CLIPTextModel, CLIPTextModelWithProjection, CLIPVisionModel, CLIPVisionModelWithProjection, ConvNextBackbone, ConvNextForImageClassification, ConvNextModel, DebertaForMaskedLM, DebertaForQuestionAnswering, DebertaForSequenceClassification, DebertaForTokenClassification, DebertaModel, DebertaV2ForMaskedLM, DebertaV2ForMultipleChoice, DebertaV2ForQuestionAnswering, DebertaV2ForSequenceClassification, DebertaV2ForTokenClassification, DebertaV2Model, DistilBertForMaskedLM, DistilBertForMultipleChoice, DistilBertForQuestionAnswering, DistilBertForSequenceClassification, DistilBertForTokenClassification, DistilBertModel, DonutSwinModel, ElectraForCausalLM, ElectraForMaskedLM, ElectraForMultipleChoice, ElectraForPreTraining, ElectraForQuestionAnswering, ElectraForSequenceClassification, ElectraForTokenClassification, ElectraModel, GPT2DoubleHeadsModel, GPT2ForQuestionAnswering, GPT2ForSequenceClassification, GPT2ForTokenClassification, GPT2LMHeadModel, GPT2Model, GPTJForCausalLM, GPTJForQuestionAnswering, GPTJForSequenceClassification, GPTJModel, GPTNeoForCausalLM, GPTNeoForQuestionAnswering, GPTNeoForSequenceClassification, GPTNeoForTokenClassification, GPTNeoModel, GitVisionModel, HubertForCTC, HubertForSequenceClassification, HubertModel, LayoutLMForMaskedLM, LayoutLMForQuestionAnswering, LayoutLMForSequenceClassification, LayoutLMForTokenClassification, LayoutLMModel, LxmertForPreTraining, LxmertForQuestionAnswering, LxmertModel, M2M100ForConditionalGeneration, M2M100Model, MBartForCausalLM, MBartForConditionalGeneration, MBartForQuestionAnswering, MBartForSequenceClassification, MBartModel, MT5ForConditionalGeneration, MT5Model, MarianForCausalLM, MarianMTModel, MarianModel, MegatronBertForCausalLM, MegatronBertForMaskedLM, MegatronBertForMultipleChoice, MegatronBertForNextSentencePrediction, MegatronBertForPreTraining, MegatronBertForQuestionAnswering, MegatronBertForSequenceClassification, MegatronBertForTokenClassification, MegatronBertModel, MobileBertForMaskedLM, MobileBertForMultipleChoice, MobileBertForNextSentencePrediction, MobileBertForPreTraining, MobileBertForQuestionAnswering, MobileBertForSequenceClassification, MobileBertForTokenClassification, MobileBertModel, NezhaForMaskedLM, NezhaForMultipleChoice, NezhaForNextSentencePrediction, NezhaForPreTraining, NezhaForQuestionAnswering, NezhaForSequenceClassification, NezhaForTokenClassification, NezhaModel, OPTForCausalLM, OPTForQuestionAnswering, OPTForSequenceClassification, OPTModel, 
PLBartForCausalLM, PLBartForConditionalGeneration, PLBartForSequenceClassification, PLBartModel, PeftModelForCausalLM, PeftModelForSeq2SeqLM, PegasusForCausalLM, PegasusForConditionalGeneration, PegasusModel, ResNetBackbone, ResNetForImageClassification, ResNetModel, RobertaForCausalLM, RobertaForMaskedLM, RobertaForMultipleChoice, RobertaForQuestionAnswering, RobertaForSequenceClassification, RobertaForTokenClassification, RobertaModel, SegformerForImageClassification, SegformerForSemanticSegmentation, SegformerModel, Speech2Text2Decoder, Speech2Text2ForCausalLM, Speech2TextForConditionalGeneration, Speech2TextModel, SwinBackbone, SwinForImageClassification, SwinForMaskedImageModeling, SwinModel, T5ForConditionalGeneration, T5Model, TrOCRDecoder, TrOCRForCausalLM, ViTForImageClassification, ViTForMaskedImageModeling, ViTModel, Wav2Vec2ForCTC, Wav2Vec2ForMaskedLM, Wav2Vec2ForPreTraining, Wav2Vec2ForSequenceClassification, Wav2Vec2Model, XGLMForCausalLM, XGLMModel ``` ### Motivation The main motivation for this is to enable graph rewriting with the EleutherAI Pythia model suite. Graph rewriting has various interpretability use-cases and the Pythia suite was designed for interpretability research. ### Your contribution I plan to implement a PR for this soon unless there's some major blocker for it.
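For concreteness, this is the usage the request is asking for; today it raises the `NotImplementedError` shown above for NeoX models (the checkpoint and input names below are just examples).

```python
# Desired usage once NeoX models are supported by the FX tracer.
from transformers import AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
traced.graph.print_tabular()  # inspect or rewrite the captured computation graph
```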
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24937/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24936
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24936/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24936/comments
https://api.github.com/repos/huggingface/transformers/issues/24936/events
https://github.com/huggingface/transformers/issues/24936
1,812,783,689
I_kwDOCUB6oc5sDOJJ
24,936
Add support for Llama-2-70b-chat-hf in transformers
{ "login": "Daryl149", "id": 6736668, "node_id": "MDQ6VXNlcjY3MzY2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6736668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daryl149", "html_url": "https://github.com/Daryl149", "followers_url": "https://api.github.com/users/Daryl149/followers", "following_url": "https://api.github.com/users/Daryl149/following{/other_user}", "gists_url": "https://api.github.com/users/Daryl149/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daryl149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daryl149/subscriptions", "organizations_url": "https://api.github.com/users/Daryl149/orgs", "repos_url": "https://api.github.com/users/Daryl149/repos", "events_url": "https://api.github.com/users/Daryl149/events{/privacy}", "received_events_url": "https://api.github.com/users/Daryl149/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Turns out the model is supported, but has a bug in the `config.json`, see https://github.com/facebookresearch/llama/issues/423\r\nAlso, I am adding a version that works without any manual changes here: https://huggingface.co/daryl149/llama-2-70b-chat-hf" ]
1,689
1,689
1,689
NONE
null
### Model description Not sure if it is a bug, or that it is intentionally not supported yet. In either case: there have been 0 confirmations of people being able to successfully run the official **Llama-2-70b-chat-hf** model in transformers. ### Open source status - [ ] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official model weights: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf Related open bug: https://github.com/facebookresearch/llama/issues/423
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24936/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24935
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24935/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24935/comments
https://api.github.com/repos/huggingface/transformers/issues/24935/events
https://github.com/huggingface/transformers/pull/24935
1,812,780,805
PR_kwDOCUB6oc5V7-dz
24,935
fix llama2 chat system prompt
{ "login": "jphme", "id": 2862336, "node_id": "MDQ6VXNlcjI4NjIzMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jphme", "html_url": "https://github.com/jphme", "followers_url": "https://api.github.com/users/jphme/followers", "following_url": "https://api.github.com/users/jphme/following{/other_user}", "gists_url": "https://api.github.com/users/jphme/gists{/gist_id}", "starred_url": "https://api.github.com/users/jphme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jphme/subscriptions", "organizations_url": "https://api.github.com/users/jphme/orgs", "repos_url": "https://api.github.com/users/jphme/repos", "events_url": "https://api.github.com/users/jphme/events{/privacy}", "received_events_url": "https://api.github.com/users/jphme/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Hey! Thanks for catching this. There seems to be a missing space in our prompt, here: `Youranswers should not include...`.\r\n> \r\n> I can't accept the PR in the current state as the formating is there to comply with our linters. If you can just add the missing space would be great! Thanks for catching this. We have a test in the conversational pipeline, which did not detect this!\r\n\r\nsure; tried to fix the formatting and make it more readable....", "Hey! Sorry, #24930 is ready, we'll merge it in favor of this one! Thanks a lot for pointing out and contributing! πŸ€— ", "> Hey! Sorry, #24930 is ready, we'll merge it in favor of this one! Thanks a lot for pointing out and contributing! πŸ€—\r\n\r\nsure, didnt see that there was another PR fixing this. \r\nBut just for next time - why did the checks still fail? I have absolutely no idea from the CircleCI messages how to check myself or what the reason is, lines should be short enough?", "You have to run the linter! The lines are probably too short this time and can be optimised haha! Use `make style` again after each changed. " ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Fixes the Llama 2 System Prompt so its consistent with METAs version. When testing my finetuning script, I found that the official LLAMA Code and the Huggingface code returned different tokens for the same code, apparently due to different linebreaks. Code to test: ```python #from https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46C11-L46C11 DEFAULT_SYSTEM_PROMPT = """\ You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.""" tokenizer = LlamaTokenizer.from_pretrained("/Users/jph/dev2/models/Llama-2-7b-chat-hf") token_llama=tokenizer.encode(DEFAULT_SYSTEM_PROMPT) from transformers.models.llama.tokenization_llama import DEFAULT_SYSTEM_PROMPT as DEFAULT_SYSTEM_PROMPT_TRANSFORMERS token_hf=tokenizer.encode(DEFAULT_SYSTEM_PROMPT_TRANSFORMERS_FAST) for i in range(100): print (f"llama: {tokenizer.decode(token_llama[i])} , transformers: {tokenizer.decode(token_hf[i])}") if token_llama[i]!=token_hf[i]: print('!!!') ``` which returns: ``` ... llama: being , transformers: being llama: safe , transformers: safe llama: . , transformers: . llama: Your , transformers: Your llama: answers , transformers: ans !!! llama: should , transformers: wers ... ``` I don't know how big the difference is, but it surely doesn't make sense to deviate here (even if there is no performance issue and it's just for debugging reasons). ## Who can review? @ArthurZucker @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24935/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24935", "html_url": "https://github.com/huggingface/transformers/pull/24935", "diff_url": "https://github.com/huggingface/transformers/pull/24935.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24935.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24934
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24934/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24934/comments
https://api.github.com/repos/huggingface/transformers/issues/24934/events
https://github.com/huggingface/transformers/issues/24934
1,812,635,816
I_kwDOCUB6oc5sCqCo
24,934
Change package name from "transformers" to something less generic
{ "login": "geajack", "id": 2124157, "node_id": "MDQ6VXNlcjIxMjQxNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geajack", "html_url": "https://github.com/geajack", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "organizations_url": "https://api.github.com/users/geajack/orgs", "repos_url": "https://api.github.com/users/geajack/repos", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "received_events_url": "https://api.github.com/users/geajack/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You do realize this would break the existing code of many many people?", "Yes\r\n\r\nMy theory/suggestion is that HF is still a relatively young library used by a relatively niche community used to having to move in a rapidly developing field (we're not talking about the C standard lib or something), that a lot of people likely feel this way, and that if this change were implemented it would be looked back on as a good decision ten years later (not as if we're new to breaking changes in the Python community - hell even HF has pushed breaking changes before)", "That kind of stuff would be hell for projects like ours, we have many low level patches in place to extend HF.", "Hello there.\r\nWould like to share some mental models about this\r\n\r\n_General TLTR ; No, because for now, the libraries are consistent and helpful in becoming the standard ._\r\n\r\nThe following comments have sections\r\n * Impact \r\n **TLTR;** _17M monthly downloads, 1700 monthly MAUs, +100K repositories impact_\r\n Q : can I help you brainstorm other names -syntactically and semantically aligned- that could help solve your problem?\r\n * Considerations in the matter\r\n **TLTR;** _Other standard names are also taken._\r\n_Balancing Makers and Takers to scale and sustain Open Source is a line of thought to take into account_ \r\nQ : would it be worthy to think deeply about the trade-off that the libraries are giving with respect to what they are taking ? Can I help you brainstorm the utilities you put on **evaluate.py** and **datasets.py** on your code and submit a contribution so we can encapsulate your needs to all coders and avoid frustration?\r\n * Responsibility when becoming the standard\r\n**TLTR;** _Motivation of owners might be becoming the standard. They seem worried about that responsibility in many dimensions._\r\nQ : do you think we shall consider this dimension into account for this matter? \r\n * Bibliography and Openness \r\n\r\n\r\n### Impact \r\n\r\n\r\n**TLTR;** _17M monthly downloads, 1700 monthly MAUs, +100K repositories impact_\r\n**Hypothesis limitations**: _this data could change with other insights about MAUs funnel conversion and maintained active repositories + private repositories. Total MAUs have not being calculated due to incomplete information that would made data-driven conclusions too intuitive_\r\n\r\n\r\nIn order to gain some data-driven perspective about the impact of this change, what I did is check-in the downloads coming from [PyPI](https://pypistats.org/) from the 3 libraries and make a sum of the last month's downloads, giving an overall sum of 17M-ish . I'm assuming that there is a _clear funnel_ here that separates users that are newcomers, explorers, and MAUs ( Monthly Active Users ). My analysis took me to focus on these last ones, as they are using the code regularly or might be the ones that might be using the libraries in a production scenario or in a work dependent project. 
Taking out 4 orders of magnitude - in a pessimistic overview - the hypothesis takes us to new 1700 montly-MAUs\r\n\r\n<img width=\"1153\" alt=\"Captura de pantalla 2023-07-27 a las 12 47 21\" src=\"https://github.com/huggingface/transformers/assets/24204714/74b1fce6-3eec-471b-a204-77e5a804dd79\">\r\n\r\n<img width=\"1117\" alt=\"Captura de pantalla 2023-07-27 a las 12 48 20\" src=\"https://github.com/huggingface/transformers/assets/24204714/424d490d-7b22-4965-a32f-575e712f37d9\">\r\n\r\n<img width=\"1105\" alt=\"Captura de pantalla 2023-07-27 a las 12 47 50\" src=\"https://github.com/huggingface/transformers/assets/24204714/1d699b9b-c26e-4ade-a7c1-3582f8c61cbb\">\r\n\r\n\r\nTherefore, the data-driven impact exploration took me to **used-by** reporting in the head page of the repository, as the impact of a number of repositories that depend on the libraries. Transformers library has been reported to be used by 84,4 K people, datasets by 20,4 k people, and datasets by 2.9 k people. This gave a total of +100K repositories this change could have impact in . \r\n\r\nHypothesis limitations: this data could change with other insights about MAUs funnel conversion and maintained active repositories + private repositories. \r\n\r\n_Before going further, and I guess this is a question directly for @geajack , can I help you brainstorm other names - syntactically and semantically aligned - that could help solve your problem?_ \r\n\r\n\r\n### Considerations in the matter\r\n\r\n**TLTR;** _Other standard names are also taken._\r\n_Balancing Makers and Takers to scale and sustain Open Source is a line of thought to take into account_ \r\n\r\nWhat I understood from the issue is that the generalization of the package name supposed an interference and a cognitive dissonance WRT the naming standard with respect to other libraries. Then I went to `check-availability` package to see if **other standard names** could solve your problem - tried dataset and evaluation - and none were available. \r\n\r\n```\r\ncheck-availability pypi dataset --verbose 3\r\nGET https://pypi.org/project/dataset\r\nGot status code 200\r\nThe name dataset is not available on pypi\r\n```\r\n```\r\nGET https://pypi.org/project/evaluation\r\nGot status code 200\r\nThe name evaluation is not available on pypi\r\n```\r\nI really -really- tried to benchmark your motivations with Open Source Research insights [1](https://arxiv.org/pdf/2101.10291.pdf) [2](https://arxiv.org/pdf/2306.05548.pdf) [3](https://openaccess.city.ac.uk/id/eprint/5955/1/ContentServer_%281%29.pdf) to try to have an _empathetic generalistic view about this concern ._ Still maturing it, but what Im taking is that you might encounter beneficial and aligned with some Open Source ideas(yet to be proven representative) that generalistic **names** are not proprietary, beyond your individual code problem. \r\n\r\n However, I invite you to go deeper into motivations behind Open Source, as there seem to be equally important motivations that contributors and users are driven by. Encourage you to please share with me mature ideas that might not be aligned with my mental model. If we can go beyond one individual, and try to catch a community o a more general mental model, that would be amazing. \r\n\r\nOn the other hand, putting myself in Hugginface's shoes, I couldnΒ΄t stop thinking broadly about their Open Source sustainability contribution with respect to other companies and proprietary software. 
Really recommend this [reading](https://dri.es/balancing-makers-and-takers-to-scale-and-sustain-open-source)!\r\n\r\n\r\n_Before going further , and I guess these is a question for @geajack , would it be worthy to think deeply about the trade-off that the libraries are giving with respect to what they are taking ? Can I help you brainstorm the utilities you put on evaluate.py and datasets.py on your code and submit a contribution so we can encapsulate your needs to all coders and avoid frustration?_ \r\n\r\n### Responsibility when becoming the standard\r\n\r\n**TLTR;** _Motivation of owners might be becoming the standard. They seem worried about that responsibility in many dimensions._\r\n\r\nIt might be fair to think that that **naming** in this case might entail the search for becoming the standard, and I left to the reader to analyze whether the owners of the libraries are being responsible or not with respect to their Open Source duties for being recognized as such beyond the naming in order to analyze coherence. On my side, the trust level system and contributor management , together with the pro-active response with respect to other Open Source responsibilities, talk by itself. This doesnΒ΄t entail that they should have a present and future concern on this matter. \r\n\r\nI guess this is a question for @geajack , do you think we shall consider this dimension into account for this matter? \r\n\r\n### Bibliography and Openess \r\n\r\nBeyond the cited readings, I really recommend this [book](https://www.amazon.com/Perspectives-Free-Source-Software-Press/dp/0262562278) . \r\n\r\nI m acknowledging that this response might be dense, so I would like to thank the reader, the owner of this issue, the contributors, and the maintainer for going through this material. As an emotional openness exercise and following the bravery of @geajack , I must confess It has taken me a significant amount of **courage** to press Comment on this one. \r\nI just hope that this can glimpse another logical perspective, new possible paths coming from questions, and other thoughts that might be mutable due to new shreds of evidence. \r\n\r\n\r\n\r\n\r\n", "@SoyGema thanks for the detailed breakdown. First of all I just want to say that I don't intend to present myself as some kind of sponsor for these issues - I just want there to be a place in the issue tracker for people to voice this concern if it is indeed a common concern.\r\n\r\nI do think you may have misunderstood the issue at a couple of points, though. In your second section, it sounds like you think the complaint is that because HF is taking up `evaluate` on PyPi, that therefore I or somebody else can't have our own package on PyPi. That isn't the issue - the issue is that if I want to use HF's `evaluate` locally, I can't have my own *local* `evaluate.py`.\r\n\r\nMy most recent use-case for this was wanting a script called `evaluate.py` that I would actually run from the command line to run evaluation of my results - I had to change it to something more awkward like `evaluation.py`, which is annoying because it is after all a command and should ideally have the form of an imperative verb. I also routinely have a package called in my codebases to provide utility functions for managing my own datasets. 
As it happens, I've always called that package `data`, but I could imagine another programmer wanting to call it `datasets` and being annoyed that they can't.\r\n\r\nI'm not under the impression that this is a change that can be made tomorrow or even this year. When I opened these issues I pictured them (assuming they didn't just get buried) being the kinds of issues that sit open for years and years accumulating hundreds of comments, acting as an informal community forum before anything is done about them. The only place on the internet I could find someone expressing a similar sentiment was [this highly upvoted /r/Python comment](https://old.reddit.com/r/Python/comments/xeoyqc/how_did_hugging_face_get_such_good_pypi_package/ioi5ud5/), but I suspect a fair few people feel this way.", "Hey @geajack thanks for your response and for the clarification. Thanks also for the reddit link, that wasn't on my radar until now. As feedback , if you could share a line with the motivations and links behind this issue when opened that would be great!πŸ™‚\r\n\r\nI'm happy that you already have a turn around for this . Yes, you are correct. I thought that this was beyond a local use of a script and more library oriented due to the impact of the change and my normal sparks under 'annoying' naming scenario.\r\nI agree with your impression, and let's see what time brings πŸ™‚\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### Feature request I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude. My preference would be a pattern like what you get with all the other big libraries like numpy or pandas: ``` import huggingface as hf # hf.transformers, hf.datasets, hf.evaluate ``` or things like ``` import huggingface.transformers as tf # tf.load_model(), etc ``` If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on. I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this. Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name". Sister issues: - **transformers** - [datasets](https://github.com/huggingface/datasets/issues/6053) - [evaluate](https://github.com/huggingface/evaluate/issues/476) ### Motivation Not taking up package names the user is likely to want to use. ### Your contribution No - more a matter of internal discussion among core library authors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24934/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24934/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24933
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24933/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24933/comments
https://api.github.com/repos/huggingface/transformers/issues/24933/events
https://github.com/huggingface/transformers/issues/24933
1,812,501,830
I_kwDOCUB6oc5sCJVG
24,933
KeyError: 'input_ids' on Whisper training with include_inputs_for_metrics
{ "login": "dmurillo976s", "id": 79943298, "node_id": "MDQ6VXNlcjc5OTQzMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/79943298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dmurillo976s", "html_url": "https://github.com/dmurillo976s", "followers_url": "https://api.github.com/users/dmurillo976s/followers", "following_url": "https://api.github.com/users/dmurillo976s/following{/other_user}", "gists_url": "https://api.github.com/users/dmurillo976s/gists{/gist_id}", "starred_url": "https://api.github.com/users/dmurillo976s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dmurillo976s/subscriptions", "organizations_url": "https://api.github.com/users/dmurillo976s/orgs", "repos_url": "https://api.github.com/users/dmurillo976s/repos", "events_url": "https://api.github.com/users/dmurillo976s/events{/privacy}", "received_events_url": "https://api.github.com/users/dmurillo976s/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `include_inputs_for_metrics=True` feature only supports text models, not other modalities for now.", "Noted. Thanks for the quick response! I couldn't find anything that suggested so in the docs, so I hope this github issue comes in handy if anyone else tries this. \r\n\r\nI'll go ahead and close the issue.", "You don't need to close this, we should probably support other modalities better in the Trainer ;-) I was mainly stating that `input_ids` is hard-coded in this code and we would need to update it.", "Oh ok. I opened it again then. Thanks!", "Should be fixed by the PR linked above if you want to try.", "Hi. That's awesome! I just tried it in the colab notebook I shared and it worked nicely. Thanks!", "please can you tell me the solution!!!", "Hey @rokayabencheikh - could you confirm that you're using the latest version of the Transformers library? i.e. with:\r\n```\r\npip install --upgrade transformers\r\n```" ]
1,689
1,702
1,689
NONE
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This link https://colab.research.google.com/drive/1mMolQVClnnC_hi1J6DeCUOElJtatSXoz?usp=sharing points to a Colab notebook that reproduces the issue. It is a slightly modified version from the official post in https://huggingface.co/blog/fine-tune-whisper Basically, whenever setting `include_inputs_for_metrics=True` for a training with whisper, the error message and stack trace below appear. As a clarification, I was just experimenting with `include_inputs_for_metrics`. I realize it doesn't make much sense in this context to receive the bare inputs in `compute_metrics`. It would be better to receive any other type of metadata from other dataset fields. Nevertheless, it seems like this is a bug, given that these models receive their inputs as `input_features` and not `input_ids`. ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-20-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>() ----> 1 trainer.train() 6 frames [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1524 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1525 ) -> 1526 return inner_training_loop( 1527 args=args, 1528 resume_from_checkpoint=resume_from_checkpoint, [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1886 self.control = self.callback_handler.on_step_end(args, self.state, self.control) 1887 -> 1888 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1889 else: 1890 self.control = self.callback_handler.on_substep_end(args, self.state, self.control) [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 2211 metrics.update(dataset_metrics) 2212 else: -> 2213 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) 2214 self._report_to_hp_search(trial, self.state.global_step, metrics) 2215 [/usr/local/lib/python3.10/dist-packages/transformers/trainer_seq2seq.py](https://localhost:8080/#) in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, **gen_kwargs) 157 self._gen_kwargs = gen_kwargs 158 --> 159 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) 160 161 def predict( [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix) 2919 2920 eval_loop = 
self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop -> 2921 output = eval_loop( 2922 eval_dataloader, 2923 description="Evaluation", [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 3109 # Prediction step 3110 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) -> 3111 inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None 3112 3113 if is_torch_tpu_available(): [/usr/local/lib/python3.10/dist-packages/transformers/feature_extraction_utils.py](https://localhost:8080/#) in __getitem__(self, item) 84 """ 85 if isinstance(item, str): ---> 86 return self.data[item] 87 else: 88 raise KeyError("Indexing with integers is not available when using Python based feature extractors") KeyError: 'input_ids' ``` ### Expected behavior The dummy training should be able to complete its validation loop.
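An illustrative check of the root cause (not from the report): Whisper declares a different main input name than text models, which is why the hard-coded `inputs["input_ids"]` lookup in the evaluation loop raises `KeyError`. The commented Trainer-side lookup is a sketch of the general idea only, not the exact patch.

```python
# Speech models use "input_features" rather than "input_ids" as their main input.
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
print(model.main_input_name)  # "input_features"

# Sketch of a generic lookup the Trainer could use instead of hard-coding "input_ids":
#   inputs_decode = inputs[model.main_input_name] if args.include_inputs_for_metrics else None
```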
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24933/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24932
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24932/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24932/comments
https://api.github.com/repos/huggingface/transformers/issues/24932/events
https://github.com/huggingface/transformers/issues/24932
1,812,381,133
I_kwDOCUB6oc5sBr3N
24,932
Don't wait for mlflow.log_artifact in Trainer api
{ "login": "avramdj", "id": 48069158, "node_id": "MDQ6VXNlcjQ4MDY5MTU4", "avatar_url": "https://avatars.githubusercontent.com/u/48069158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avramdj", "html_url": "https://github.com/avramdj", "followers_url": "https://api.github.com/users/avramdj/followers", "following_url": "https://api.github.com/users/avramdj/following{/other_user}", "gists_url": "https://api.github.com/users/avramdj/gists{/gist_id}", "starred_url": "https://api.github.com/users/avramdj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avramdj/subscriptions", "organizations_url": "https://api.github.com/users/avramdj/orgs", "repos_url": "https://api.github.com/users/avramdj/repos", "events_url": "https://api.github.com/users/avramdj/events{/privacy}", "received_events_url": "https://api.github.com/users/avramdj/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "I think this makes a lot of sense. \r\n\r\nNote that:\r\n\r\n> Integrations with reporting platforms are entirely maintained by the developers of those integrations or the community.\r\n\r\nWould you like to open a PR πŸ€— ?", "Hi @ydshieh, thank's for the response. \r\n\r\nBefore opening a PR, I'd like to discuss the feasability of such a feature in the first place. \r\nI can't tell if this would require an aggressive refactor of platform integration callbacks, if so, maybe it isn't worth it at the moment. \r\n\r\nAs I mentioned, I have no idea how introducing this concurrency will affect the rest of the Trainer api, it could cause a race condition when there's stuff like `save_total_limit`. Anyway, here's an outline I thought of so far, haven't tested it yet:\r\n\r\n```py\r\nclass AsyncTrainerCallback(TrainerCallback):\r\n # ...\r\n def __init__(self, *args, **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self._log_queue = queue.Queue()\r\n self._worker_thread = threading.Thread(target=self._worker_loop)\r\n self._shutdown = False\r\n self._worker_loop.start()\r\n\r\n def _worker_loop(self):\r\n while not self._shutdown or not self._log_queue.empty():\r\n task = self._log_queue.get()\r\n task()\r\n self._log_queue.task_done()\r\n \r\n def _stop_worker(self):\r\n self._shutdown = True\r\n if not self._log_queue.empty():\r\n print(\"Waiting for logging to finish...\")\r\n self._log_queue.join()\r\n self._worker_thread.join()\r\n \r\n\r\nclass MLflowCallback(AsyncTrainerCallback):\r\n def on_save(self, args, state, control, **kwargs):\r\n if self._initialized and state.is_world_process_zero and self._log_artifacts:\r\n ckpt_dir = f\"checkpoint-{state.global_step}\"\r\n artifact_path = os.path.join(args.output_dir, ckpt_dir)\r\n\r\n ### instead of:\r\n # logger.info(f\"Logging checkpoint artifacts in {ckpt_dir}. This may take time.\")\r\n # self._ml_flow.pyfunc.log_model(\r\n # ckpt_dir,\r\n # artifacts={\"model_path\": artifact_path},\r\n # python_model=self._ml_flow.pyfunc.PythonModel(),\r\n # )\r\n\r\n ### do:\r\n task = lambda: self._ml_flow.pyfunc.log_model(\r\n ckpt_dir,\r\n artifacts={\"model_path\": artifact_path},\r\n python_model=self._ml_flow.pyfunc.PythonModel(),\r\n )\r\n self._log_queue.put(task)\r\n\r\n def on_train_end(self, args, state, control, **kwargs):\r\n # ...\r\n self._stop_worker()\r\n```", "I think it's probably better not to create the worker from the start. Just create the queue and worker inside `on_save`, then put the task to the queue.\r\n\r\nWE shouldn't have to worry about `save_total_limit`, as I don't see it is used in the current `MLflowCallback`, which means it is controlled by the `Trainer` class.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### Feature request It would be ideal if there was an option to make `transformers.Trainer.train` with `HF_MLFLOW_LOG_ARTIFACTS=1` continue training while a separate process/thread logs the artifacts, so the whole run doesn't hang because of a network bottleneck. I would be happy to discuss the possibilities and/or limitations of something like this (for example if it conflicts with `save_total_limit`). ### Motivation I've had multiple situations where model artifact logging was almost as slow as training itself. This is especially a problem when training on expensive cloud GPU nodes for reasons I don't even need to explain. ### Your contribution I would be willing to discuss and potentially contribute to a feature like this once we've discussed it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24932/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24931
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24931/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24931/comments
https://api.github.com/repos/huggingface/transformers/issues/24931/events
https://github.com/huggingface/transformers/pull/24931
1,812,352,338
PR_kwDOCUB6oc5V6f68
24,931
[doc] `image_processing_vilt.py` wrong default documented
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
Sync the doc with reality. https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/src/transformers/models/vilt/image_processing_vilt.py#L299 https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/src/transformers/models/vilt/image_processing_vilt.py#L310
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24931/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24931", "html_url": "https://github.com/huggingface/transformers/pull/24931", "diff_url": "https://github.com/huggingface/transformers/pull/24931.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24931.patch", "merged_at": 1689800261000 }
https://api.github.com/repos/huggingface/transformers/issues/24930
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24930/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24930/comments
https://api.github.com/repos/huggingface/transformers/issues/24930/events
https://github.com/huggingface/transformers/pull/24930
1,812,329,329
PR_kwDOCUB6oc5V6avK
24,930
Fix missing spaces in system prompt of Llama2 tokenizer
{ "login": "chenjoya", "id": 20626415, "node_id": "MDQ6VXNlcjIwNjI2NDE1", "avatar_url": "https://avatars.githubusercontent.com/u/20626415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chenjoya", "html_url": "https://github.com/chenjoya", "followers_url": "https://api.github.com/users/chenjoya/followers", "following_url": "https://api.github.com/users/chenjoya/following{/other_user}", "gists_url": "https://api.github.com/users/chenjoya/gists{/gist_id}", "starred_url": "https://api.github.com/users/chenjoya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chenjoya/subscriptions", "organizations_url": "https://api.github.com/users/chenjoya/orgs", "repos_url": "https://api.github.com/users/chenjoya/repos", "events_url": "https://api.github.com/users/chenjoya/events{/privacy}", "received_events_url": "https://api.github.com/users/chenjoya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey thanks, a similar PR is also #24935. Same comment would apply here, make sure `make style` is green and should be good! ", "Hi, thank you so much for your help. Seems it still cannot go green. I am not very familiar with it ... Could you give me some guidance? ;)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This PR fixes a typo in the system prompt of the Llama2 tokenizer. There are missing spaces compared to https://github.com/facebookresearch/llama/blob/main/llama/generation.py Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24930/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24930", "html_url": "https://github.com/huggingface/transformers/pull/24930", "diff_url": "https://github.com/huggingface/transformers/pull/24930.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24930.patch", "merged_at": 1689942535000 }
https://api.github.com/repos/huggingface/transformers/issues/24929
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24929/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24929/comments
https://api.github.com/repos/huggingface/transformers/issues/24929/events
https://github.com/huggingface/transformers/issues/24929
1,812,283,672
I_kwDOCUB6oc5sBUEY
24,929
The xformers result does not match the normal attention result
{ "login": "guozhiyao", "id": 21999339, "node_id": "MDQ6VXNlcjIxOTk5MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/21999339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guozhiyao", "html_url": "https://github.com/guozhiyao", "followers_url": "https://api.github.com/users/guozhiyao/followers", "following_url": "https://api.github.com/users/guozhiyao/following{/other_user}", "gists_url": "https://api.github.com/users/guozhiyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/guozhiyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guozhiyao/subscriptions", "organizations_url": "https://api.github.com/users/guozhiyao/orgs", "repos_url": "https://api.github.com/users/guozhiyao/repos", "events_url": "https://api.github.com/users/guozhiyao/events{/privacy}", "received_events_url": "https://api.github.com/users/guozhiyao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use the [forums](https://discuss.huggingface.co/) to help debug your code. It's not really Transformers fault if you can't match the results of a model after changing its implementation πŸ˜… " ]
1,689
1,689
1,689
NONE
null
### System Info Collecting environment information... PyTorch version: 1.13.0 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64) GCC version: (GCC) 7.5.0 Clang version: Could not collect CMake version: version 3.22.0 Libc version: glibc-2.32 Python version: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.10.112-005.ali5000.alios7.x86_64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.3.58 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB Nvidia driver version: 470.154 cuDNN version: Probably one of the following: /usr/lib64/libcudnn.so.8.4.0 /usr/lib64/libcudnn_adv_infer.so.8.4.0 /usr/lib64/libcudnn_adv_train.so.8.4.0 /usr/lib64/libcudnn_cnn_infer.so.8.4.0 /usr/lib64/libcudnn_cnn_train.so.8.4.0 /usr/lib64/libcudnn_ops_infer.so.8.4.0 /usr/lib64/libcudnn_ops_train.so.8.4.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.23.4 [pip3] torch==1.13.0+cu111 [pip3] torchaudio==0.11.0 [pip3] torchvision==0.14.0 [conda] No relevant packages A matching Triton is not available, some optimizations will not be enabled. Error caught was: module 'triton.language' has no attribute 'constexpr' A matching Triton is not available, some optimizations will not be enabled. Error caught was: module 'triton.language' has no attribute 'constexpr' xFormers 0.0.15.dev+103e863.d20221125 memory_efficient_attention.flshatt: available - requires GPU with compute capability 7.5+ memory_efficient_attention.cutlass: available memory_efficient_attention.small_k: available swiglu.fused.p.cpp: available is_triton_available: False is_functorch_available: False pytorch.version: 1.13.0 pytorch.cuda: available gpu.compute_capability: 8.0 gpu.name: NVIDIA A100-SXM4-80GB ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use the gpt-neox model for inference and tried to modify `_attn` with xformers to speed it up, but the generated output is wrong with `use_cache=True` while correct with `use_cache=False`. 
I modified the code from #24653 by replacing the `_attn` function of `GPTNeoXAttention` with the code below ``` def _xformers_attn(self, query, key, value, **kwargs): # q, k, v: [bs, num_attention_heads, seq_len, attn_head_size] # xformers input tensors must be in format [B, M, H, K], where B is the batch size, M the sequence length, H the number of heads, and K the embedding size per head # [bs, num_attention_heads, seq_len, attn_head_size] -> [bs, seq_len, num_attention_heads, attn_head_size] query = query.transpose(1, 2).to(value.dtype) key = key.transpose(1, 2).to(value.dtype) value = value.transpose(1, 2) # org [bs, num_attention_heads, seq_len, attn_head_size] # xformers returns a multi-head attention Tensor with shape [B, Mq, H, Kv] output = xops.memory_efficient_attention( query, key, value, op=xops.MemoryEfficientAttentionFlashAttentionOp, attn_bias=xops.LowerTriangularMask(), p=self.config.attention_dropout if self.training else 0.0 ) # [b, sq, np, hn] -> [b, np, sq, hn] matmul_result = output.transpose(1, 2) return matmul_result.to(query.dtype), None ``` The generated output is correct with `use_cache=False`, but wrong with `use_cache=True` (the first token is right but the later ones are wrong). Here is the generated output with `use_cache=True` ![image](https://github.com/huggingface/transformers/assets/21999339/0fe8015e-097c-4854-a8c5-7fe3cfa14250) And I have tested the output of `_attn` and `_xformers_attn` in https://github.com/facebookresearch/xformers/issues/798 , which is correct. ### Expected behavior I want to speed up the attention with xformers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24929/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24928
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24928/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24928/comments
https://api.github.com/repos/huggingface/transformers/issues/24928/events
https://github.com/huggingface/transformers/pull/24928
1,812,188,151
PR_kwDOCUB6oc5V57qz
24,928
Remove `llama2`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24928). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,693
1,689
COLLABORATOR
null
A mistake πŸ˜…
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24928/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24928", "html_url": "https://github.com/huggingface/transformers/pull/24928", "diff_url": "https://github.com/huggingface/transformers/pull/24928.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24928.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24927
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24927/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24927/comments
https://api.github.com/repos/huggingface/transformers/issues/24927/events
https://github.com/huggingface/transformers/pull/24927
1,812,166,092
PR_kwDOCUB6oc5V522f
24,927
Allow generic composite models to pass more kwargs
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looking at the errors, we probably need to check whether the model has `self.encoder` or `self.model.encoder` ", "> Looking at the errors, we probably need to check whether the model has `self.encoder` or `self.model.encoder`\r\n\r\nI can do this, but the `model` in `self.model` is not something from convention I guess: it's more a implementation detail of each (concrete) encoder decoder model class (like `bart`)", "@gante \r\n\r\nLet me know if the latest change LGTY πŸ€— . Thanks!" ]
1,689
1,690
1,690
COLLABORATOR
null
# What does this PR do? Generic composite models (`(Text|Vision|Speech)EncoderDecoder`): their `forward` doesn't have the full info that their 2 components have. Fix #24919
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24927/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24927", "html_url": "https://github.com/huggingface/transformers/pull/24927", "diff_url": "https://github.com/huggingface/transformers/pull/24927.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24927.patch", "merged_at": 1690294020000 }
https://api.github.com/repos/huggingface/transformers/issues/24926
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24926/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24926/comments
https://api.github.com/repos/huggingface/transformers/issues/24926/events
https://github.com/huggingface/transformers/pull/24926
1,812,093,640
PR_kwDOCUB6oc5V5nDy
24,926
fix fsdp checkpointing issues
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? 1. FSDP loading now returns the `load_result` to be given to `_issue_warnings_after_load`. Should be merged after https://github.com/huggingface/accelerate/pull/1745 2. Earlier it was wrongly saving `optimizer.pt` with FSDP; this is now fixed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24926/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24926", "html_url": "https://github.com/huggingface/transformers/pull/24926", "diff_url": "https://github.com/huggingface/transformers/pull/24926.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24926.patch", "merged_at": 1689922046000 }
https://api.github.com/repos/huggingface/transformers/issues/24925
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24925/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24925/comments
https://api.github.com/repos/huggingface/transformers/issues/24925/events
https://github.com/huggingface/transformers/issues/24925
1,812,076,702
I_kwDOCUB6oc5sAhie
24,925
Fully compatible with the open clip Tokeniser
{ "login": "laksjdjf", "id": 22386664, "node_id": "MDQ6VXNlcjIyMzg2NjY0", "avatar_url": "https://avatars.githubusercontent.com/u/22386664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laksjdjf", "html_url": "https://github.com/laksjdjf", "followers_url": "https://api.github.com/users/laksjdjf/followers", "following_url": "https://api.github.com/users/laksjdjf/following{/other_user}", "gists_url": "https://api.github.com/users/laksjdjf/gists{/gist_id}", "starred_url": "https://api.github.com/users/laksjdjf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laksjdjf/subscriptions", "organizations_url": "https://api.github.com/users/laksjdjf/orgs", "repos_url": "https://api.github.com/users/laksjdjf/repos", "events_url": "https://api.github.com/users/laksjdjf/events{/privacy}", "received_events_url": "https://api.github.com/users/laksjdjf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten", "cc @patil-suraj and @ArthurZucker ", "Hey, I am not really sure I understand the issue here. The `pad_token_id` is dependent on the `pad_token`. By default the `pad_token` is set to `'!'` which, maps to `0` : `tokenizer.decode(torch.tensor([0]))`, `tokenizer.convert_tokens_to_ids('!')`. If you set the padding token to a different value by hand, the following will happen: \r\n```python \r\n\r\n>>> tokenizer = CLIPTokenizer.from_pretrained(\"stabilityai/stable-diffusion-2-1\", pad_token=\"<|endoftext|>\")\r\n>>> print(tokenizer.pad_token_id)\r\n49407\r\n>>> tokenizer.pad_token_id = 0\r\n>>> print(tokenizer.pad_token)\r\n '!'\r\n```", "> Hey, I am not really sure I understand the issue here. The `pad_token_id` is dependent on the `pad_token`. By default the `pad_token` is set to `'!'` which, maps to `0` : `tokenizer.decode(torch.tensor([0]))`, `tokenizer.convert_tokens_to_ids('!')`. If you set the padding token to a different value by hand, the following will happen:\r\n\r\nYou are right, my implementation does not seem to be perfect. The purpose of this implementation is to match when converting from text to token_id. What I want to know is how to set pad_token_id to 0 without affecting the words that are normally used like ```!``` \r\n\r\n\r\n", "You can't really do that unless you train a new tokenizer πŸ˜… \r\nYou can add a new token, with a new index, which will prevent splitting `!`. The problem is that the embedding at position `0` might have been trained as padding token and is thus a random tensor (not updated by gradient computation). ", "cc @patil-suraj to explain the context around Stable Diffusion and OAI vs. openCLIP here maybe", "Hey @laksjdjf , \r\n\r\nIndeed, there's a discrepancy between the `pad_token_id` in the open clip tokenizer and the `CLIPTokenizer` in `transformers`. But we can't change it for the existing models for backward compatibility reasons.\r\n\r\nBut note that for the tokenizer used in SD2 and SDXL it's already set correctly cf https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/tokenizer/special_tokens_map.json#L16\r\n\r\nAnd a bit more context about padding token in CLIP. CLIP doesn't care about padding token and the wrong padding token will only affect inference when using all token embeddings (like Stable Diffusion). For training, even if the padding token is wrong (i.e if we use `eos` instead of `!`, it shouldn't affect) because\r\n\r\n- `CLIP` did not use `attention_mask` during training.\r\n- `CLIPTextEncoder` uses a casual mask, so the tokens to the right don't influence the hidden states of tokens to the left.\r\n- `CLIP` is trained with contrastive loss, which is computed using the `projections`, and the `text_projection` is computed by pooling the `eos _token` embeddings, which will always be similar no matter what the padding token is, because `CLIPTextEncoder` is causal, so the eos embeddings won't be affected by tokens on the right.\r\n- For downstream training (like SD), as long as a consistent token is used for padding, it shouldn't severely affect the training. But for inference, we will need to use the same token.\r\n\r\nSo the way CLIP is trained, it doesn't care about padding token. It'll only affect the inference if a different token (compared to the padding token used for training) is used for padding. 
And this is already taken care of in SD 2 and SDXL repos.", "> But note that for the tokenizer used in SD2 and SDXL it's already set correctly cf https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/tokenizer/special_tokens_map.json#L16\r\n\r\nMy concern is that the above process will result in ```!``` no longer being available in its normal sense.\r\n\r\n", "Hey @laksjdjf, \r\n\r\nAs @patil-suraj mentioned, CLIP never used a padding token for training. It was trained with a causal mask and only tokens **until the eos token** are taken into account when computing the CLIP contrastive loss. All tokens following the eos token have **no** influence on the model, so one could have added any token here. \r\n\r\nNow, it only matters if a pretrained CLIP is further fine-tuned as is done for SD. In this case the padding token was used to influence the loss and in that sense SD does not make a difference between `!` and a `padding` token. **But** this is purely due to the way SD uses CLIP for fine-tuning - this is not an inherent characteristic of CLIP.", "Hmmm, maybe I just don't understand, but my question is about the behaviour of the ```! token```, not the behaviour of the ```pad token```. If ```! token``` is converted to ```pad token```, it seems to make a difference when processing text containing ```! token```.\r\n\r\n```Python console\r\n>>>tokenizer = CLIPTokenizer.from_pretrained(\"stabilityai/stable-diffusion-xl-base-0.9\", subfolder=\"tokenizer_2\")\r\n>>>prompt = \"! !! !!!\"\r\n>>>input_ids = tokenizer(prompt,padding=\"max_length\", max_length=tokenizer.model_max_length, truncation=True, return_tensors='pt').input_ids\r\n\r\n>>>print(input_ids)\r\ntensor([[49406, 0, 0, 0, 0, 0, 0, 49407, 0, 0, ...\r\n\r\n>>>print(open_clip.tokenize(prompt))\r\ntensor([[49406, 256, 748, 995, 49407, 0, 0, 0, 0, 0, ...\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### Feature request The open clip tokeniser has a ```pad_token_id``` of 0, but this cannot be achieved because ```CLIPTokenizer.__init__()``` cannot set a ```pad_token_id``` ### Motivation Related to https://github.com/huggingface/diffusers/issues/4153 Stable Diffusion v2 and XL use the open clip tokeniser. To avoid increasing dependencies, transformers must also have the same functionality. ### Your contribution It seems possible to set pad_token_id directly, but it is not realistic. ``` tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2-1", pad_token="<|endoftext|>") tokenizer.pad_token_id = 0 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24925/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24925/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24924
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24924/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24924/comments
https://api.github.com/repos/huggingface/transformers/issues/24924/events
https://github.com/huggingface/transformers/issues/24924
1,812,052,995
I_kwDOCUB6oc5sAbwD
24,924
`VisionTextDualEncoder`: Distributed training is always enabled
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyodr/followers", "following_url": "https://api.github.com/users/phiyodr/following{/other_user}", "gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}", "starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions", "organizations_url": "https://api.github.com/users/phiyodr/orgs", "repos_url": "https://api.github.com/users/phiyodr/repos", "events_url": "https://api.github.com/users/phiyodr/events{/privacy}", "received_events_url": "https://api.github.com/users/phiyodr/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "How are you launching the training script? Could you share that part?", "I use the unchanged code from the [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model):\r\n\r\n```\r\npython examples/pytorch/contrastive-image-text/run_clip.py \\\r\n --output_dir ./clip-roberta-finetuned \\\r\n --model_name_or_path ./clip-roberta \\\r\n --data_dir $PWD/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train --do_eval \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir\r\n```\r\n\r\nI neither use `python -m torch.distributed.launch ...` nor things like `accelerate launch ...`. \r\nJust pure `python ...` :)\r\n\r\nThank you in advance!", "That is really weird. @muellerzr could you have a look here to check we didn't mess something with the Accelerate integration in the Trainer?", "This is fine, the scripts need to be updated however as checking `local_rank != -1` is the wrong check to use after the accelerate integration. Will open a PR. You can confirm it's training on non-multi-GPU by adding the following to that warning:\r\n\r\n```python\r\n logger.warning(\r\n f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\r\n + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\r\n + f'State: {training_args.distributed_state}'\r\n )\r\n```\r\nWhich will print the accelerator state which has:\r\n```\r\nDistributed environment: NO\r\nNum processes: 1\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: cuda\r\n```\r\nLike we expect πŸ˜„ ", "All the examples are updated in #24956 ", "Perfect! Thanks a lot for the clarification :+1: " ]
1,689
1,689
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: **It seems yes, but I don't want to ;)** ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm running the **unchanged** ["VisionTextDualEncoder and CLIP model training example"](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py) on my local laptop (which has 1 GPU) and wonder why it claims to do `distributed training: True` (and not `False`). From the output: ``` 07/19/2023 15:21:22 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False ``` The above output originates from [`run_clip.py`](https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/examples/pytorch/contrastive-image-text/run_clip.py#L260C1-L263C6) ``` logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) ``` * The default should be `training_args.local_rank=-1` according to [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) but is somehow set to `0` in this example and I don't know why. * Adding `local_rank=-1` to the [run_clip.py example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model) does not show any effect. My questions: * Is it intended that `local_rank` is set to `0`? * Does `local_rank=0` really mean that distributed training in `Trainer` is enabled? (I'm new to `Trainer` and usually work with `DistributedDataParallel`) * How to switch off distributed training? --- Bigger picture: Sometimes my training (on a cluster) hangs up in n-1 iteration and never finishes. I wonder if this has to do with distributed training. I don't know how to debug this. ``` 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 2875/2876 [11:34<00:00, 4.10it/s] ```` Thanks in advance! ### Expected behavior I don't want to use distributed training, i.e. `training_args.local_rank = -1`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24924/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24923
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24923/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24923/comments
https://api.github.com/repos/huggingface/transformers/issues/24923/events
https://github.com/huggingface/transformers/pull/24923
1,812,048,860
PR_kwDOCUB6oc5V5dPZ
24,923
🌐 [i18n-KO] Translated `perf_train_cpu_many.md` to Korean
{ "login": "nuatmochoi", "id": 46990061, "node_id": "MDQ6VXNlcjQ2OTkwMDYx", "avatar_url": "https://avatars.githubusercontent.com/u/46990061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nuatmochoi", "html_url": "https://github.com/nuatmochoi", "followers_url": "https://api.github.com/users/nuatmochoi/followers", "following_url": "https://api.github.com/users/nuatmochoi/following{/other_user}", "gists_url": "https://api.github.com/users/nuatmochoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/nuatmochoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nuatmochoi/subscriptions", "organizations_url": "https://api.github.com/users/nuatmochoi/orgs", "repos_url": "https://api.github.com/users/nuatmochoi/repos", "events_url": "https://api.github.com/users/nuatmochoi/events{/privacy}", "received_events_url": "https://api.github.com/users/nuatmochoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,691
1,691
CONTRIBUTOR
null
# What does this PR do? Translated the `perf_train_cpu_many.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (λ²ˆμ—­ λˆ„λ½/쀑볡 검사) - [x] Grammar Check (λ§žμΆ€λ²• 검사) - [x] Review or Add new terms to glossary (μš©μ–΄ 확인 및 μΆ”κ°€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview둜 μ •μƒμž‘λ™ 확인) ## Who can review? (Initial) May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @heuristicwave, @mjk0618, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24923/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24923", "html_url": "https://github.com/huggingface/transformers/pull/24923", "diff_url": "https://github.com/huggingface/transformers/pull/24923.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24923.patch", "merged_at": 1691561731000 }
https://api.github.com/repos/huggingface/transformers/issues/24922
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24922/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24922/comments
https://api.github.com/repos/huggingface/transformers/issues/24922/events
https://github.com/huggingface/transformers/pull/24922
1,812,020,570
PR_kwDOCUB6oc5V5W97
24,922
Deprecate unused OpenLlama architecture
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> FAILED tests/test_modeling_utils.py::ModelPushToHubTester::test_push_to_hub - huggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create\r\n\r\nTest failure seems unrelated, I can push an empty commit or let you rerun them.", "> Re-launched the tests to make sure it's just a fluke. Pinging the original author @s-JoL for information and to see if there is any plan on open-sourcing another checkpoint for this architecture?\r\n\r\nDue to various reasons, the previous open-source project has been shut down. Considering that there are more and more open-source models available now (including commercially available ones like Llama2), I believe that open-sourcing another similar model won't add much value to the community. Therefore, I think it's reasonable to mark this model as deprecated. However, I understand that some users are training with the Llama model that includes XFormers. Is it possible to add an optional XFormers acceleration in the Llama model to facilitate these users?", "We can see how to add an optional XFormers acceleration in Llama in other PRs, yes." ]
1,689
1,689
1,689
MEMBER
null
Hello! # What does this PR do? 1. Deprecate `OpenLlama` following #24787. 2. Add a disclaimer pointing users to `LLaMA` for the [OpenLLaMA models](https://huggingface.co/models?search=openllama). 3. Resolve a typo in a warning in `check_repo.py` 4. Read modeling files with `encoding="utf8"` in `check_config_attributes_being_used`. If preferred, I can revert 3 and 4 and cherry-pick them into a separate PR. Follow-up of #24913. This is a considerably better solution - I feel a bit embarrassed I even considered the other one. ## Details I've followed the steps that @sgugger seems to have taken from #24787 to move OpenLlama into deprecation. This involves moving the main code, adapting the `__init__` files, removing the tests, and updating the documentation with a disclaimer. Feel free to let me know if you'd rather keep the model non-deprecated for the time being, and then I'll revert to only the addition of the disclaimer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Discussed in #24913. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24922/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24922", "html_url": "https://github.com/huggingface/transformers/pull/24922", "diff_url": "https://github.com/huggingface/transformers/pull/24922.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24922.patch", "merged_at": 1689851004000 }
https://api.github.com/repos/huggingface/transformers/issues/24921
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24921/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24921/comments
https://api.github.com/repos/huggingface/transformers/issues/24921/events
https://github.com/huggingface/transformers/issues/24921
1,811,996,093
I_kwDOCUB6oc5sAN29
24,921
Error when loading model
{ "login": "linhui1020", "id": 91501054, "node_id": "MDQ6VXNlcjkxNTAxMDU0", "avatar_url": "https://avatars.githubusercontent.com/u/91501054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linhui1020", "html_url": "https://github.com/linhui1020", "followers_url": "https://api.github.com/users/linhui1020/followers", "following_url": "https://api.github.com/users/linhui1020/following{/other_user}", "gists_url": "https://api.github.com/users/linhui1020/gists{/gist_id}", "starred_url": "https://api.github.com/users/linhui1020/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linhui1020/subscriptions", "organizations_url": "https://api.github.com/users/linhui1020/orgs", "repos_url": "https://api.github.com/users/linhui1020/repos", "events_url": "https://api.github.com/users/linhui1020/events{/privacy}", "received_events_url": "https://api.github.com/users/linhui1020/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You will need to drop that key from your state dict, or use the `save_pretrained` and `from_pretrained` method of the library.", "I'm having the same issue, except my model is a custom model that includes a transformer model as a part of it so I need to use `torch.load_state_dict` to load the model.\r\n\r\nHere's some code to reproduce the issue:\r\n\r\n```python\r\nimport torch.nn as nn\r\nfrom transformers import XLMRobertaConfig, XLMRobertaModel\r\n\r\n\r\nclass MyCustomModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.roberta = XLMRobertaModel(\r\n XLMRobertaConfig.from_pretrained(\"Unbabel/xlm-roberta-comet-small\")\r\n )\r\n self.my_thing = nn.Linear(384, 1) # let's pretend this is way more complicated\r\n\r\n\r\ndef main():\r\n model = MyCustomModel()\r\n\r\n print(sorted(list(model.state_dict().keys()))[:10])\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nWhen running the script with transformers==4.30.0 (which is what I used for training the model) I get the following output:\r\n\r\n```\r\n['my_thing.bias', 'my_thing.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.position_ids', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight']\r\n```\r\n\r\nNote the presence of `roberta.embeddings.position_ids`.\r\n\r\nNow when I try to load the model using transformers==4.31.0, it gives me a key error, because the position IDs key seems to have been removed in the new version. Running the same code gives:\r\n\r\n```\r\n['my_thing.bias', 'my_thing.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.dense.bias']\r\n```\r\n\r\nThe position IDs key is not there anymore.\r\n\r\nShouldn't this at least be documented as a breaking change in the release notes?", "Same issue here", "You should use `save_pretrained` and `from_pretained` to save/load your models. There is no breaking changes with those methods, we do not guarantee the same if you choose to save/load your model on your own across different versions of Transformers.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info 2023-07-19 13:47:33.834732: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 2) model.to(device) model.load_state_dict(torch.load("my_model.pt", map_location = torch.device("cpu"))) ### Expected behavior When running the code, it shows a runtime error: _IncompatibleKeys(missing_keys=[], unexpected_keys=['bert.embeddings.position_ids']). The error did not occur yesterday, but appears now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24921/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24921/timeline
completed
null
null