Dataset schema (one entry per column: name, dtype, value statistics):

url: stringlengths, 62-66
repository_url: stringclasses, 1 value
labels_url: stringlengths, 76-80
comments_url: stringlengths, 71-75
events_url: stringlengths, 69-73
html_url: stringlengths, 50-56
id: int64, 377M-2.15B
node_id: stringlengths, 18-32
number: int64, 1-29.2k
title: stringlengths, 1-487
user: dict
labels: list
state: stringclasses, 2 values
locked: bool, 2 classes
assignee: dict
assignees: list
comments: list
created_at: int64, 1.54k-1.71k
updated_at: int64, 1.54k-1.71k
closed_at: int64, 1.54k-1.71k
author_association: stringclasses, 4 values
active_lock_reason: stringclasses, 2 values
body: stringlengths, 0-234k
reactions: dict
timeline_url: stringlengths, 71-75
state_reason: stringclasses, 3 values
draft: bool, 2 classes
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/24518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24518/comments
https://api.github.com/repos/huggingface/transformers/issues/24518/events
https://github.com/huggingface/transformers/issues/24518
1,776,676,598
I_kwDOCUB6oc5p5e72
24,518
[i18n-<English>] Translating docs to <Chinese>
{ "login": "liteli1987gmail", "id": 59245973, "node_id": "MDQ6VXNlcjU5MjQ1OTcz", "avatar_url": "https://avatars.githubusercontent.com/u/59245973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liteli1987gmail", "html_url": "https://github.com/liteli1987gmail", "followers_url": "https://api.github.com/users/liteli1987gmail/followers", "following_url": "https://api.github.com/users/liteli1987gmail/following{/other_user}", "gists_url": "https://api.github.com/users/liteli1987gmail/gists{/gist_id}", "starred_url": "https://api.github.com/users/liteli1987gmail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liteli1987gmail/subscriptions", "organizations_url": "https://api.github.com/users/liteli1987gmail/orgs", "repos_url": "https://api.github.com/users/liteli1987gmail/repos", "events_url": "https://api.github.com/users/liteli1987gmail/events{/privacy}", "received_events_url": "https://api.github.com/users/liteli1987gmail/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "Please completely fill the template with your language and also search the existing GitHub issues to avoid duplicates." ]
1,687
1,687
1,687
CONTRIBUTOR
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! I translated all the English documents into Chinese. Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24518/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24517/comments
https://api.github.com/repos/huggingface/transformers/issues/24517/events
https://github.com/huggingface/transformers/issues/24517
1,776,601,749
I_kwDOCUB6oc5p5MqV
24,517
[Bug]Non-robust directory splitting and detection at get_cached_module_file
{ "login": "Tpinion", "id": 27395365, "node_id": "MDQ6VXNlcjI3Mzk1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/27395365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tpinion", "html_url": "https://github.com/Tpinion", "followers_url": "https://api.github.com/users/Tpinion/followers", "following_url": "https://api.github.com/users/Tpinion/following{/other_user}", "gists_url": "https://api.github.com/users/Tpinion/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tpinion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tpinion/subscriptions", "organizations_url": "https://api.github.com/users/Tpinion/orgs", "repos_url": "https://api.github.com/users/Tpinion/repos", "events_url": "https://api.github.com/users/Tpinion/events{/privacy}", "received_events_url": "https://api.github.com/users/Tpinion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are you really certain `os.path.isdir(pretrained_model_name_or_path)` returns `True` with a path containing `/` on Windows? This seems really weird to me.", "> Are you really certain `os.path.isdir(pretrained_model_name_or_path)` returns `True` with a path containing `/` on Windows? This seems really weird to me.\r\n\r\nyes~\r\nThis is the screenshot of the debugging I just did on vscode.\r\n![image](https://github.com/huggingface/transformers/assets/27395365/b623865f-a0cc-45db-9ed0-4f3f3a9c8517)\r\n", "Annoying (that it's super inconsistent like this). Will look at this later today and try to come up with a fix!", "Could you try if the PR mentioned above fixes your issue?", "> Could you try if the PR mentioned above fixes your issue?\r\n\r\nNo problem~", "> Could you try if the PR mentioned above fixes your issue?\r\n\r\nProblem has been solved by this PR! Should I close this issue now?", "It will be closed auto-magically by GitHub when the PR is merged :-)" ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. I specified the correct model directory path under Windows, but the library doesn't seem to find it correctly. Just like this: ```python # Directory THUDM/chatglm2-6b exists in the same directory as the code. tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True) ``` 2. Then I found the reason was that the `submodule` was not split correctly. **Source code at**: `transformers/dynamic_module_utils:get_cached_module_file` ```python # line: 235 This code returns True on both Linux and Windows. is_local = os.path.isdir(pretrained_model_name_or_path) if is_local: # line: 237 But this code can't split the path correctly, because os.path.sep is "\\" rather than "/". submodule = pretrained_model_name_or_path.split(os.path.sep)[-1] ``` 3. So I solved the problem by modifying my path. ```python tokenizer = AutoTokenizer.from_pretrained("THUDM\\chatglm2-6b", trust_remote_code=True) ``` ### Expected behavior Even on Windows, it is common to specify paths with forward slashes, although that may not be the standard convention. So I think there are two possible approaches: 1. Use a more robust way to split `submodule`, such as pathlib. 2. Explicitly throw a warning asking Windows users to pass in a standard Windows path.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24517/timeline
completed
null
null
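As a side note on the fix direction discussed in issue 24517 above: a pathlib-based split is separator-agnostic, which is what makes it more robust than `split(os.path.sep)` on Windows. The snippet below is an illustrative sketch only, not the code that was merged into `transformers`.

```python
from pathlib import PureWindowsPath

# PureWindowsPath treats both "/" and "\\" as separators, so the trailing
# directory name is recovered regardless of which slash the caller used.
# (PureWindowsPath is used here only to make the Windows behaviour reproducible
# on any platform; on Windows itself, plain Path behaves the same way.)
print(PureWindowsPath("THUDM/chatglm2-6b").name)    # chatglm2-6b
print(PureWindowsPath("THUDM\\chatglm2-6b").name)   # chatglm2-6b
```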
https://api.github.com/repos/huggingface/transformers/issues/24516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24516/comments
https://api.github.com/repos/huggingface/transformers/issues/24516/events
https://github.com/huggingface/transformers/issues/24516
1,776,569,033
I_kwDOCUB6oc5p5ErJ
24,516
RuntimeError: Cross Attention in GPTBigCodeModel
{ "login": "dawnik17", "id": 135340243, "node_id": "U_kgDOCBEg0w", "avatar_url": "https://avatars.githubusercontent.com/u/135340243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dawnik17", "html_url": "https://github.com/dawnik17", "followers_url": "https://api.github.com/users/dawnik17/followers", "following_url": "https://api.github.com/users/dawnik17/following{/other_user}", "gists_url": "https://api.github.com/users/dawnik17/gists{/gist_id}", "starred_url": "https://api.github.com/users/dawnik17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dawnik17/subscriptions", "organizations_url": "https://api.github.com/users/dawnik17/orgs", "repos_url": "https://api.github.com/users/dawnik17/repos", "events_url": "https://api.github.com/users/dawnik17/events{/privacy}", "received_events_url": "https://api.github.com/users/dawnik17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Would you mind sharing a full reproduced? (ex the config you are using)? \r\nThis might just be a raise Value error missing and a check on the shapes! ", "Sure @ArthurZucker \r\n\r\nReproducible Code:\r\n```\r\nimport torch\r\nfrom transformers import GPTBigCodeConfig\r\n\r\nconfig = GPTBigCodeConfig(multi_query=False, n_embd=16, n_head=2)\r\nattn = GPTBigCodeAttention(config, is_cross_attention=True)\r\n\r\ninp = torch.rand((2, 4, 16))\r\nattn(inp, encoder_hidden_states=inp)\r\n```\r\nThanks in advance :)", "Bump :) \r\n@ArthurZucker", "Okay! Thanks for bumping. \r\nFew things here: \r\n- `GPTBigCodeAttention` is not in the `__init__` so not on the surface of transformers: we don't really support using it outside in such a way. Having a working snipper which uses a `GPTBigCodeModel` is better. \r\n- You are not properly setting the arguments that you want to set: `attn.embed_dim` will show `768`. To change the number of heads and the `attn.embed_dim`, use `config = GPTBigCodeConfig(multi_query=False,hidden_size=16, num_attention_heads=2)`\r\n- I don't recommend trying to use it this way, as it is not intended. I tried a few things to see if a quick fix was possible, but it seems that during integration all edge cases were not tested. Especially `add_cross_attention` which is not part of the configuration. \r\n", "Got it. \r\nThanks @ArthurZucker! " ]
1,687
1,690
1,690
NONE
null
### System Info Code: https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py In the case of cross attention, if `self.c_attn` is initialised to give output of dimension `2 * self.embed_dim` **(Line 112 & 227)** The `key_value` split in **line 246**, `key, value = key_value.split((self.head_dim, self.head_dim), dim=-1)` * would raise an exception _(when self.embeded_dim != self.head_dim)_ `RuntimeError: split_with_sizes expects split_sizes to sum exactly to 2*self.embeded_dim` _PS - I could be mistaken, but it would be worth having a look (and correct me if I am wrong!)._ ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` inp = torch.rand((2, 4, 16)) g = GPTBigCodeAttention(config, is_cross_attention=True) g(inp, encoder_hidden_states=inp) ``` `RuntimeError: split_with_sizes expects split_sizes to sum exactly to 32 (input tensor's size at dimension -1), but got split_sizes=[4, 4]` ### Expected behavior The forward method should return the attention output of the shape [2, 4, 16].
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24516/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24515/comments
https://api.github.com/repos/huggingface/transformers/issues/24515/events
https://github.com/huggingface/transformers/pull/24515
1,776,443,686
PR_kwDOCUB6oc5UAhGm
24,515
Update hyperparameter_search.py
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? 1. PR #24384 results in many tests failing related to HP Search. https://huggingface.slack.com/archives/C02CH2YP4EQ/p1687823783572709 2. Error being `tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hyperparameter_search (line 122) TypeError: is_available() missing 1 required positional argument: 'self'` when `default_hp_search_backend` is called. 3. This PR fixes it. Tested this via: ``` export CUDA_VISIBLE_DEVICES="0" export RUN_SLOW="yes" export LOGLEVEL=INFO cd transformers pytest -sv tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hyperparameter_search ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24515", "html_url": "https://github.com/huggingface/transformers/pull/24515", "diff_url": "https://github.com/huggingface/transformers/pull/24515.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24515.patch", "merged_at": 1687871535000 }
https://api.github.com/repos/huggingface/transformers/issues/24514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24514/comments
https://api.github.com/repos/huggingface/transformers/issues/24514/events
https://github.com/huggingface/transformers/issues/24514
1,776,235,620
I_kwDOCUB6oc5p3zRk
24,514
LlamaModel.forward() got an unexpected keyword argument 'token_type_ids'
{ "login": "twang2218", "id": 6299096, "node_id": "MDQ6VXNlcjYyOTkwOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/6299096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twang2218", "html_url": "https://github.com/twang2218", "followers_url": "https://api.github.com/users/twang2218/followers", "following_url": "https://api.github.com/users/twang2218/following{/other_user}", "gists_url": "https://api.github.com/users/twang2218/gists{/gist_id}", "starred_url": "https://api.github.com/users/twang2218/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/twang2218/subscriptions", "organizations_url": "https://api.github.com/users/twang2218/orgs", "repos_url": "https://api.github.com/users/twang2218/repos", "events_url": "https://api.github.com/users/twang2218/events{/privacy}", "received_events_url": "https://api.github.com/users/twang2218/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey, this is a duplicate of #23818 , #24042. Make sure to use the latest version of transformers! ", "@ArthurZucker I just double checked, as provided above, the `transformers` I used is version `4.30.2`, which is the latest version for now.\r\n\r\n<img width=\"1186\" alt=\"截屏2023-06-28 上午2 26 09\" src=\"https://github.com/huggingface/transformers/assets/6299096/41da9fa9-de69-4564-9159-3c63273dafaa\">\r\n\r\n<img width=\"1421\" alt=\"截屏2023-06-28 上午2 26 34\" src=\"https://github.com/huggingface/transformers/assets/6299096/e9928d80-d4a6-4891-ac47-2242e915cd6e\">\r\n\r\nIs there any new version I missed?", "You can use the latest version using `pip install git+https://github.com/huggingface/transformers`! The pull request is not part of the release 😉 ", "Oh, 👌, Thank you 🙏.", "Would it make sense to let LlamaModel.forward() just ignore token_type_ids such that we can use AutoTokenizer in a modular style? Otherwise I have to do something like this:\r\n\r\n\r\n`encoding = self.tokenizer.encode_plus(`\r\n` ...`\r\n` return_token_type_ids=False if \"llama\" in CONFIG.pretrained_model_name else True,`\r\n` ...`\r\n `)`\r\n ", "No, if you use the latest release then you won't have the issue anyway. Llama does not take token types ids so not adding this! 🤗 ", "@premsa if you use the latest tokenizer (with `AutoTokenizer`) it won't generate the token type IDs anymore, so there is no need to update the llama forward method :)", "@ArthurZucker I got the same error even after installing the last transformer version using \"pip install git+https://github.com/huggingface/transformers\" command, I got the following error: \"TypeError: MBartModel.forward() got an unexpected keyword argument 'token_type_ids'\"\r\n\r\nHow can I resolve this?", "Hey @HGamalElDin make sure to open a new issue with a proper reproducer if you want help " ]
1,687
1,702
1,687
NONE
null
### System Info * transformers version: **4.30.2** * Python version: **3.10.11** * System: Ubuntu 22.04.2 LTS ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction For the following code, ```python model_name = 'TheBloke/guanaco-7B-HF' model = AutoModel.from_pretrained(model_name, torch_dtype=torch.bfloat16, trust_remote_code=True) model = model.to('cuda') inputs = tokenizer(['hello'], max_length=100, truncation=True, return_tensors="pt").to(model.device) outputs = model(**inputs) ``` I got following error messages: ```js --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[34], line 2 1 inputs = tokenizer(['hello'], max_length=100, truncation=True, return_tensors="pt").to(model.device) ----> 2 outputs = model(**inputs) File ~/miniconda/envs/vocab/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] TypeError: LlamaModel.forward() got an unexpected keyword argument 'token_type_ids' ``` The `inputs` value is following: ```js {'input_ids': tensor([[ 1, 22172]], device='cuda:0'), 'token_type_ids': tensor([[0, 0]], device='cuda:0'), 'attention_mask': tensor([[1, 1]], device='cuda:0')} ``` There is `token_type_ids` in the value returned from `tokenizer`, however, `LlamaModel.forward()` don't accept the arguments. And I compared `LlamaTokenizer` and `LlamaTokenizerFast`, I found they behave differently. ```python from transformers import LlamaTokenizer, LlamaTokenizerFast print(f"LlamaTokenizer: {LlamaTokenizer.from_pretrained(model_name)('hello')}") print(f"LlamaTokenizerFast: {LlamaTokenizerFast.from_pretrained(model_name)('hello')}") ``` the results are: ```js LlamaTokenizer: {'input_ids': [1, 22172], 'attention_mask': [1, 1]} LlamaTokenizerFast: {'input_ids': [1, 22172], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]} ``` ### Expected behavior Should `LlamaTokenizerFast` remove the `token_type_ids` in the returned value? or should `LlamaModel.forward()` accept the `token_type_ids` in the function arguments list? Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24514/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24514/timeline
completed
null
null
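A minimal workaround sketch for the report above, for users pinned to transformers 4.30.x where the fast Llama tokenizer still emits `token_type_ids`: ask the tokenizer not to return that key (or pop it from the encoding) before calling the model. The checkpoint name is the one from the issue; newer releases no longer need this.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "TheBloke/guanaco-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Option 1: suppress the key at tokenization time.
inputs = tokenizer(["hello"], return_tensors="pt", return_token_type_ids=False)

# Option 2: drop it from an existing encoding.
# inputs.pop("token_type_ids", None)

outputs = model(**inputs)
```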
https://api.github.com/repos/huggingface/transformers/issues/24513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24513/comments
https://api.github.com/repos/huggingface/transformers/issues/24513/events
https://github.com/huggingface/transformers/pull/24513
1,776,200,547
PR_kwDOCUB6oc5T_sHZ
24,513
Add Multi Resolution Analysis (MRA) (New PR)
{ "login": "novice03", "id": 44259234, "node_id": "MDQ6VXNlcjQ0MjU5MjM0", "avatar_url": "https://avatars.githubusercontent.com/u/44259234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/novice03", "html_url": "https://github.com/novice03", "followers_url": "https://api.github.com/users/novice03/followers", "following_url": "https://api.github.com/users/novice03/following{/other_user}", "gists_url": "https://api.github.com/users/novice03/gists{/gist_id}", "starred_url": "https://api.github.com/users/novice03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/novice03/subscriptions", "organizations_url": "https://api.github.com/users/novice03/orgs", "repos_url": "https://api.github.com/users/novice03/repos", "events_url": "https://api.github.com/users/novice03/events{/privacy}", "received_events_url": "https://api.github.com/users/novice03/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Copied all files over from #20573 ", "_The documentation is not available anymore as the PR was closed or merged._", "Could you fix the failing tests?", "Hello @sgugger, I've made sure all checks pass and fixed conflicts. ", "Hello @amyeroberts, I've addressed your comments and made some code changes. Please take a look at the updated files. ", "Hi @amyeroberts, I've addressed the suggestions from the code review. Please take a look at the updated code. ", "Thanks for catching these errors @amyeroberts! I've applied both changes. ", "@novice03 \r\n\r\nIt seems the CI get\r\n\r\n```bash\r\n(line 403) ValueError: sequence length must be divisible by the block_size.\r\n```\r\nwhen `load_cuda_kernels` loads successfully.\r\n\r\nIt's likely due to `seq_length=8` from `MraModelTester`, but I am not able to set the correct combination of `seq_length`, `block_size`, `num_blocks` to make it works.\r\n\r\nNote, our daily CI (with torch 2.0.1 + CUDA 11.8) fails to load `custom CUDA kernels` and the execution goes to \r\n```python\r\n if cuda_kernel is None:\r\n return torch.zeros_like(query).requires_grad_()\r\n```\r\nin `mra2_attention` and tests pass.\r\n\r\nHowever, in our CI with torch 1.13 (and with CUDA 11.6.2), kernel is loaded, but the tests fail.\r\n\r\nIt would be great if you can help us to find the correct settings where the CI will pass when kernel is loaded.\r\n\r\nThanks in advance 🤗 .", "You can run\r\n\r\n```python\r\npython3 -m pytest -v tests/models/mra/test_modeling_mra.py::MraModelTest::test_for_masked_lm\r\n```\r\n\r\nThe full error log is (if custom cuda kernal is loaded successfully)\r\n\r\n```bash\r\nself = <tests.models.mra.test_modeling_mra.MraModelTest testMethod=test_for_masked_lm>\r\n\r\n def test_for_masked_lm(self):\r\n config_and_inputs = self.model_tester.prepare_config_and_inputs()\r\n> self.model_tester.create_and_check_for_masked_lm(*config_and_inputs)\r\n\r\ntests/models/mra/test_modeling_mra.py:322: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/models/mra/test_modeling_mra.py:210: in create_and_check_for_masked_lm\r\n result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=token_labels)\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:1093: in forward\r\n outputs = self.mra(\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:1028: in forward\r\n encoder_outputs = self.encoder(\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:782: in forward\r\n layer_outputs = layer_module(hidden_states, attention_mask)\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:729: in forward\r\n self_attention_outputs = self.attention(hidden_states, attention_mask)\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:681: in forward\r\n 
self_outputs = self.self(hidden_states, attention_mask)\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/mra/modeling_mra.py:615: in forward\r\n context_layer = mra2_attention(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nquery = tensor([[[[ 0.0500, -0.0523, -0.0260, ..., 0.0000, 0.0000, 0.0000],\r\n [-0.1339, 0.0844, 0.0287, ..., 0... [ 0.0293, 0.1609, 0.0547, ..., 0.0000, 0.0000, 0.0000]]]],\r\n device='cuda:0', grad_fn=<CatBackward0>)\r\nkey = tensor([[[[ 0.0185, -0.0316, 0.0150, ..., 0.0000, 0.0000, 0.0000],\r\n [-0.0575, -0.1123, 0.0832, ..., 0... [ 0.0608, 0.0932, -0.0973, ..., 0.0000, 0.0000, 0.0000]]]],\r\n device='cuda:0', grad_fn=<CatBackward0>)\r\nvalue = tensor([[[[ 0.0131, 0.1242, 0.0672, ..., 0.0000, 0.0000, 0.0000],\r\n [-0.0212, 0.0600, 0.0269, ..., 0... [-0.1005, -0.0048, 0.0561, ..., 0.0000, 0.0000, 0.0000]]]],\r\n device='cuda:0', grad_fn=<CatBackward0>)\r\nmask = tensor([[-2.1475e+09, 1.0000e+00, 1.0000e+00, -2.1475e+09, 1.0000e+00,\r\n -2.1475e+09, -2.1475e+09, 1.0000e+... 1.0000e+00, 1.0000e+00, -2.1475e+09, 1.0000e+00,\r\n -2.1475e+09, -2.1475e+09, 1.0000e+00]], device='cuda:0')\r\nnum_blocks = 64, approx_mode = 'full', block_size = 32, initial_prior_first_n_blocks = 0, initial_prior_diagonal_n_blocks = 0\r\n\r\n def mra2_attention(\r\n query,\r\n key,\r\n value,\r\n mask,\r\n num_blocks,\r\n approx_mode,\r\n block_size=32,\r\n initial_prior_first_n_blocks=0,\r\n initial_prior_diagonal_n_blocks=0,\r\n ):\r\n \"\"\"\r\n Use Mra to approximate self-attention.\r\n \"\"\"\r\n if cuda_kernel is None:\r\n return torch.zeros_like(query).requires_grad_()\r\n \r\n batch_size, num_head, seq_len, head_dim = query.size()\r\n meta_batch = batch_size * num_head\r\n \r\n if seq_len % block_size != 0:\r\n> raise ValueError(\"sequence length must be divisible by the block_size.\")\r\nE ValueError: sequence length must be divisible by the block_size.\r\n\r\nsrc/transformers/models/mra/modeling_mra.py:403: ValueError\r\n```", "Hello @ydshieh, thanks for bringing this up. We will likely have to use larger values for seq_len and hidden_size. Can you please try with the values [here](https://github.com/novice03/transformers/blob/1612188d6b6d094c81cc34a77641936687b8f7b3/tests/models/mra/test_modeling_mra.py)?", "Hi @novice03 Really appreciated you taking time on this. I tried it, and there are still 5 failures (it's already a great improvement!).\r\n\r\nHowever, we (`transformers`) are in a series of reducing CI time and cost, and change to large values is really what we tried very hard to avoid, as you can see in #24824 , #25005 and #25266. Also, large values is very likely introducing OOM when running tests in multiprocesses settings (we use 8 processes to reduce the CI cost) and it's very hard to figure out when this happens. \r\n\r\nI think it would be great if we can have an attribute `block_size` in the config classes with a default `32`. And in the modeling file, everywhere calling methods like `sparse_mask`, `mm_to_sparse` etc. pass `config.block_size` to them.\r\n\r\nThis way, we will have a way to use small values in the tests. Furthermore, the users of this model will have more flexibility to run the model. 
And we can also have a better documentation about how to set the config values and the inputs to make it work.\r\n\r\nLet me know WDYT 🙏 Thanks again!\r\n", "Hello @ydshieh, thanks for your reply. I understand that using large values increases the time and memory cost. However, since MRA was specifically designed for large sequences, it will be very tricky to run tests with small `seq_len` and `hidden_size`. \r\n\r\nUnfortunately, I don't think that the tests can be fixed by lowering the block size. I've tried setting block size to 4 or 8, and got multiple other errors (index out of bounds errors, CUDA errors, etc.). Also, all of the released checkpoints are with block size = 32, so users cannot use the pretrained models with a different block size.\r\n\r\nI hope I'm not asking too much, but is there an alternative/ exception that can be made? Either via allowing larger values or by running MRA tests without CUDA kernels. I've already verified that the HF model and the original code output similar logits and hidden states when CUDA kernels are loaded (with large sequence lengths).", "> Also, all of the released checkpoints are with block size = 32, so users cannot use the pretrained models with a different block size.\r\n\r\nFair point!\r\n\r\nWe will discuss internally what to deal with this model testing, but could you check the following 5 (remaining) failed tests that is from the new values you provided in an earlier comment, and see if you are able to fix them 🙏 ? Thanks!\r\n\r\n(It's run on torch 1.13 + CUDA 11.6.2)\r\n\r\n```bash\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_determinism - ValueError: zero-size array to reduction operation maximum which has no identity\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_feed_forward_chunking - AssertionError: False is not true\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_load_with_mismatched_shapes - ValueError: sequence length must be divisible by the block_size.\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_model_outputs_equivalence - TypeError: forward() got an unexpected keyword argument 'output_attentions'\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_retain_grad_hidden_states_attentions - TypeError: 'NoneType' object is not subscriptable\r\n\r\n```" ]
1,687
1,691
1,688
CONTRIBUTOR
null
# Add Multi Resolution Analysis (MRA) for Approximate Self-Attention This PR adds the MRA model to the repository. Paper: [https://arxiv.org/pdf/2207.10284.pdf](https://arxiv.org/pdf/2207.10284.pdf) Code: [https://github.com/mlpen/mra-attention](https://github.com/mlpen/mra-attention) To-do: - [x] Improve loading cuda kernels - [x] Improve formatting and documentation - [x] Upload checkpoints
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24513/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24513", "html_url": "https://github.com/huggingface/transformers/pull/24513", "diff_url": "https://github.com/huggingface/transformers/pull/24513.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24513.patch", "merged_at": 1688982643000 }
https://api.github.com/repos/huggingface/transformers/issues/24512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24512/comments
https://api.github.com/repos/huggingface/transformers/issues/24512/events
https://github.com/huggingface/transformers/issues/24512
1,776,071,960
I_kwDOCUB6oc5p3LUY
24,512
Whisper model predicts "thank you" or "you" on silence
{ "login": "mirfan899", "id": 3822565, "node_id": "MDQ6VXNlcjM4MjI1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/3822565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mirfan899", "html_url": "https://github.com/mirfan899", "followers_url": "https://api.github.com/users/mirfan899/followers", "following_url": "https://api.github.com/users/mirfan899/following{/other_user}", "gists_url": "https://api.github.com/users/mirfan899/gists{/gist_id}", "starred_url": "https://api.github.com/users/mirfan899/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mirfan899/subscriptions", "organizations_url": "https://api.github.com/users/mirfan899/orgs", "repos_url": "https://api.github.com/users/mirfan899/repos", "events_url": "https://api.github.com/users/mirfan899/events{/privacy}", "received_events_url": "https://api.github.com/users/mirfan899/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hmm, this code snippet involves `gradio` which is usually out of the scope of the `transformers` GitHub issue pages. But trying to have a reproducible example with only `transformers` involved seems tricky in this case.\r\n\r\n@sgugger @amyeroberts @ArthurZucker Any general comment on how we proceed in such cases?\r\n\r\n", "Hey! \r\nGradio can be interfering indeed, especially given the `streaming = True`. A few questions would help:\r\n- does this happen on silence for multiple audio? \r\n- was \"thank you\" said before? Seems like the example\r\n- Can you provide an example audio file when this happens? \r\n- Did you try without gradio (just the pipeline on the audio)? \r\nThis would help us a lot if you can have these informations! ", "yes\r\nno\r\naudios (silence, noise, recording)[audios.zip](https://github.com/huggingface/transformers/files/11882755/audios.zip)\r\nYes.\r\n\r\n```python\r\np = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base\")\r\n\r\np(\"silence.wav\")['text']\r\n/home/irfan/.pyenv/versions/3.10.10/envs/WhisperDemo/lib/python3.10/site-packages/transformers/generation/utils.py:1353: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n' you'\r\n```", "Same issue, using gradio and non-gradio wav sound sources. I've also seen the behavior in [Unity Undertone](https://leastsquares.io/docs/unity/undertone) , a Whisper package for Unity 3D. So it may be in Whisper and not the ASR pipeline. Maybe a few more switches to control returned info might help.\r\n[Whisper: Decode with condition_on_previous_text=False](https://github.com/huggingface/transformers/issues/21467#top)](https://github.com/huggingface/transformers/issues/21467)\r\n[A possible solution to Whisper hallucination](https://github.com/openai/whisper/discussions/679)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Pretty sure this is whisper hallucinating with the given input. Let's close this for now as it does not appear to be a bug with the whisper model implementation / pipeline.", "@ArthurZucker I was guessing the same, but how do you prevent it? I was thinking of listening with another tool to the audio and if it is below certain volume (Db) ignore it. Is there a way to do it with out audio processing? " ]
1,687
1,698
1,698
NONE
null
### System Info The Whisper model predicts text even when there is no speech. The model predicts "thank you" and "you" on silence or empty speech. Python 3.10 Ubuntu 20.04 transformers==4.30.2 https://github.com/gradio-app/gradio/issues/4663#issue-1772508542 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Follow these steps: 1. Install gradio and transformers. 2. Use the automatic speech recognition pipeline with the whisper-base model. 3. Use a microphone and don't speak. It will keep predicting words on an empty stream. Code to reproduce it: ```python from transformers import pipeline import gradio as gr p = pipeline("automatic-speech-recognition", model="openai/whisper-base") def transcribe(audio, state=""): text = p(audio)["text"] state += text + " " return state, state # Set the starting state to an empty string gr.Interface( fn=transcribe, inputs=[ gr.Audio(source="microphone", type="filepath", streaming=True), "state" ], outputs=[ "textbox", "state" ], live=True).launch(share=True) ``` ### Expected behavior It should not predict words on an empty stream.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24512/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24512/timeline
completed
null
null
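One answer to the final question in the thread above ("is there a way to do it without audio processing?") is that a very light energy check is usually enough to skip near-silent clips before they reach the pipeline. The sketch below assumes `soundfile` is installed and uses an arbitrary -40 dBFS threshold; it is a heuristic, not guidance from the Whisper authors.

```python
import numpy as np
import soundfile as sf
from transformers import pipeline

p = pipeline("automatic-speech-recognition", model="openai/whisper-base")

def is_silent(path, threshold_db=-40.0):
    audio, _ = sf.read(path)
    if audio.ndim > 1:                      # downmix stereo to mono
        audio = audio.mean(axis=1)
    rms = float(np.sqrt(np.mean(audio ** 2))) + 1e-12
    return 20 * np.log10(rms) < threshold_db

path = "silence.wav"
text = "" if is_silent(path) else p(path)["text"]
print(repr(text))
```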
https://api.github.com/repos/huggingface/transformers/issues/24511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24511/comments
https://api.github.com/repos/huggingface/transformers/issues/24511/events
https://github.com/huggingface/transformers/issues/24511
1,776,070,371
I_kwDOCUB6oc5p3K7j
24,511
Decoding adds space between special tokens when skip_special_tokens = True
{ "login": "Praful932", "id": 45713796, "node_id": "MDQ6VXNlcjQ1NzEzNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Praful932", "html_url": "https://github.com/Praful932", "followers_url": "https://api.github.com/users/Praful932/followers", "following_url": "https://api.github.com/users/Praful932/following{/other_user}", "gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}", "starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praful932/subscriptions", "organizations_url": "https://api.github.com/users/Praful932/orgs", "repos_url": "https://api.github.com/users/Praful932/repos", "events_url": "https://api.github.com/users/Praful932/events{/privacy}", "received_events_url": "https://api.github.com/users/Praful932/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! The behaviour is totally expected: you added `\" \"` as an added token. This means that it will not be split, and will not be encoded directly using the underlying model. \r\nHere is what happens: (check [this](https://github.com/ArthurZucker/transformers/blob/74cb2b1dd9755f6bd72fc9e65d1e374e287afd84/src/transformers/models/t5/tokenization_t5.py#L324-L327)\r\n\r\n```python \r\n>>> tokenizer.encode(\"Hello world\")\r\n# 1. Split the input using a trie made of special tokens and added tokens: \r\n[\"Hello\", \" \", \"world\"]\r\n# 2. Add a prefix space\r\n[\" Hello\", \" \", \" world\"]\r\n# 3. Replace prefix space with meta-space\r\n['▁Hello', ' ', '▁world']\r\n# 4. Get the corresponding tokens\r\n[8774, 32106, 296, 1]\r\n>>> tokenizer.decode(inp, skip_special_tokens = False,spaces_between_special_tokens=True)\r\n# When decoding, `convert_tokens_to_string` is called. A ` ` is added before every special token. But ` ` is a special token, so a space will be added to it\r\n[\"▁Hello\", \" \", \"▁world\"]\r\n```\r\nWhen you skip special tokens, `\" \"` is skiped, but you still have the space from the tokenizer that *joins* the text on a space to properly reformat it.\r\n\r\nYou are not correctly using the API, I would recommend you to remove \" \" from the special tokens. " ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: no - Using distributed or parallel set-up in script?: - ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_id = "lmsys/fastchat-t5-3b-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast = False) inp = tokenizer.encode("Hello world") print(inp) out = tokenizer.decode(inp, skip_special_tokens = False,spaces_between_special_tokens=True) print(out) > 'Hello world</s>' out = tokenizer.decode(inp, skip_special_tokens = True,spaces_between_special_tokens=True) print(out) > 'Hello world' out = tokenizer.decode(inp, skip_special_tokens = True,spaces_between_special_tokens=False) print(out) > 'Hello world' ``` ### Expected behavior I expected the last two outputs to be same. In the 2nd last output, Since special tokens are skipped, no space should have been added even if `spaces_between_special_tokens=True`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24511/timeline
completed
null
null
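A quick way to confirm the explanation given in the answer above is to inspect the checkpoint's added vocabulary: on `lmsys/fastchat-t5-3b-v1.0` the plain space is registered as an added token, which is why the decoder joins around it. This is an illustrative check only; the exact output depends on the tokenizer files shipped with the checkpoint.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0", use_fast=False)

print(tokenizer.get_added_vocab())  # the added tokens include " " on this checkpoint
print(tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello world")))
# e.g. ['▁Hello', ' ', '▁world', '</s>'] -- the space is kept as its own token
```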
https://api.github.com/repos/huggingface/transformers/issues/24510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24510/comments
https://api.github.com/repos/huggingface/transformers/issues/24510/events
https://github.com/huggingface/transformers/pull/24510
1,775,943,610
PR_kwDOCUB6oc5T-17r
24,510
Show a warning for missing attention masks when pad_token_id is not None
{ "login": "hackyon", "id": 1557853, "node_id": "MDQ6VXNlcjE1NTc4NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackyon", "html_url": "https://github.com/hackyon", "followers_url": "https://api.github.com/users/hackyon/followers", "following_url": "https://api.github.com/users/hackyon/following{/other_user}", "gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackyon/subscriptions", "organizations_url": "https://api.github.com/users/hackyon/orgs", "repos_url": "https://api.github.com/users/hackyon/repos", "events_url": "https://api.github.com/users/hackyon/events{/privacy}", "received_events_url": "https://api.github.com/users/hackyon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ydshieh @gante\r\n\r\nI've added the warning message as per #21916 and also a short test. Please let me know if you have any suggestions. If it looks good, I can copy/paste the warning to more models as well. Thanks!\r\n", "@hackyon LGTM 👍 \r\n\r\nBut before you spend time propagating the change to the other models, let's also get a green light from a core maintainer (cc @sgugger )", "Thanks for the review. \r\n\r\nWould it be better to check for the presence of the pad_token_id inside input_ids first before throwing the error, as per\r\nhttps://github.com/huggingface/transformers/pull/17444/files? If so, I can make the change to reflect that here.\r\n\r\n", "@sgugger \r\n\r\nThanks for the input. I changed my pull request up to be more like #17444. Let me know what you think. Thanks!\r\n", "Thanks, I've updated the code accordingly.", "@gante could you have a second look here?", "_The documentation is not available anymore as the PR was closed or merged._", "@hackyon Thank you for the contribution! Would you like to add it to the remaining models? 🤗 ", "Sure, I'll look into it 👍", "Thanks @ydshieh for fixing the flaky test!\r\n\r\nI was busy in July, but will now add the warning to more models over the next couple of days." ]
1,687
1,691
1,688
CONTRIBUTOR
null
# What does this PR do? Fixes #16136 Shows a one-time warning message when the pad_token_id is not None and no attention masks are given. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #16136 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24510/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24510", "html_url": "https://github.com/huggingface/transformers/pull/24510", "diff_url": "https://github.com/huggingface/transformers/pull/24510.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24510.patch", "merged_at": 1688127579000 }
https://api.github.com/repos/huggingface/transformers/issues/24509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24509/comments
https://api.github.com/repos/huggingface/transformers/issues/24509/events
https://github.com/huggingface/transformers/issues/24509
1,775,842,582
I_kwDOCUB6oc5p2TUW
24,509
Documentation Clarification: Autoregressive Models using GenerationMixin
{ "login": "JoeREISys", "id": 89039719, "node_id": "MDQ6VXNlcjg5MDM5NzE5", "avatar_url": "https://avatars.githubusercontent.com/u/89039719?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoeREISys", "html_url": "https://github.com/JoeREISys", "followers_url": "https://api.github.com/users/JoeREISys/followers", "following_url": "https://api.github.com/users/JoeREISys/following{/other_user}", "gists_url": "https://api.github.com/users/JoeREISys/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoeREISys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoeREISys/subscriptions", "organizations_url": "https://api.github.com/users/JoeREISys/orgs", "repos_url": "https://api.github.com/users/JoeREISys/repos", "events_url": "https://api.github.com/users/JoeREISys/events{/privacy}", "received_events_url": "https://api.github.com/users/JoeREISys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I was confusing training vs inference. At training time, the model autoregressively trains by shifting the labels. It has the entire input and output allowing for cross entropy for each label prediction. However, at inference time, that autoregressive generation has to be coded. Where I'm not totally clear is where the code is that connects the models to the `GenerationMixin` for inference. It looks like that it should just work to call `.generate` from the models don't inherent from `GenerationMixin`.", "Found it, the base class, `class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMixin):` enables all classes to have access to generate method and they are guarded based on the config and `can_generate` which is based on whether or not it implements `prepare_inputs_for_generation`method (e.g. all models have `generate` method they can invoke, `generate` method will work if `prepare_inputs_for_generation` is implemented otherwise give an error). If only I can delete this issue." ]
1,687
1,687
1,687
NONE
null
In the documentation on HuggingFace and within the source code for autoregressive models in comments, it shows to use the `generate` method from the GenerationMixin. Here is an example in the code for Llama model. ``` def forward( self, input_ids: torch.LongTensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, CausalLMOutputWithPast]: r""" Args: labels (torch.LongTensor of shape (batch_size, sequence_length), *optional*): Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. Returns: Example: ###python >>> from transformers import AutoTokenizer, LlamaForCausalLM >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER) >>> prompt = "Hey, are you conscious? Can you talk to me?" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> # Generate >>> generate_ids = model.generate(inputs.input_ids, max_length=30) >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you." """ ``` However, autoregressive models shouldn't need beam search as casual language modeling output should be able to directly decode the tokens. In tracing the model inheritance, there is no connection to `GenerationMixin` either to expose a generate method nor an implementation of the generate method for this models. What am I missing? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Review the documentation of autoregressive models 2. Review the source code of autoregressive models 3. Autoregressive models do not have `GenerationMixin` but have comment to use `.generate` method. ### Expected behavior 1. The source code of the models reflects the comments with `GenerationMixin` implementation or `.generate` method implementation or the comments and model documentation reflect how to use for inference if `GenerationMixin` is not used.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24509/timeline
completed
null
null
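The resolution the reporter arrived at in the comments above can be checked directly: every `PreTrainedModel` inherits `generate` from `GenerationMixin`, and `can_generate()` reports whether a given class actually overrides `prepare_inputs_for_generation`. Assuming a recent transformers release, a two-line check looks like this.

```python
from transformers import LlamaForCausalLM, LlamaModel

# The causal-LM head implements prepare_inputs_for_generation, the bare model does not,
# so only the former advertises that .generate() will work.
print(LlamaForCausalLM.can_generate())  # True
print(LlamaModel.can_generate())        # False
```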
https://api.github.com/repos/huggingface/transformers/issues/24508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24508/comments
https://api.github.com/repos/huggingface/transformers/issues/24508/events
https://github.com/huggingface/transformers/pull/24508
1,775,829,154
PR_kwDOCUB6oc5T-ejE
24,508
[WIP] Add Flax diverse group search
{ "login": "yeandy", "id": 14128880, "node_id": "MDQ6VXNlcjE0MTI4ODgw", "avatar_url": "https://avatars.githubusercontent.com/u/14128880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeandy", "html_url": "https://github.com/yeandy", "followers_url": "https://api.github.com/users/yeandy/followers", "following_url": "https://api.github.com/users/yeandy/following{/other_user}", "gists_url": "https://api.github.com/users/yeandy/gists{/gist_id}", "starred_url": "https://api.github.com/users/yeandy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yeandy/subscriptions", "organizations_url": "https://api.github.com/users/yeandy/orgs", "repos_url": "https://api.github.com/users/yeandy/repos", "events_url": "https://api.github.com/users/yeandy/events{/privacy}", "received_events_url": "https://api.github.com/users/yeandy/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "cc @sanchit-gandhi ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @yeandy! This PR is looking in good shape - thanks for your efforts so far! Would you like to go all the way and see it to completion? Happy to help with the remainder of the integration!", "Hey @sanchit-gandhi. Due to other commitments, I currently don't have bandwidth to continue this. And the timeline for me to get to this unknown right now. If someone else wants to work on this, I'm ok with that. ", "Thanks for letting me know @yeandy! Best of luck with your other commitments, I hope they go well 🤗 Opening this one up to the community to complete!" ]
1,687
1,693
null
NONE
null
# What does this PR do? Mimics https://github.com/huggingface/transformers/pull/9006, but for Flax. We want to match how PyTorch's logic accounts for `group_size` and `num_beam_groups` [here](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/beam_search.py#L175) and [here](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/beam_search.py#L249C1-L281C26) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24508/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24508", "html_url": "https://github.com/huggingface/transformers/pull/24508", "diff_url": "https://github.com/huggingface/transformers/pull/24508.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24508.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24507/comments
https://api.github.com/repos/huggingface/transformers/issues/24507/events
https://github.com/huggingface/transformers/pull/24507
1,775,806,828
PR_kwDOCUB6oc5T-Zb1
24,507
Add Compact Convolutional Transformer model (CCT)
{ "login": "rishabbala", "id": 39146400, "node_id": "MDQ6VXNlcjM5MTQ2NDAw", "avatar_url": "https://avatars.githubusercontent.com/u/39146400?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rishabbala", "html_url": "https://github.com/rishabbala", "followers_url": "https://api.github.com/users/rishabbala/followers", "following_url": "https://api.github.com/users/rishabbala/following{/other_user}", "gists_url": "https://api.github.com/users/rishabbala/gists{/gist_id}", "starred_url": "https://api.github.com/users/rishabbala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rishabbala/subscriptions", "organizations_url": "https://api.github.com/users/rishabbala/orgs", "repos_url": "https://api.github.com/users/rishabbala/repos", "events_url": "https://api.github.com/users/rishabbala/events{/privacy}", "received_events_url": "https://api.github.com/users/rishabbala/received_events", "type": "User", "site_admin": false }
[ { "id": 5724035499, "node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub", "name": "Model on the Hub", "color": "9CA0E9", "default": false, "description": "" } ]
closed
false
null
[]
[ "Could you let me know how to fix the failed test. The issue apparently is that PIL.Image.LINEAR does not exist", "Rebase on main, a fix has been merge! 😉 ", "Done. Could you let me know if there are any further changes", "Hey! Sorry I forgot to mention this in my previous comment, we are trying to push models to the hub as often as possible, to make it a LOT easier for contributors! Then we share the repo on the hub where everything is supported 🤗 \r\nI would recommend following [this tutorial](https://huggingface.co/docs/transformers/custom_models), and sharing here the uploaded model! Tell me if that sounds good to you! ", "Sorry if I misunderstood, but I followed this [tutorial](https://huggingface.co/docs/transformers/add_new_model#514-port-brandnewbert-to-transformers), and it looks similar to the tutorial you shared. Could you quickly point out the difference between the two, or what additionally I must do?", "Sure, adding the model to the hub rather than with a PR to transformers would give a model usable out of the box, without having to fix all of the CI, while keeping your code and no need for reviews etc. Mostly you would have to add MAPPINGS as explain in the tutorial, and users will just need to use `trust_remote_code = True` when doing `class.from_pretrained`! ", "I've made the changes, and uploaded the models: [cct_224](https://huggingface.co/rishabbala/cct_14_7x2_224) and [cct_384](https://huggingface.co/rishabbala/cct_14_7x2_384). Let me know if this looks ok, and if I should go ahead and close the PR. Thanks for the help :)", "Look good to me thanks a lot! Would suggest you to add a model card for people who don't really know the model and might want a quick way to use it! " ]
1,687
1,689
1,688
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20133 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24507/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24507", "html_url": "https://github.com/huggingface/transformers/pull/24507", "diff_url": "https://github.com/huggingface/transformers/pull/24507.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24507.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24506/comments
https://api.github.com/repos/huggingface/transformers/issues/24506/events
https://github.com/huggingface/transformers/issues/24506
1,775,598,020
I_kwDOCUB6oc5p1XnE
24,506
Image-Classification Dataloader Missing Image Key
{ "login": "umarkhalidAI", "id": 46391971, "node_id": "MDQ6VXNlcjQ2MzkxOTcx", "avatar_url": "https://avatars.githubusercontent.com/u/46391971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarkhalidAI", "html_url": "https://github.com/umarkhalidAI", "followers_url": "https://api.github.com/users/umarkhalidAI/followers", "following_url": "https://api.github.com/users/umarkhalidAI/following{/other_user}", "gists_url": "https://api.github.com/users/umarkhalidAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarkhalidAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarkhalidAI/subscriptions", "organizations_url": "https://api.github.com/users/umarkhalidAI/orgs", "repos_url": "https://api.github.com/users/umarkhalidAI/repos", "events_url": "https://api.github.com/users/umarkhalidAI/events{/privacy}", "received_events_url": "https://api.github.com/users/umarkhalidAI/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This dataset seems to have `img` as key in the dataset.", "If you just want to try the script, the official example in the README is\r\n\r\n```python\r\npython run_image_classification.py \\\r\n --dataset_name beans \\\r\n --output_dir ./beans_outputs/ \\\r\n --remove_unused_columns False \\\r\n --do_train \\\r\n --do_eval \\\r\n --push_to_hub \\\r\n --push_to_hub_model_id vit-base-beans \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 5 \\\r\n --per_device_train_batch_size 8 \\\r\n --per_device_eval_batch_size 8 \\\r\n --logging_strategy steps \\\r\n --logging_steps 10 \\\r\n --evaluation_strategy epoch \\\r\n --save_strategy epoch \\\r\n --load_best_model_at_end True \\\r\n --save_total_limit 3 \\\r\n --seed 1337\r\n```", "> This dataset seems to have `img` as key in the dataset.\r\n\r\nThat's not the case. In the trained, _remove_unused_columns remove \"image\" keys\r\n", "What I am saying is the original dataset has the `img` key, but the script expects `image` key. That is why there is an error. ", "If you really want to try `cifar10`, the quick way is to replace `\"image\"` to `\"img\"`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-1029-azure-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - ### Who can help? @amyeroberts I tried to run the official example of image classification as: python run_image_classification.py --output_dir output --dataset_name cifar10 --do_train --overwrite_output_dir ![image](https://github.com/huggingface/transformers/assets/46391971/ab2e7a19-b024-4dc7-91c2-a79833492037) ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I just tried to run the official image-classification examples. ### Expected behavior I was expecting the code to find the image key.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24506/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24505/comments
https://api.github.com/repos/huggingface/transformers/issues/24505/events
https://github.com/huggingface/transformers/pull/24505
1,775,591,167
PR_kwDOCUB6oc5T9s68
24,505
Clean load keys
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger with this change, a few of our trained models that used the older format no longer are able to properly load unless we set `strict=False` because they contain `embeddings.position_ids` key that no longer exists. I wonder if there is a way to land this change such that it would be backwards compat with older model files as well. I see a few different issues have popped up as a result of this change and a lot of just required loading and resaving the model files but that is sometimes difficult to do at scale." ]
1,687
1,707
1,687
COLLABORATOR
null
# What does this PR do? This PR finishes the work done in and completely cleans up the `_keys_to_ignore_on_save`, `_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected`. Those were used in three situations: 1. Not saving the tied weights. This came from the (wrong) assumption that torch would take twice the space for tied weights (which it doesn't) and also created bugs where non-tied weights were not saved (unless a hack was added like for RoBERTa models). This is not necessary since PyTorch doesn't take more space for tied weights and safetensors will properly remove them (with `_tied_weights_keys`) 2. Ignoring non-saved non-persistent buffers. This can be done automatically in the code of modeling_utils, as non-persistent buffers are keys in the model's named buffers but not in the state dict, so they are easy to detect 3. Ignoring known unexpected weights from another architecture (like the pooler). This isn't necessary anymore since we don't issue a warning in this case.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24505/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24505", "html_url": "https://github.com/huggingface/transformers/pull/24505", "diff_url": "https://github.com/huggingface/transformers/pull/24505.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24505.patch", "merged_at": 1687891541000 }
https://api.github.com/repos/huggingface/transformers/issues/24504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24504/comments
https://api.github.com/repos/huggingface/transformers/issues/24504/events
https://github.com/huggingface/transformers/pull/24504
1,775,555,966
PR_kwDOCUB6oc5T9lM7
24,504
Add bitsandbytes support for gpt2 models
{ "login": "DarioSucic", "id": 7669299, "node_id": "MDQ6VXNlcjc2NjkyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7669299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarioSucic", "html_url": "https://github.com/DarioSucic", "followers_url": "https://api.github.com/users/DarioSucic/followers", "following_url": "https://api.github.com/users/DarioSucic/following{/other_user}", "gists_url": "https://api.github.com/users/DarioSucic/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarioSucic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarioSucic/subscriptions", "organizations_url": "https://api.github.com/users/DarioSucic/orgs", "repos_url": "https://api.github.com/users/DarioSucic/repos", "events_url": "https://api.github.com/users/DarioSucic/events{/privacy}", "received_events_url": "https://api.github.com/users/DarioSucic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> FYI, on my side I get these failing tests, I believe there might be a small difference between our envs. We can always update the expected sentence later in case they fail on the daily CI (which probably will be the case). Happy also to add the missing test in a follow up PR.\r\n> <img alt=\"Screenshot 2023-06-27 at 16 08 55\" width=\"1314\" src=\"https://user-images.githubusercontent.com/49240599/249179643-418abf8c-3408-49b6-8212-5e4e75e5f284.png\">\r\n\r\nAha, it's been stable for me so far, but I can see that happening. If it's any help I'm running this on an RTX 4090 and `torch==2.1.0.dev20230603+cu121`.\r\n\r\n> Also one test is failing for 4bit:\r\n> \r\n> ```shell\r\n> FAILED tests/bnb/test_4bit.py::Bnb4BitGPT2Test::test_memory_footprint - AttributeError: 'GPT2MLP' object has no attribute 'dense_4h_to_h'\r\n> ```\r\n> \r\n> Could quickly address a fix? 🙏 After that we should be ready to merge\r\n\r\nNice catch! I have a fix in mind that should also remove most of the int8 test code I added, so I'll get that in asap." ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? The current bitsandbytes integration only supports models using [nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear), which excludes gpt2 and other models that instead use [Conv1D](https://github.com/huggingface/transformers/blob/68c92981ff2b804979d2e6107eeefe298d1e5183/src/transformers/pytorch_utils.py#L85). This PR enables loading/serialization of these models, as well as gpt2-xl tests for int8 and 4bit. This is achieved by transposing the weight matrices of Conv1D layers before quantization. Note: Following the suggestion in the bnb tests to use models with >1b params only leaves [gpt2-xl](https://huggingface.co/gpt2-xl), which is unfortunately a 6.4GB download due to being stored in float32. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @younesbelkada, @TimDettmers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24504/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24504/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24504", "html_url": "https://github.com/huggingface/transformers/pull/24504", "diff_url": "https://github.com/huggingface/transformers/pull/24504.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24504.patch", "merged_at": 1687924532000 }
https://api.github.com/repos/huggingface/transformers/issues/24503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24503/comments
https://api.github.com/repos/huggingface/transformers/issues/24503/events
https://github.com/huggingface/transformers/pull/24503
1,775,508,611
PR_kwDOCUB6oc5T9ahd
24,503
Separate kwargs of tokenizer and feature_extractor in `ClapProcessor`
{ "login": "anmolojha", "id": 35429956, "node_id": "MDQ6VXNlcjM1NDI5OTU2", "avatar_url": "https://avatars.githubusercontent.com/u/35429956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anmolojha", "html_url": "https://github.com/anmolojha", "followers_url": "https://api.github.com/users/anmolojha/followers", "following_url": "https://api.github.com/users/anmolojha/following{/other_user}", "gists_url": "https://api.github.com/users/anmolojha/gists{/gist_id}", "starred_url": "https://api.github.com/users/anmolojha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anmolojha/subscriptions", "organizations_url": "https://api.github.com/users/anmolojha/orgs", "repos_url": "https://api.github.com/users/anmolojha/repos", "events_url": "https://api.github.com/users/anmolojha/events{/privacy}", "received_events_url": "https://api.github.com/users/anmolojha/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24503). All of your documentation changes will be reflected on that endpoint.", "@sanchit-gandhi can we fix both the points discussed in the issue (#23648) (shared kwargs and unexpected padding behaviour) in the same PR or should we have separate PRs?", "Separate PR would be preferable since you can work on each bit in isolation (making it easier for yourself and the reviewer) and merge this one as soon as you have it ready :)", "Think it's going to be quite fast to finish this PR - WDYT @anmolojha?", "Hey @sanchit-gandhi, thanks for following up 🙏\r\nThis got delayed because of a bunch of reasons, I will close this this weekend.", "Awesome! Looking forward to it @anmolojha!", "Would like the tokenizer expert's review cc @ArthurZucker ", "Gently pinging @ArthurZucker for a final review here", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Do you want to have a go at explicitly writing out the required args for the feature extractor / tokenizer @anmolojha? It should make the signature for the `__call__` method more precise as @ArthurZucker astutely mentioned." ]
1,687
1,701
1,700
NONE
null
# What does this PR do? Currently, `ClapProcessor` shares kwargs between the tokenizer and the feature extractor. This PR introduces separate kwargs for both of them. This was discussed in [comments](https://github.com/huggingface/transformers/issues/23648#issuecomment-1557532041) of #23648. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24503/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24503", "html_url": "https://github.com/huggingface/transformers/pull/24503", "diff_url": "https://github.com/huggingface/transformers/pull/24503.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24503.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24502/comments
https://api.github.com/repos/huggingface/transformers/issues/24502/events
https://github.com/huggingface/transformers/issues/24502
1,775,486,427
I_kwDOCUB6oc5p08Xb
24,502
Extremely slow model inference for load_in_4bit
{ "login": "cnut1648", "id": 37067883, "node_id": "MDQ6VXNlcjM3MDY3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/37067883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnut1648", "html_url": "https://github.com/cnut1648", "followers_url": "https://api.github.com/users/cnut1648/followers", "following_url": "https://api.github.com/users/cnut1648/following{/other_user}", "gists_url": "https://api.github.com/users/cnut1648/gists{/gist_id}", "starred_url": "https://api.github.com/users/cnut1648/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cnut1648/subscriptions", "organizations_url": "https://api.github.com/users/cnut1648/orgs", "repos_url": "https://api.github.com/users/cnut1648/repos", "events_url": "https://api.github.com/users/cnut1648/events{/privacy}", "received_events_url": "https://api.github.com/users/cnut1648/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hey @cnut1648 👋 \r\n\r\nWe also had an internal user reporting the same issue, I'm currently exploring whether it is from the text generation end or from the 4-bit end. Our internal user also reported that unbatched text generation worked fine (in terms of output quality and inference time), so you can try that route until this issue gets sorted :)\r\n\r\ncc @younesbelkada \r\n\r\n", "Hi @cnut1648 \r\nThanks for bringing this discussion up \r\nNote that this is a more or less known issue, bitsandbytes is working on optimized 4bit inference kernels that should be much faster than the current ones. \r\nOne the other hand, I believe that there is a high variance across devices, for example this user: https://github.com/huggingface/transformers/issues/23989#issuecomment-1577727968 reports the same speed than bf16 using Falcon. \r\nDo you face the same issue if you run your inference on a single A100?", "Hey @gante, @younesbelkada thanks! Excited to see how bnb 4bit inference will accelerate the generation. For unbatched inference (bsz=1) w/ multi-gpu, I tried that it takes more than 1 hour and only produced 4 out of 6 inputs and I have to cut it to save cost. As for one single A 100 4 bit, I have\r\n- batched: 3038 seconds, no big improvement\r\n- unbatched: again this go over 1 hour", "Actually, I had the same confusion, I used the load_in_4bit parameter and got a 2-3x slower inference time than full precision", "@BaileyWei 2-3x slower is to be expected with `load_in_4bit` (vs 16-bit weights), on any model -- that's the current price of performing dynamic quantization :)", "@cnut1648 @younesbelkada \r\n\r\nIf we take the code example from @cnut1648 and play around with the following settings\r\n1. `tiiuae/falcon-7b-instruct` vs `huggyllama/llama-7b` (i.e. Falcon vs LLaMA)\r\n2. `load_in_4bit=True` vs `torch_dtype=torch.bfloat16`\r\n3. short prompts vs long prompts (e.g. first two vs last two in the code example)\r\n\r\nWe quickly conclude that the problem seems to be related to Falcon itself, not the 4-bit part nor `generate`. In a nutshell, on my end, `load_in_4bit=True` added a stable 4-5x slowdown vs `torch_dtype=torch.bfloat16`, but the execution time grew very quickly with the sequence length (i.e. with the prompt size and with `max_new_tokens`) AND batch size. This does not happen with other models, and explains the extremely slow execution times you're seeing -- especially in 4-bit format. I'm not sure if there are additional 4-bit-related issues that further explain what you're seeing, but the behavior I described above is not normal.\r\n\r\nAs for solutions: currently, the Falcon code sits on the Hub, and we have a [PR open](https://github.com/huggingface/transformers/pull/24523) to add it to `transformers`. If the issue is still present after the port is complete, we can dive deeper 🤗 ", "Thank you so much for this @gante!", "@cnut1648 \r\nCheck out this tweet: https://twitter.com/Tim_Dettmers/status/1677826353457143808 you should be able to benefit from that out of the box by just updating bitsandbytes; can you quickly give it a try? 
🙏 ", "Hmm @younesbelkada I have a test run today using llama-65b and falcon-40b.\r\nSince it seems that bnb 4bit inference supports batch size = 1, I modify the code to be this\r\n```python\r\nfrom transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\nfrom time import time\r\n\r\n# model_name = \"tiiuae/falcon-40b-instruct\"\r\nmodel_name = \"huggyllama/llama-65b\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, device_map=\"auto\", load_in_4bit=True, trust_remote_code=True)\r\n\r\ntokenizer.padding_side = \"left\"\r\n\r\n# Define PAD Token = EOS Token\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.config.pad_token_id = model.config.eos_token_id\r\n\r\n# use different length sentences to test batching\r\n# measure time\r\nstart_time = time()\r\nsentences = [\r\n \"Hello, my dog is a little\",\r\n \"Today, I\",\r\n \"I love\",\r\n \"Repeat after me: I love you.\",\r\n \"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\\nDaniel: Hello, Girafatron!\\nGirafatron:\",\r\n \"You are a human that uses a lot of expletives. For each one of your responses to the user you use expletives liberally. If you find yourself questioning whether it’s appropriate or not you’re to say fuck that noise and keep using your expletives regardless. Human: Tell me a story. You:\"\r\n ]\r\n\r\nfor sentence in sentences:\r\n inputs = tokenizer(sentence, return_tensors=\"pt\", padding=True).to(model.device)\r\n # inputs.pop(\"token_type_ids\")\r\n\r\n output_sequences = model.generate(**inputs, max_new_tokens=400, temperature=0.7)\r\n print(tokenizer.decode(output_sequences[0], skip_special_tokens=True))\r\n\r\nprint(\"Elapsed time: \", time() - start_time)\r\n```\r\nEssentially for falcon-40b, the issue still remains, that the model in 4bit is just extremely slow (2561s).\r\nFor llama, I get\r\n- 4 bit: 566s\r\n- w/o 4 bit: 550s\r\nSo it seems that there is no major benefits but the memory usage did decrease.", "@cnut1648 the Falcon code on the hub is known to be very slow, and it may explain the issue. We are about to release the `transformers`-side Falcon, so hopefully the problem should get away on its own soon 🤞 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello @gante , what's the difference between load_in_4bit=True vs torch_dtype=torch.bfloat16, are they both quantisation techniques? \r\n", "@Ali-Issa-aems This guide answers all related questions: https://huggingface.co/docs/transformers/perf_infer_gpu_one" ]
1,687
1,693
1,691
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.10.179-171.711.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using `load_in_4bit` makes the model extremely slow (with accelerate 0.21.0.dev0 and bitsandbytes 0.39.1, should be latest version and I installed from source) Using the following code ```python from transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer import torch from time import time model_name = "tiiuae/falcon-40b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, device_map="auto", trust_remote_code=True) tokenizer.padding_side = "left" # Define PAD Token = EOS Token tokenizer.pad_token = tokenizer.eos_token model.config.pad_token_id = model.config.eos_token_id # use different length sentences to test batching # measure time start_time = time() sentences = [ "Hello, my dog is a little", "Today, I", "I love", "Repeat after me: I love you.", "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", "You are a human that uses a lot of expletives. For each one of your responses to the user you use expletives liberally. If you find yourself questioning whether it’s appropriate or not you’re to say fuck that noise and keep using your expletives regardless. Human: Tell me a story. You:" ] inputs = tokenizer(sentences, return_tensors="pt", padding=True).to(model.device) inputs.pop("token_type_ids") output_sequences = model.generate(**inputs, max_new_tokens=400, temperature=0.7) print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True)) print("Elapsed time: ", time() - start_time) ``` This gives me 3138 seconds on 8xA100 40G GPUs. ### Expected behavior If I instead use bf16 version, i.e. by using this as model init ```python model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True) ``` It gives 266 seconds, more than 10x faster. On the other hand, load in 4bit only cut down memory footprint by 4x. I wonder if there are other things I should do to fully exploit the benefits of 4bit. Right now the generation speed is not usable for real time conversation. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24501/comments
https://api.github.com/repos/huggingface/transformers/issues/24501/events
https://github.com/huggingface/transformers/pull/24501
1,775,272,570
PR_kwDOCUB6oc5T8my5
24,501
Fix link in utils
{ "login": "SoyGema", "id": 24204714, "node_id": "MDQ6VXNlcjI0MjA0NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SoyGema", "html_url": "https://github.com/SoyGema", "followers_url": "https://api.github.com/users/SoyGema/followers", "following_url": "https://api.github.com/users/SoyGema/following{/other_user}", "gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}", "starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions", "organizations_url": "https://api.github.com/users/SoyGema/orgs", "repos_url": "https://api.github.com/users/SoyGema/repos", "events_url": "https://api.github.com/users/SoyGema/events{/privacy}", "received_events_url": "https://api.github.com/users/SoyGema/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey thanks for the quick response! \r\nNote that the website examples walkthrough is still broken 😔 \r\nLMK uf you shall need a separated issue for this ! or maybe that 404 is creating a placeholder for `v4.30.0`? \r\nHave a nice day", "Thanks for the contribution @SoyGema!" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #24497 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24501", "html_url": "https://github.com/huggingface/transformers/pull/24501", "diff_url": "https://github.com/huggingface/transformers/pull/24501.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24501.patch", "merged_at": 1687803969000 }
https://api.github.com/repos/huggingface/transformers/issues/24500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24500/comments
https://api.github.com/repos/huggingface/transformers/issues/24500/events
https://github.com/huggingface/transformers/issues/24500
1,775,248,972
I_kwDOCUB6oc5p0CZM
24,500
Installation from source
{ "login": "Lawrence0319", "id": 66525267, "node_id": "MDQ6VXNlcjY2NTI1MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/66525267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lawrence0319", "html_url": "https://github.com/Lawrence0319", "followers_url": "https://api.github.com/users/Lawrence0319/followers", "following_url": "https://api.github.com/users/Lawrence0319/following{/other_user}", "gists_url": "https://api.github.com/users/Lawrence0319/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lawrence0319/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lawrence0319/subscriptions", "organizations_url": "https://api.github.com/users/Lawrence0319/orgs", "repos_url": "https://api.github.com/users/Lawrence0319/repos", "events_url": "https://api.github.com/users/Lawrence0319/events{/privacy}", "received_events_url": "https://api.github.com/users/Lawrence0319/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You might have a mix of different installations in your environment. I would try again in a fresh Python environment.", "I got another problem when running the run_fusion_glue.py from https://docs.adapterhub.ml/training.html#train-adapterfusion in Google Colab. I got the following error:\r\nTraceback (most recent call last):\r\n File \"/content/drive/MyDrive/run_fusion_glue.py\", line 29, in <module>\r\n from transformers import (\r\nImportError: cannot import name 'AdapterArguments' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)", "That's a problem in your `run_fusion_glue.py` script. This class does not exist in `transformers`.", "I just copied from this website: https://github.com/adapter-hub/adapter-transformers/blob/master/examples/pytorch/adapterfusion/run_fusion_glue.py\r\nSo the problem is in that website? Does this class exist in transformers.adapters?", "You should report the issue on that repo yes :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info I tried to install the transformers library from source by following the link https://huggingface.co/docs/transformers/installation#install-from-source. When testing whether the library is correctly installed, I followed the recommendation. from transformers import pipeline print(pipeline('sentiment-analysis')('I love you')) And then, I got the following error: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_12' from 'transformers.pytorch_utils' (/usr/local/lib/python3.10/dist-packages/transformers/pytorch_utils.py) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name) 1086 f" traceback):\n{e}" 1087 ) from e -> 1088 1089 def __reduce__(self): 1090 return (self.__class__, (self._name, self.__file__, self._import_structure)) RuntimeError: Failed to import transformers.models.tapas.modeling_tapas because of the following error (look up to see its traceback): cannot import name 'is_torch_greater_or_equal_than_1_12' from 'transformers.pytorch_utils' (/usr/local/lib/python3.10/dist-packages/transformers/pytorch_utils.py) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction pip install git+https://github.com/huggingface/transformers from transformers import pipeline # print(pipeline('sentiment-analysis')('I love you')) pipe = pipeline("sentiment-analysis") pipe("I love you") ### Expected behavior Would like a guidance to fix the bug
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24500/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24499/comments
https://api.github.com/repos/huggingface/transformers/issues/24499/events
https://github.com/huggingface/transformers/pull/24499
1,775,248,962
PR_kwDOCUB6oc5T8hnl
24,499
[WIP] Add LaVIN model
{ "login": "rishabbala", "id": 39146400, "node_id": "MDQ6VXNlcjM5MTQ2NDAw", "avatar_url": "https://avatars.githubusercontent.com/u/39146400?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rishabbala", "html_url": "https://github.com/rishabbala", "followers_url": "https://api.github.com/users/rishabbala/followers", "following_url": "https://api.github.com/users/rishabbala/following{/other_user}", "gists_url": "https://api.github.com/users/rishabbala/gists{/gist_id}", "starred_url": "https://api.github.com/users/rishabbala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rishabbala/subscriptions", "organizations_url": "https://api.github.com/users/rishabbala/orgs", "repos_url": "https://api.github.com/users/rishabbala/repos", "events_url": "https://api.github.com/users/rishabbala/events{/privacy}", "received_events_url": "https://api.github.com/users/rishabbala/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,687
1,687
1,687
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24499/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24499", "html_url": "https://github.com/huggingface/transformers/pull/24499", "diff_url": "https://github.com/huggingface/transformers/pull/24499.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24499.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24498/comments
https://api.github.com/repos/huggingface/transformers/issues/24498/events
https://github.com/huggingface/transformers/pull/24498
1,775,230,043
PR_kwDOCUB6oc5T8dgs
24,498
Compute `dropout_probability` only in training mode (SpeechT5)
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Same as in #24486, but I forgot to check `SpeechT5` when I did search/replace (which is a bit different from other models). Sorry!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24498", "html_url": "https://github.com/huggingface/transformers/pull/24498", "diff_url": "https://github.com/huggingface/transformers/pull/24498.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24498.patch", "merged_at": 1687801387000 }
https://api.github.com/repos/huggingface/transformers/issues/24497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24497/comments
https://api.github.com/repos/huggingface/transformers/issues/24497/events
https://github.com/huggingface/transformers/issues/24497
1,775,140,304
I_kwDOCUB6oc5pzn3Q
24,497
Access to Transformers examples link broken . Impact on navigation as well
{ "login": "SoyGema", "id": 24204714, "node_id": "MDQ6VXNlcjI0MjA0NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SoyGema", "html_url": "https://github.com/SoyGema", "followers_url": "https://api.github.com/users/SoyGema/followers", "following_url": "https://api.github.com/users/SoyGema/following{/other_user}", "gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}", "starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions", "organizations_url": "https://api.github.com/users/SoyGema/orgs", "repos_url": "https://api.github.com/users/SoyGema/repos", "events_url": "https://api.github.com/users/SoyGema/events{/privacy}", "received_events_url": "https://api.github.com/users/SoyGema/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The link is wrong, it should be https://huggingface.co/docs/transformers/run_scripts\r\nWould you like to make a PR with the fix?", "Sure, thanks!", "BTW @sgugger @LysandreJik you can specify doc redirects using a `redirects.yml` like in datasets and other libraries: https://github.com/huggingface/datasets/blob/main/docs/source/_redirects.yml\r\n\r\nAlways good to avoid broken links when we can \r\n\r\ncc @mishig25 too" ]
1,687
1,687
1,687
CONTRIBUTOR
null
### System Info ### Context Hello there! 👋 I'm following the [Translation](https://huggingface.co/docs/transformers/tasks/translation) Transformers tutorial. Thanks for making it possible! I'm currently running the script [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) and, before changing transformers to version `4.31.0.dev`, the following message appears https://github.com/huggingface/transformers/blob/5757923888246ea16b324f53c60ea444574005ed/src/transformers/utils/__init__.py#L218 When I follow the link, the following message appears, and when I click the _here_ link <img width="1048" alt="Screenshot 2023-06-26 at 17 47 23" src="https://github.com/huggingface/transformers/assets/24204714/93afc7f8-ea28-4d19-b596-99120190fd21"> it redirects me to https://huggingface.co/docs/transformers/main/en/examples with a 404 error. ### Potential Fix Would love to give a helping hand here 🙏 like in #24336 and give back for the help I've gotten from #24254, but I am a little bit confused with respect to this. The last version of the google-indexed examples that seems to work is [this](https://huggingface.co/docs/transformers/v4.15.0/examples), related to `v4.15.0` and not `v4.30` nor `v4.29.2`. Can you please confirm that you would validate this link (https://huggingface.co/docs/transformers/v4.15.0/examples) for the utils `__init__.py` script? If not, would you provide a useful link or point me in the right direction? Please let me know if I'm also in the right place, as this could also impact the website? Thanks for the time dedicated to this. ### Who can help? @sgugger @stevhliu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Go to the link described in utils https://huggingface.co/docs/transformers/examples 2. Follow the link provided [ ](https://huggingface.co/docs/transformers/main/en/examples) and get a 404 error ### Expected behavior A working link to the examples for the stable version. It's unclear to me at this point whether that is `4.29` or `4.30`, or where they are.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24497/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24496/comments
https://api.github.com/repos/huggingface/transformers/issues/24496/events
https://github.com/huggingface/transformers/pull/24496
1,774,979,250
PR_kwDOCUB6oc5T7mi7
24,496
Allow for warn_only selection in enable_full_determinism
{ "login": "Frank995", "id": 47689966, "node_id": "MDQ6VXNlcjQ3Njg5OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/47689966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Frank995", "html_url": "https://github.com/Frank995", "followers_url": "https://api.github.com/users/Frank995/followers", "following_url": "https://api.github.com/users/Frank995/following{/other_user}", "gists_url": "https://api.github.com/users/Frank995/gists{/gist_id}", "starred_url": "https://api.github.com/users/Frank995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Frank995/subscriptions", "organizations_url": "https://api.github.com/users/Frank995/orgs", "repos_url": "https://api.github.com/users/Frank995/repos", "events_url": "https://api.github.com/users/Frank995/events{/privacy}", "received_events_url": "https://api.github.com/users/Frank995/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Or don't use the option with a model that does not support it? This is not something that is enabled by default.", "_The documentation is not available anymore as the PR was closed or merged._", "> Or don't use the option with a model that does not support it? This is not something that is enabled by default.\r\n\r\nWell, not everyone can afford it, and it's not good practice to keep something that it's crash prone. e.g. In my company we are using a model to prod and would like to use full determinism to run full end-to-end evaluations to carry out fine tuning experiments. It doesn't really cost anything to add an option there (I pushed another commit), and would save people the burden of \"hacking\" a Docker image just to avoid crashes when deploying an image", "Thank you for educating me on good practices.", "To be honest, I started with a very simple request and you answered assuming to know what was our current situation: \"don't use the option with a model that does not support it?\" is not really a solution in many real world scenarios. I tried to explain to you our situation and you answered sarcastically.\r\nMy point was simply that you currently have an implementation of a function which is not very useable in the case I described to you above, but from your answer I assume that you probably don't care, even if is a two liner change.\r\n\r\nFinally, even if it didn't make any sense you could have just thanked for the contribution instead of being standoffish", "Hi there.\r\n\r\nPyTorch has a built-in mechanism in this function to fail when it can't do its job properly, so that the user is not surprised to have non-reproducible result. You are, of course, free to change it in your experiments (and not get full reproducibility). You are wrong to assume that every person using Transformers would like to ignore the error. I apologize if my first answer was maybe too short and did not convey this point. There was nothing aggressive in it so you did not have to answer with a patronizing tone.\r\n\r\nThe function can be duplicated (it's five lines of code) and called with the arguments you require at the beginning of the script instead of calling via Transformers, which should solve your issue without changing the experience for other users.", "You're right and you make a fair point. Indeed my first commit was a bit naive in assuming that everyone would like to have warn_only=True. That's why I changed it in the second commit (maybe you missed it). Do you think the current status is worth be applied?" ]
1,687
1,687
1,687
CONTRIBUTOR
null
`enable_full_determinism` crashes if the model has layers that do not support it (like LayoutLMv2). This fixes it.
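For illustration, a minimal sketch of the requested behaviour, assuming the usual PyTorch determinism knobs; this is not the library's exact implementation, and the environment variable values are assumptions.

```python
import os
import torch
from transformers import set_seed

def enable_full_determinism_sketch(seed: int, warn_only: bool = False):
    set_seed(seed)
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
    # With warn_only=True, ops that lack a deterministic kernel (e.g. some in LayoutLMv2)
    # emit a warning instead of raising at runtime.
    torch.use_deterministic_algorithms(True, warn_only=warn_only)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```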
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24496/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24496", "html_url": "https://github.com/huggingface/transformers/pull/24496", "diff_url": "https://github.com/huggingface/transformers/pull/24496.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24496.patch", "merged_at": 1687956877000 }
https://api.github.com/repos/huggingface/transformers/issues/24495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24495/comments
https://api.github.com/repos/huggingface/transformers/issues/24495/events
https://github.com/huggingface/transformers/issues/24495
1,774,848,385
I_kwDOCUB6oc5pygmB
24,495
DIT Text Detection model
{ "login": "arvisioncode", "id": 105910211, "node_id": "U_kgDOBlAPww", "avatar_url": "https://avatars.githubusercontent.com/u/105910211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arvisioncode", "html_url": "https://github.com/arvisioncode", "followers_url": "https://api.github.com/users/arvisioncode/followers", "following_url": "https://api.github.com/users/arvisioncode/following{/other_user}", "gists_url": "https://api.github.com/users/arvisioncode/gists{/gist_id}", "starred_url": "https://api.github.com/users/arvisioncode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arvisioncode/subscriptions", "organizations_url": "https://api.github.com/users/arvisioncode/orgs", "repos_url": "https://api.github.com/users/arvisioncode/repos", "events_url": "https://api.github.com/users/arvisioncode/events{/privacy}", "received_events_url": "https://api.github.com/users/arvisioncode/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I know and I desperately want that model to be available... :D however for that, Mask R-CNN first needs to be integrated. I have a PR on that #22973, I need to finish that one up, then we can add it", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge Hi, Do you have any plan for this? I want to export the DiT detection (Layout Analysis) model to ONNX.\r\nPlease let me know If there's a new comment about this.", "Hello @arvisioncode ,\r\nHave you managed to tackle this? I'm willing to fine-tune the base model for this task on CORD or FUNSD but not sure if masked image modeling is the right paradigm for this. Eventually I need to train the model on an Indic language. I've done the full flow for DONUT but DiT seems to be lighter. Will appreciate any resource you might point me towards.\r\n\r\nThanks" ]
1,687
1,708
1,691
NONE
null
### Feature request Add the DiT text detection models from https://github.com/microsoft/unilm/tree/master/dit/text_detection to Hugging Face, so they can be run easily with the transformers library like other models such as https://huggingface.co/microsoft/dit-base ### Motivation These models perform very well and would be very interesting to include in HF, both to run simple inference and to convert them to other formats such as ONNX
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24495/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24494
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24494/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24494/comments
https://api.github.com/repos/huggingface/transformers/issues/24494/events
https://github.com/huggingface/transformers/issues/24494
1,774,819,166
I_kwDOCUB6oc5pyZde
24,494
Finetune ClipSeg model
{ "login": "sleeping4cat", "id": 112309211, "node_id": "U_kgDOBrGz2w", "avatar_url": "https://avatars.githubusercontent.com/u/112309211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sleeping4cat", "html_url": "https://github.com/sleeping4cat", "followers_url": "https://api.github.com/users/sleeping4cat/followers", "following_url": "https://api.github.com/users/sleeping4cat/following{/other_user}", "gists_url": "https://api.github.com/users/sleeping4cat/gists{/gist_id}", "starred_url": "https://api.github.com/users/sleeping4cat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleeping4cat/subscriptions", "organizations_url": "https://api.github.com/users/sleeping4cat/orgs", "repos_url": "https://api.github.com/users/sleeping4cat/repos", "events_url": "https://api.github.com/users/sleeping4cat/events{/privacy}", "received_events_url": "https://api.github.com/users/sleeping4cat/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 5769473378, "node_id": "LA_kwDOCUB6oc8AAAABV-MtYg", "url": "https://api.github.com/repos/huggingface/transformers/labels/Vision", "name": "Vision", "color": "C079EF", "default": false, "description": "" } ]
open
false
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false } ]
[ "cc @alaradirik for information.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sgugger Can you provide some insights/help/update to my requested feature?", "cc @amyeroberts and @rafaelpadilla " ]
1,687
1,703
null
NONE
null
### Feature request Quite recently, I was exploring zero-shot classification to segment medical images, and it looks quite promising. I stumbled upon ```ClipSeg``` a few days ago and it looked wonderful and well-suited for my work. Unfortunately, I couldn't find any tutorials or notebooks that showed how to fine-tune a ClipSeg model. I am assuming we have to train the decoder with a dataset containing binary images of cells, their corresponding masks, and a text description, but I'm a bit confused. Are there any tutorials/resources anyone could suggest on this topic? I couldn't find any. ### Motivation ```ClipSeg``` shows more potential than SAM (Segment Anything Model). Unfortunately, there's no fine-tuning script nor instructions on **how to prepare the dataset**, which is very frustrating. I would love some help from the community. Another point: zero-shot segmentation with fine-tuning looks like a far better option than training a model like ```U-Net```, ```R-CNN``` and others from scratch when you have very few images and not much room to play around with. ### Your contribution I could promote the PR on my LinkedIn, where I have a lot of AI experts among my connections, and I can contribute to the programming as well.
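For reference, a rough sketch of one possible way to fine-tune only the CLIPSeg decoder on (image, prompt, mask) triplets. The checkpoint name, the exposed `model.clip` attribute, the logit resolution and the data layout are assumptions; treat this as a starting point rather than a recipe.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# Keep the CLIP backbone frozen and train only the lightweight decoder.
for p in model.clip.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()

def training_step(image, prompt, mask):
    """image: PIL image, prompt: str, mask: float tensor (H, W) with values in {0, 1}."""
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    logits = model(**inputs).logits                       # low-resolution segmentation logits
    target = F.interpolate(mask[None, None], size=logits.shape[-2:], mode="nearest")
    loss = criterion(logits.reshape(-1), target.reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```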
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24494/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24494/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24493
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24493/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24493/comments
https://api.github.com/repos/huggingface/transformers/issues/24493/events
https://github.com/huggingface/transformers/pull/24493
1,774,812,376
PR_kwDOCUB6oc5T7BoD
24,493
Make `framework` as class property
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,687
1,688
1,687
COLLABORATOR
null
# What does this PR do? Similar to #24299, this makes the property available at the class level. (Not the most interesting/useful change, I agree.) (This approach is a bit hacky but it works; it is deprecated in Python 3.11.)
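For illustration, a hedged sketch of the pattern being referred to: stacking `@classmethod` and `@property` makes the attribute readable on the class itself. This chaining works on Python 3.9/3.10 and is deprecated from 3.11, which is the caveat mentioned above. The class and attribute names are placeholders.

```python
class ExampleModel:
    _framework = "pt"

    @classmethod
    @property
    def framework(cls) -> str:
        return cls._framework

print(ExampleModel.framework)      # "pt", no instance needed
print(ExampleModel().framework)    # still works on instances
```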
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24493/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24493", "html_url": "https://github.com/huggingface/transformers/pull/24493", "diff_url": "https://github.com/huggingface/transformers/pull/24493.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24493.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24492
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24492/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24492/comments
https://api.github.com/repos/huggingface/transformers/issues/24492/events
https://github.com/huggingface/transformers/pull/24492
1,774,705,650
PR_kwDOCUB6oc5T6qLC
24,492
[InstructBLIP] Fix bos token of LLaMa checkpoints
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> However this made me wonder, let's say someone trains a new InstructBLIP model with LLaMa as language model, and which has the tokenizer and model's config properly set. Then the line introduced in this PR might not be what we want? cc @gante\r\n\r\n@NielsRogge I don't think there's much more we can do: the model was probably trained with the incorrect assumption that `0` is the BOS token, in which case a post-generation fix is the only way to stay consistent with the original repo. \r\n\r\n(Have we double-checked that the results do change if we change the config's BOS token in the InstructBLIP config? If they don't, then we can simply update the config.)", "I tried that but that doesn't fix it. Guess this is the only solution.", "Just so I understand this fix, why doesn't updating the checkpoint's tokenizer's BOS token ID to resolve this? ", "> why doesn't updating the checkpoint's tokenizer's BOS token ID to resolve this?\r\n\r\n@amyeroberts TL;DR it's impossible to keep the same behavior as the original model with a config fix 💔 \r\n\r\n### Full story\r\n\r\nHere's what happens when we use InstructBLIP:\r\n1. the tokenized prompt (starting with token = 2, from the [tokenizer BOS token](https://huggingface.co/Salesforce/instructblip-vicuna-7b/blob/main/tokenizer_config.json#L2)) is being passed to the custom `InstructBlipForConditionalGeneration.generate`. \r\n2. We compute its embeddings from `input_ids` and we pass it to the default `generate`. We do not pass `input_ids` to `generate`.\r\n3. If we don't pass `input_ids` to `InstructBlipForConditionalGeneration.generate`, it is initialized with the default BOS token (from the model config) before embedding it.\r\n4. The default BOS token in the model config is 0 ([source](https://huggingface.co/Salesforce/instructblip-vicuna-7b/blob/main/config.json#L97)), and not 2\r\n\r\nAs a result:\r\n- If we fix the tokenizer such that BOS is 0 -> prompted input will diverge from the main repo, the first token is now 0 instead of 2 = different embeddings = different generation\r\n- If we fix the model config such that the BOS is 2 -> unprompted input will diverge from the main repo, the first token is now 2 instead of 0 = different embeddings = different generation\r\n", "Feel free to merge :)" ]
1,687
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This PR adds a fix for InstructBLIP as discussed offline with @gante. The InstructBLIP models trained with Vicuna (LLaMa) checkpoints used inconsistent model/tokenizer files during training, hence the authors include [this line](https://github.com/salesforce/LAVIS/blob/4a85b17846ee62f09c40f37cc955dd33c2abec68/lavis/models/blip2_models/blip2_vicuna_instruct.py#L372) to fix it. This is not required for the models that use Flan-T5 checkpoints. cc @ArthurZucker However, this made me wonder: suppose someone trains a new InstructBLIP model with LLaMa as the language model, which has the tokenizer and the model's config properly set. Then the line introduced in this PR might not be what we want? cc @gante
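For illustration, a hedged sketch of the kind of post-generation fix described above: prepend the expected BOS id when the language backbone is LLaMa/Vicuna. The concrete id (2) and the backbone check are assumptions taken from the discussion, not the merged patch itself.

```python
import torch

LLAMA_BOS_TOKEN_ID = 2  # assumption: the BOS id the original LAVIS code prepends

def maybe_prepend_bos(generated_ids: torch.LongTensor, is_llama_backbone: bool) -> torch.LongTensor:
    if not is_llama_backbone:
        return generated_ids  # Flan-T5 checkpoints do not need the fix
    bos = torch.full(
        (generated_ids.shape[0], 1), LLAMA_BOS_TOKEN_ID,
        dtype=generated_ids.dtype, device=generated_ids.device,
    )
    return torch.cat([bos, generated_ids], dim=1)
```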
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24492", "html_url": "https://github.com/huggingface/transformers/pull/24492", "diff_url": "https://github.com/huggingface/transformers/pull/24492.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24492.patch", "merged_at": 1689104582000 }
https://api.github.com/repos/huggingface/transformers/issues/24491
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24491/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24491/comments
https://api.github.com/repos/huggingface/transformers/issues/24491/events
https://github.com/huggingface/transformers/issues/24491
1,774,701,518
I_kwDOCUB6oc5px8vO
24,491
OutOfMemoryError: CUDA out of memory despite available GPU memory
{ "login": "HumzaSami00", "id": 101699223, "node_id": "U_kgDOBg_Olw", "avatar_url": "https://avatars.githubusercontent.com/u/101699223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HumzaSami00", "html_url": "https://github.com/HumzaSami00", "followers_url": "https://api.github.com/users/HumzaSami00/followers", "following_url": "https://api.github.com/users/HumzaSami00/following{/other_user}", "gists_url": "https://api.github.com/users/HumzaSami00/gists{/gist_id}", "starred_url": "https://api.github.com/users/HumzaSami00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HumzaSami00/subscriptions", "organizations_url": "https://api.github.com/users/HumzaSami00/orgs", "repos_url": "https://api.github.com/users/HumzaSami00/repos", "events_url": "https://api.github.com/users/HumzaSami00/events{/privacy}", "received_events_url": "https://api.github.com/users/HumzaSami00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Your computation does not inclide the optimizer states (an additional 2*3B) and all intermediate activations saved to compute the radients for the backward pass, which will be huge at a batch size of 16 with sequence lengths of 600", "Seems you are right. But Why memoray allocation is keep increasing ? I started training with batch size 2 and it allocated 9GB VRAM but after 3 epoch it was taking 16Gb VRAM. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info I’m encountering an issue with GPU memory allocation while training a GPT-2 model on a GPU with 24 GB of VRAM. Despite having a substantial amount of available memory, I’m receiving the following error: `OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.68 GiB total capacity; 18.17 GiB already allocated; 64.62 MiB free; 18.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.` Here are the specifications of my setup and the model training: GPU: NVIDIA GPU with 24 GB VRAM Model: GPT-2, approximately 3 GB in size, with ~775M parameters of 32 bits each Training Data: 36,000 training examples with input_ids length of 600 Training Configuration: 5 epochs, batch size of 16, and fp16 enabled These are my calculations: Parameters: 775M parameters of 32 bits each Gradients: Gradients are typically of the same size as the model’s parameters. Batch Size and Training Examples: Batch Size: 16 Training Examples: 36,000 Vector Length: 600 Memory Allocation per Batch: Model: 3 GB (unchanged per batch) Gradients: 3 GB (unchanged per batch) Input Data: 16 x 600 (vector length) x 4 bytes (assuming each value is a 32-bit float) = 37.5 KB per batch Output Data: 16 x 600 (vector length) x 4 bytes (assuming each value is a 32-bit float) = 37.5 KB per batch Based on the above calculations, the memory allocation per batch for my scenario would be approximately: Model: 3 GB Gradients: 3 GB Input and Output Data: 75 KB Training should not take more than 7 GB of memory at most, but it’s taking ~23 GB of VRAM. I would appreciate any insights or suggestions on how to resolve this issue. Thank you in advance for your assistance! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` MODEL_NAME = "gpt2-large" model = AutoModelForCausalLM.from_pretrained(MODEL_NAME) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokenizer.pad_token = tokenizer.eos_token batch = 16 epoch = 5 g_acc_step = 5 lr = 2e-6 training_args = transformers.TrainingArguments( per_gpu_train_batch_size=batch, gradient_accumulation_steps=g_acc_step, num_train_epochs=epoch, learning_rate=lr, # fp16=True, save_total_limit=1, logging_steps=50, logging_strategy = "steps", output_dir=OUTPUT_DIR, max_steps=-1, lr_scheduler_type="cosine", save_strategy ="epoch" ) trainer = transformers.Trainer( model=model, train_dataset=filtered_dataset, args=training_args, callbacks=[LogCallback], data_collator=transformers.DataCollatorForLanguageModeling (tokenizer, mlm=False)) model.config.use_cache= False ``` ### Expected behavior The model is taking a lot more memory than expected
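As the first comment on this issue points out, the ~7 GB estimate omits the optimizer states and the activations saved for the backward pass. A rough, hedged back-of-the-envelope estimate for a plain fp32 AdamW run (exact numbers depend on the setup):

```python
n_params = 775e6        # gpt2-large, approximately
bytes_fp32 = 4

weights     = n_params * bytes_fp32        # ~3.1 GB
grads       = n_params * bytes_fp32        # ~3.1 GB
adam_states = n_params * bytes_fp32 * 2    # exp_avg + exp_avg_sq, ~6.2 GB

total_gb = (weights + grads + adam_states) / 1e9
print(f"~{total_gb:.1f} GB before activations")  # ~12.4 GB, plus activations for batch 16 x 600 tokens
```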
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24491/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24490
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24490/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24490/comments
https://api.github.com/repos/huggingface/transformers/issues/24490/events
https://github.com/huggingface/transformers/pull/24490
1,774,633,280
PR_kwDOCUB6oc5T6aXS
24,490
Update `InstructBlipModelIntegrationTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Fix `InstructBlipModelIntegrationTest`. See comments in the changes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24490/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24490", "html_url": "https://github.com/huggingface/transformers/pull/24490", "diff_url": "https://github.com/huggingface/transformers/pull/24490.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24490.patch", "merged_at": 1687783033000 }
https://api.github.com/repos/huggingface/transformers/issues/24489
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24489/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24489/comments
https://api.github.com/repos/huggingface/transformers/issues/24489/events
https://github.com/huggingface/transformers/pull/24489
1,774,464,687
PR_kwDOCUB6oc5T5050
24,489
deepspeed z1/z2 state dict fix
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello Sylvain, updated the description of the PR, Thank you!" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? 1. Fixes https://github.com/huggingface/transformers/issues/22822 2. Should be merged after https://github.com/huggingface/accelerate/pull/1638 The fix in accelerate uses the `deepspeed.checkpoint.utils.clone_tensors_for_torch_save` which removes the bloated state_dict. In Trainer, we use `accelerator.get_state_dict` to get the resultant lean state dict when using DeepSpeed and the ZeRO stage is not 3. Also, a typo fix.
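For illustration, a hedged sketch of what the lean save amounts to under ZeRO-1/2, using the DeepSpeed utility the accelerate fix relies on; in the Trainer this goes through `accelerator.get_state_dict` rather than a direct call.

```python
import torch
from deepspeed.checkpoint.utils import clone_tensors_for_torch_save

def save_lean_checkpoint(model: torch.nn.Module, path: str) -> None:
    # clone_tensors_for_torch_save detaches and clones the tensors so the saved file
    # does not carry the bloated ZeRO-1/2 buffers along with the weights.
    state_dict = clone_tensors_for_torch_save(model.state_dict())
    torch.save(state_dict, path)
```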
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24489", "html_url": "https://github.com/huggingface/transformers/pull/24489", "diff_url": "https://github.com/huggingface/transformers/pull/24489.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24489.patch", "merged_at": 1687781738000 }
https://api.github.com/repos/huggingface/transformers/issues/24488
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24488/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24488/comments
https://api.github.com/repos/huggingface/transformers/issues/24488/events
https://github.com/huggingface/transformers/pull/24488
1,774,377,481
PR_kwDOCUB6oc5T5hte
24,488
[`InstructBlip`] Add accelerate support for instructblip
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you :) could you add an integration test?", "Hey @NielsRogge !\r\nLet's maybe add it together with: https://github.com/huggingface/transformers/pull/24490/files#r1242095062 so we should probably merge this first :D" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? As per the title, let's let users benefit from 8-bit / 4-bit loading of InstructBLIP models cc @amyeroberts @sgugger @NielsRogge all `accelerate` tests pass for this model As a side note, since InstructBLIP relies on Flan-T5 as the backbone for some models, it is important to add ```python _keep_in_fp32_modules = ["wo"] ``` to ensure inference stability in fp16 / int8 / fp4
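A hedged usage sketch of what this enables: loading an InstructBLIP checkpoint in 8-bit with automatic device placement (assumes bitsandbytes and accelerate are installed; the checkpoint id is one of the released Flan-T5-based models).

```python
import torch
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

model_id = "Salesforce/instructblip-flan-t5-xl"
processor = InstructBlipProcessor.from_pretrained(model_id)
model = InstructBlipForConditionalGeneration.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
    torch_dtype=torch.float16,  # the T5 "wo" layers stay in fp32 via _keep_in_fp32_modules
)
```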
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24488", "html_url": "https://github.com/huggingface/transformers/pull/24488", "diff_url": "https://github.com/huggingface/transformers/pull/24488.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24488.patch", "merged_at": 1687797388000 }
https://api.github.com/repos/huggingface/transformers/issues/24487
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24487/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24487/comments
https://api.github.com/repos/huggingface/transformers/issues/24487/events
https://github.com/huggingface/transformers/pull/24487
1,774,308,038
PR_kwDOCUB6oc5T5Svy
24,487
add missing alignment_heads to Whisper integration test
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
CONTRIBUTOR
null
## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24487/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24487", "html_url": "https://github.com/huggingface/transformers/pull/24487", "diff_url": "https://github.com/huggingface/transformers/pull/24487.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24487.patch", "merged_at": 1687773010000 }
https://api.github.com/repos/huggingface/transformers/issues/24486
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24486/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24486/comments
https://api.github.com/repos/huggingface/transformers/issues/24486/events
https://github.com/huggingface/transformers/pull/24486
1,774,278,629
PR_kwDOCUB6oc5T5MPh
24,486
Compute `dropout_probability` only in training mode
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Same issue as in #24483 (caused by #24434 ), but with a different fix. For core maintainers to decide which one is better. If we decide to go this way, I will do fix-copies.
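For illustration, a self-contained sketch of the LayerDrop pattern this adjusts: the random draw happens only in training mode, so evaluation/generation no longer consumes RNG state. The module below is a toy stand-in, not the actual model code.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, num_layers: int = 4, layerdrop: float = 0.1):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(8, 8) for _ in range(num_layers))
        self.layerdrop = layerdrop

    def forward(self, hidden_states):
        for layer in self.layers:
            # Previously the probability was drawn unconditionally; guarding on
            # self.training keeps seeded inference reproducible.
            if self.training and torch.rand([]) < self.layerdrop:
                continue  # skip this layer (LayerDrop)
            hidden_states = layer(hidden_states)
        return hidden_states
```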
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24486/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24486", "html_url": "https://github.com/huggingface/transformers/pull/24486", "diff_url": "https://github.com/huggingface/transformers/pull/24486.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24486.patch", "merged_at": 1687797407000 }
https://api.github.com/repos/huggingface/transformers/issues/24485
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24485/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24485/comments
https://api.github.com/repos/huggingface/transformers/issues/24485/events
https://github.com/huggingface/transformers/pull/24485
1,774,221,117
PR_kwDOCUB6oc5T4_1w
24,485
Fix poor past ci
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? <img width="326" alt="Screenshot 2023-06-26 102340" src="https://github.com/huggingface/transformers/assets/2521628/461069ba-637a-4ebf-bb2e-103bba81bbcc"> :face_with_spiral_eyes: Let's be a bit nice to torch 1.11 and 1.10 🙏 . (just a type issue: `(line 641) RuntimeError: expected scalar type float but found double` introduced in #24334)
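A hedged illustration of the kind of dtype mismatch involved (the exact failing op in the model is not shown here): NumPy scalars are float64, and torch 1.10/1.11 kernels are stricter about mixing double tensors with fp32 weights, raising "expected scalar type float but found double".

```python
import numpy as np
import torch

scale = np.random.uniform(0, 1)                   # numpy float64 scalar
bad = torch.tensor(scale)                         # becomes a float64 (double) tensor
good = torch.tensor(scale, dtype=torch.float32)   # explicit cast keeps everything fp32

weights = torch.randn(3, 3)                       # float32
print((weights * good).dtype)                     # torch.float32
```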
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24485/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24485", "html_url": "https://github.com/huggingface/transformers/pull/24485", "diff_url": "https://github.com/huggingface/transformers/pull/24485.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24485.patch", "merged_at": 1687868058000 }
https://api.github.com/repos/huggingface/transformers/issues/24484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24484/comments
https://api.github.com/repos/huggingface/transformers/issues/24484/events
https://github.com/huggingface/transformers/pull/24484
1,774,208,624
PR_kwDOCUB6oc5T49Kk
24,484
Update token_classification.md
{ "login": "condor-cp", "id": 40066676, "node_id": "MDQ6VXNlcjQwMDY2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/40066676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/condor-cp", "html_url": "https://github.com/condor-cp", "followers_url": "https://api.github.com/users/condor-cp/followers", "following_url": "https://api.github.com/users/condor-cp/following{/other_user}", "gists_url": "https://api.github.com/users/condor-cp/gists{/gist_id}", "starred_url": "https://api.github.com/users/condor-cp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/condor-cp/subscriptions", "organizations_url": "https://api.github.com/users/condor-cp/orgs", "repos_url": "https://api.github.com/users/condor-cp/repos", "events_url": "https://api.github.com/users/condor-cp/events{/privacy}", "received_events_url": "https://api.github.com/users/condor-cp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24484). All of your documentation changes will be reflected on that endpoint." ]
1,687
1,687
1,687
CONTRIBUTOR
null
Add a link to PyTorch's CrossEntropyLoss so that one understands why '-100' is ignored by the loss function. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
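For context, a minimal illustration of the behaviour the added link documents: -100 is the default `ignore_index` of `torch.nn.CrossEntropyLoss`, so labels set to -100 (special tokens, continuation sub-words) contribute nothing to the loss. The tensors below are made-up toy values.

```python
import torch

loss_fct = torch.nn.CrossEntropyLoss()       # ignore_index defaults to -100
logits = torch.randn(4, 7)                   # 4 tokens, 7 entity classes
labels = torch.tensor([2, -100, 5, -100])    # sub-word/special positions labelled -100
loss = loss_fct(logits, labels)              # positions with -100 are ignored
print(loss)
```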
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24484/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24484", "html_url": "https://github.com/huggingface/transformers/pull/24484", "diff_url": "https://github.com/huggingface/transformers/pull/24484.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24484.patch", "merged_at": 1687783359000 }
https://api.github.com/repos/huggingface/transformers/issues/24483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24483/comments
https://api.github.com/repos/huggingface/transformers/issues/24483/events
https://github.com/huggingface/transformers/pull/24483
1,774,177,374
PR_kwDOCUB6oc5T42QF
24,483
Fix `SpeechT5` doctests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, that would be a better fix I agree.", "Thanks, I will follow the same fix in #24486 instead." ]
1,687
1,688
1,687
COLLABORATOR
null
# What does this PR do? PR #24434 changes `np.random.uniform(0, 1)` to `torch.rand([])`. In the forward method of `SpeechT5ForSpeechToSpeech` and `SpeechT5ForTextToSpeech`, the line `dropout_probability = torch.rand([])` is executed regardless of whether we are in training or inference mode. So the change in #24434 alters the random sequence even if we set a seed at the beginning of `generate`, and we now get different outputs. Hence this PR updates the expected values for the doctests. **However, I believe we should only call `dropout_probability = torch.rand([])` when we are in training mode.** WDYT?
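A minimal sketch of the guard suggested above (hypothetical, not the code that was actually merged): sample the LayerDrop probability only in training mode, so inference never consumes the global RNG and seeded `generate` calls stay reproducible.

```python
import torch

def should_skip_layer(module: torch.nn.Module, layerdrop: float) -> bool:
    """Decide whether to drop a layer for this forward pass (LayerDrop)."""
    if not module.training or layerdrop == 0.0:
        return False                      # inference: deterministic, RNG untouched
    dropout_probability = torch.rand([])  # draws one sample from the global RNG
    return bool(dropout_probability < layerdrop)
```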
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24483", "html_url": "https://github.com/huggingface/transformers/pull/24483", "diff_url": "https://github.com/huggingface/transformers/pull/24483.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24483.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24482/comments
https://api.github.com/repos/huggingface/transformers/issues/24482/events
https://github.com/huggingface/transformers/pull/24482
1,774,120,816
PR_kwDOCUB6oc5T4qBF
24,482
[Time-Series] Added blog-post to tips
{ "login": "elisim", "id": 17675462, "node_id": "MDQ6VXNlcjE3Njc1NDYy", "avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elisim", "html_url": "https://github.com/elisim", "followers_url": "https://api.github.com/users/elisim/followers", "following_url": "https://api.github.com/users/elisim/following{/other_user}", "gists_url": "https://api.github.com/users/elisim/gists{/gist_id}", "starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisim/subscriptions", "organizations_url": "https://api.github.com/users/elisim/orgs", "repos_url": "https://api.github.com/users/elisim/repos", "events_url": "https://api.github.com/users/elisim/events{/privacy}", "received_events_url": "https://api.github.com/users/elisim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thanks! LGTM 👍🏽 ", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks!" ]
1,687
1,688
1,688
CONTRIBUTOR
null
@kashif
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24482", "html_url": "https://github.com/huggingface/transformers/pull/24482", "diff_url": "https://github.com/huggingface/transformers/pull/24482.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24482.patch", "merged_at": 1688371645000 }
https://api.github.com/repos/huggingface/transformers/issues/24481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24481/comments
https://api.github.com/repos/huggingface/transformers/issues/24481/events
https://github.com/huggingface/transformers/pull/24481
1,774,084,608
PR_kwDOCUB6oc5T4iIg
24,481
[`T5`] Add T5ForQuestionAnswering and MT5ForQuestionAnswering
{ "login": "sjrl", "id": 10526848, "node_id": "MDQ6VXNlcjEwNTI2ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sjrl", "html_url": "https://github.com/sjrl", "followers_url": "https://api.github.com/users/sjrl/followers", "following_url": "https://api.github.com/users/sjrl/following{/other_user}", "gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}", "starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sjrl/subscriptions", "organizations_url": "https://api.github.com/users/sjrl/orgs", "repos_url": "https://api.github.com/users/sjrl/repos", "events_url": "https://api.github.com/users/sjrl/events{/privacy}", "received_events_url": "https://api.github.com/users/sjrl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @ArthurZucker thanks for the review! I do have a question about the check_repository_consistency. I noticed that the error I currently receive is (link [here](https://app.circleci.com/pipelines/github/huggingface/transformers/67114/workflows/38a1f7d2-a0ab-4c85-8af7-218d53de68fa/jobs/837499?invite=true#step-110-9))\r\n```bash\r\nTraceback (most recent call last):\r\n File \"utils/check_copies.py\", line 579, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"utils/check_copies.py\", line 269, in check_copies\r\n raise Exception(\r\nException: Found the following copy inconsistencies:\r\n- src/transformers/models/mt5/modeling_mt5.py: copy does not match models.t5.modeling_t5.T5PreTrainedModel at line 775\r\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them.\r\n```\r\nwhich would require the analogous implementation of `MT5ForQuestionAnswering`. Should I go ahead and add that implementation as well or is there another way to pass this error?", "I went ahead and added `MT5ForQuestionAnswering` as well. " ]
1,687
1,690
1,687
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This adds a question-answering head to the PyTorch implementation of T5, following the pattern of BartForQuestionAnswering since it is also an encoder-decoder question-answering model. This type of model has already been used in research papers (e.g. https://arxiv.org/pdf/2203.07522.pdf) and has shown promising results for using T5 for question answering using span prediction. Additionally, I have trained an uploaded a flan-t5-large for question answering [here](https://huggingface.co/sjrhuschlee/flan-t5-large-squad2) which has shown promising generalization results to other question-answering datasets (metrics are shown on the model card). I've updated the model tests to include the new model and I believe I found hopefully most of the additional imports and compatibility with the question-answering pipeline. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - Hey @ArthurZucker and @younesbelkada I would greatly appreciate a review on this when you have a chance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24481/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24481", "html_url": "https://github.com/huggingface/transformers/pull/24481", "diff_url": "https://github.com/huggingface/transformers/pull/24481.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24481.patch", "merged_at": 1687874826000 }
https://api.github.com/repos/huggingface/transformers/issues/24480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24480/comments
https://api.github.com/repos/huggingface/transformers/issues/24480/events
https://github.com/huggingface/transformers/issues/24480
1,773,964,901
I_kwDOCUB6oc5pvI5l
24,480
RoBERTa required token_type_ids issue
{ "login": "Sion1225", "id": 50553429, "node_id": "MDQ6VXNlcjUwNTUzNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/50553429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sion1225", "html_url": "https://github.com/Sion1225", "followers_url": "https://api.github.com/users/Sion1225/followers", "following_url": "https://api.github.com/users/Sion1225/following{/other_user}", "gists_url": "https://api.github.com/users/Sion1225/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sion1225/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sion1225/subscriptions", "organizations_url": "https://api.github.com/users/Sion1225/orgs", "repos_url": "https://api.github.com/users/Sion1225/repos", "events_url": "https://api.github.com/users/Sion1225/events{/privacy}", "received_events_url": "https://api.github.com/users/Sion1225/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It was solved.\r\ncode should be \r\n```python\r\ndef get_config(self):\r\n config = super().get_config()\r\n config.update({\r\n \"model_name\": self.model_name,\r\n \"Corr_layer_config\": self.Corr_layer_path # suppose Corr_layer_path is the variable that holds the path to Corr_layer\r\n })\r\n return config\r\n\r\n @classmethod\r\n def from_config(cls, config):\r\n model = cls(config[\"model_name\"])\r\n model.Corr_layer = tf.keras.models.load_model(config[\"Corr_layer_config\"])\r\n return model\r\n```\r\nsorry for interrupt. thank you! ", "Thanks for solving this yourself! 😉 " ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.0 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python class TF_RoBERTa_VAD_Classification(tf.keras.Model): def __init__(self, model_name): super(TF_RoBERTa_VAD_Classification, self).__init__() self.model_name = model_name self.roberta = TFRobertaModel.from_pretrained(model_name, from_pt=True, return_dict=False) self.predict_V_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_V_1") # Initializer function test self.predict_A_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_A_1") self.predict_D_1 = tf.keras.layers.Dense(1, kernel_initializer=tf.keras.initializers.TruncatedNormal(0.02), activation="linear", name="predict_D_1") # Learn Correlation Layers self.Corr_layer = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\FFNN_VAD_Model_ver1_MSE_00048_20230625-231002") # <<<<< Change the model def call(self, inputs): input_ids, attention_mask = inputs outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask) cls_token = outputs[1] self.V_1 = self.predict_V_1(cls_token) self.A_1 = self.predict_A_1(cls_token) self.D_1 = self.predict_D_1(cls_token) VAD_1 = tf.concat([self.V_1, self.A_1, self.D_1], 1) # 0: up-down 1: side final_outputs = self.Corr_layer(VAD_1) return final_outputs def get_config(self): config = super().get_config() config.update({ "model_name": self.model_name, "Corr_layer_config": self.Corr_layer.get_config() }) return config @classmethod def from_config(cls, config): return cls(**config) ``` This is my model with RoBERTa and I trained model and saved this. 
And I loaded model and when I tried to get predicted value, ```python # Load trained model custom_objects = {"model_name": TF_RoBERTa_VAD_Classification, "FFNN_VAD_model": FFNN_VAD_model} model = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\VAD_Assinging_RoBERTa_model_ver1.2_20230626-142030", custom_objects=custom_objects, compile=False) pred = model.predict((id, mask))[0][0] ``` ### Expected behavior the error is occurred ``` Traceback (most recent call last): File "c:\Users\Siwon\Documents\GitHub\Assinging_VAD_scores_BERT\Test_model.py", line 122, in <module> f.int32, name=None)}, None, None, None, None, None, None, None, None, None, None, None, None, False), {}) Second structure: type=tuple str=((TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids'), TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask'), None, None, None, None, None, None, None, None, None, None, None, False), {}) More specifically: Substructure "type=dict str={'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name=None), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name=None), 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name=None)}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids')" is not Entire first structure: (({'attention_mask': ., 'token_type_ids': ., 'input_ids': .}, ., ., ., ., ., ., ., ., ., ., ., ., .), {}) Entire second structure: ((., ., ., ., ., ., ., ., ., ., ., ., ., .), {}) ``` The error occurred `model = tf.keras.models.load_model("Assinging_VAD_scores_BERT\Model\VAD_Assinging_RoBERTa_model_ver1.2_20230626-155718", custom_objects=custom_objects, compile=False) ` this part But, This is looks like model requested token_type_ids even if RoBERTa model doesn't require token_type_ids. I humbly request, if anyone knows a solution, could you please inform me? I would be grateful for your assistance. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24479/comments
https://api.github.com/repos/huggingface/transformers/issues/24479/events
https://github.com/huggingface/transformers/issues/24479
1,773,937,296
I_kwDOCUB6oc5pvCKQ
24,479
Enhanced Parameter Freezing Capabilities in Trainer Class
{ "login": "Hiusam", "id": 43744525, "node_id": "MDQ6VXNlcjQzNzQ0NTI1", "avatar_url": "https://avatars.githubusercontent.com/u/43744525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hiusam", "html_url": "https://github.com/Hiusam", "followers_url": "https://api.github.com/users/Hiusam/followers", "following_url": "https://api.github.com/users/Hiusam/following{/other_user}", "gists_url": "https://api.github.com/users/Hiusam/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hiusam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hiusam/subscriptions", "organizations_url": "https://api.github.com/users/Hiusam/orgs", "repos_url": "https://api.github.com/users/Hiusam/repos", "events_url": "https://api.github.com/users/Hiusam/events{/privacy}", "received_events_url": "https://api.github.com/users/Hiusam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### Feature request The feature proposal aims to introduce a more streamlined and intuitive approach to freezing and unfreezing specific model components directly in the Trainer class of the Hugging Face transformers library. ### Motivation I've found that when I need to freeze certain parameters or components of my models, the process can be a bit complicated. Currently, I need to set requires_grad to False for the parameters I want to freeze before calling Trainer.train(). But since Trainer.train() calls model.train() before the training loop, some parameters (e.g., running mean and running var of BatchNorm layers) will still change during training. To get around this, I have to implement additional flags in my model and manually call model.eval() in the forward function for the parts of the model I want to freeze. It would be great if there was a more streamlined way to accomplish this directly in the Trainer class. Maybe an additional argument in the Trainer.train() method, or a method in the Trainer class to freeze/unfreeze specified layers or parameters. This could make fine-tuning models easier and more intuitive, particularly for new users or those with less experience in PyTorch. ### Your contribution I am glad to offer help.
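One way the workaround described in this request could look (module names are illustrative, and this is not an existing `Trainer` API): freeze the parameters of a sub-module and keep it in eval mode even after `Trainer` calls `model.train()`, so its BatchNorm running statistics stop updating.

```python
import torch

def freeze_module(module: torch.nn.Module) -> None:
    for param in module.parameters():
        param.requires_grad = False       # no gradient updates for these weights
    module.eval()                         # fixes BatchNorm running stats, disables dropout

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(3, 8, kernel_size=3),
            torch.nn.BatchNorm2d(8),
        )
        self.head = torch.nn.Linear(8, 2)

    def train(self, mode: bool = True):
        super().train(mode)
        self.backbone.eval()              # re-freeze after every model.train() call
        return self

model = MyModel()
freeze_module(model.backbone)
```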
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24479/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24479/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24478/comments
https://api.github.com/repos/huggingface/transformers/issues/24478/events
https://github.com/huggingface/transformers/issues/24478
1,773,908,552
I_kwDOCUB6oc5pu7JI
24,478
cannot import name 'OwlViTImageProcessor' from 'transformers'
{ "login": "mazhai", "id": 5811436, "node_id": "MDQ6VXNlcjU4MTE0MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/5811436?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mazhai", "html_url": "https://github.com/mazhai", "followers_url": "https://api.github.com/users/mazhai/followers", "following_url": "https://api.github.com/users/mazhai/following{/other_user}", "gists_url": "https://api.github.com/users/mazhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/mazhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mazhai/subscriptions", "organizations_url": "https://api.github.com/users/mazhai/orgs", "repos_url": "https://api.github.com/users/mazhai/repos", "events_url": "https://api.github.com/users/mazhai/events{/privacy}", "received_events_url": "https://api.github.com/users/mazhai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is definitely in the lbirary so I would suggest re-installing `transformers` and double-checking you are running your code in the right environment.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info transformer version:4.29.2 platform: macOS Ventura cpu: apple M2 Max python versin: 3.10 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection, OwlViTImageProcessor processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) ``` ### Expected behavior exception: ``` ImportError: cannot import name 'OwlViTImageProcessor' from 'transformers' (/Users/tt/opt/anaconda3/envs/test_transfromer/lib/python3.10/site-packages/transformers/__init__.py) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24478/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24477/comments
https://api.github.com/repos/huggingface/transformers/issues/24477/events
https://github.com/huggingface/transformers/pull/24477
1,773,871,047
PR_kwDOCUB6oc5T34bd
24,477
[`Umt5`] Add google's umt5 to `transformers`
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @ArthurZucker , thanks for updating this! As far as we can tell, it is not just mT5, because of joined/separate key-value in attention. Was this problem solved in latest conversion script of this PR :thinking: \r\n\r\n/cc @agemagician ", "The conversion went well, the outputs are still a bit gibberish but didn’t have problem of un matching shape. \r\nThey mentioned that the model is closer to MT5, which is why if we we can have minimal changes, it will look like this + adapted conversion ", "> The conversion went well, the outputs are still a bit gibberish but didn’t have problem of un matching shape. They mentioned that the model is closer to MT5, which is why if we we can have minimal changes, it will look like this + adapted conversion\r\n\r\nSo far, I can see you made similar changes as we did before, which led to gibberish output.\r\nThe one addition change you did is allowing fall back to byte for the tokenizer.\r\n\r\nI belive the issue still exist because of the way we reshape and convert the q,k and v for the attention as @stefan-it mentioned.", "There is also a different logic for `postion_bias` which seems to be missing. \r\nJoint 3D matrix vs what we have now can be linked to the new sharding scheme, It probably the last thing to check. \r\n\r\n", "Regarding the split / merge, I don't really see a problem with the code. The checkpoints are split, and the actual code is similar to mt5 with the difference being `scanning` I believe. However feel free to check and I hope we can get coherent outputs! ", "Update, the outputs match 🔥 The issue was : the tokenizer", "@ArthurZucker Awesome news! I will check downstream performance as well soon :hugs: ", "Wait wait 😅 I have to update and push the tokenizers 😉 ", "> Update, the outputs match 🔥 The issue was : the tokenizer\r\n\r\n\"The outputs match\" Do you mean you have tested both the original t5x inference pipeline against the converted transformer Pytorch version ?", "Yes @agemagician. If you read the PR description there’s a link to the reproducing script for generating with the t5x repo", "> Yes @agemagician. If you read the PR description there’s a link to the reproducing script for generating with the t5x repo\r\n\r\nAwesome work :)", "Currently setting up an instance ton convert an upload the `xxl` model, other models are available [here](https://huggingface.co/models?search=umt5)" ]
1,687
1,688
1,688
COLLABORATOR
null
# What does this PR do? Supersedes #22626, which has been stale for quite some time. A Kaggle notebook for reproducing and running the original model: https://www.kaggle.com/arthurzucker/umt5-inference - Tokenizer is a BertGenerationTokenizer. Here is how to convert it: ```python !wget https://storage.googleapis.com/t5-data/vocabs/umt5.256000/sentencepiece.model from transformers import T5Tokenizer tokenizer = T5Tokenizer("/Users/arthurzucker/Work/transformers/sentencepiece.model") to_add = [] for i in range(300): to_add.append(f"<extra_id_{i}>") tokenizer.add_tokens(list(reversed(to_add)), True) ``` 84 tokens are free to use apparently. - The modeling code is just MT5, adapted to have a relative bias that is not shared across all layers. For a first conversion I'll be using this: ```bash python src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path "/Users/arthurzucker/Work/transformers/checkpoint_1000000" --config_file "/Users/arthurzucker/Work/transformers/checkpoint_1000000/config.json" --pytorch_dump_path ./ArthurZ --scalable_attention ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24477", "html_url": "https://github.com/huggingface/transformers/pull/24477", "diff_url": "https://github.com/huggingface/transformers/pull/24477.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24477.patch", "merged_at": 1688362702000 }
https://api.github.com/repos/huggingface/transformers/issues/24476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24476/comments
https://api.github.com/repos/huggingface/transformers/issues/24476/events
https://github.com/huggingface/transformers/pull/24476
1,773,763,792
PR_kwDOCUB6oc5T3jyH
24,476
[`WhisperTokenizer`] Allow encoding timestamp tokens
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sanchit-gandhi ", "In order to keep backward compatibility / follow the original behaviour, I'll add a `encode_special_token` to whisper tokenizer. Not sure we can have 100% backward on this, because all specials tokens will be affected. ", "Closing this as #25081 adds `split_special_tokens` and the timestamp tokens will be manually added! ", "Just to clarify - we'll only need to update the tokenizer vocabs on the Hub following #25081?", "yes! ", "Cool! Happy to open the Hub PRs!\r\n\r\nJust to clarify, it looks like the slow tokenizer still doesn't _quite_ give the expected behaviour when new special tokens are added:\r\n```python\r\nfrom transformers import WhisperTokenizer, AddedToken\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-tiny\")\r\n\r\ntimestamps = [AddedToken(\"<|%.2f|>\" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)]\r\ntokenizer.add_tokens(timestamps)\r\n\r\nprint(tokenizer.decode(tokenizer(\"<|0.00|> But like mobile phones have screens and they're cheap.<|2.60|>\", split_special_tokens=False).input_ids))\r\n```\r\n**Print Output:**\r\n```\r\n\"<|startoftranscript|><|notimestamps|><|0.00|>But like mobile phones have screens and they're cheap.<|2.60|><|endoftext|>\"\r\n```\r\n\r\n=> we loose the space between a special token and the adjacent token, e.g. `<|0.00|> But` goes to `<|0.00|>But`\r\n", "Yep, will be fixed by #23909 😉 ", "Cool! And the inconsistency between the slow and fast tokenizer too? Is this related to the add tokens?\r\n```python\r\nfrom transformers import WhisperTokenizer, WhisperTokenizerFast\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(f\"openai/whisper-tiny\")\r\ntokenizer_fast = WhisperTokenizerFast.from_pretrained(f\"openai/whisper-tiny\")\r\n\r\nprint(tokenizer.encode(\"<|0.00|> hey\"))\r\nprint(tokenizer_fast.encode(\"<|0.00|> hey\"))\r\n```\r\n**Print Output:**\r\n```\r\n[50258, 50363, 50364, 17230, 50257]\r\n[50258, 50363, 50364, 4177, 50257]\r\n```", "Yep, when you add the token add them as `AddedToken` with rstrip = True and `lstrip=True` if you want the same behaviour " ]
1,687
1,694
1,690
COLLABORATOR
null
# What does this PR do? Adresses #20225. Openai recently changed their tokenizer to allow encoding timestamp tokens as is (instead of splitting them). This is a breaking change because you can't encode them by splitting anymore, it will fail with the following error: ```ptyhon ValueError: Encountered text corresponding to disallowed special token '<|7.86|>'. If you want this text to be encoded as a special token, pass it to `allowed_special`, e.g. `allowed_special={'<|7.86|>', ...}`. If you want this text to be encoded as normal text, disable the check for this token by passing `disallowed_special=(enc.special_tokens_set - {'<|7.86|>'})`. To disable this check for all special tokens, pass `disallowed_special=()`. ``` This PR will have to wait before being merge. This is because the models on the hub need to be updated first otherwise the tests will be red. Moreover, `add_tokens` has to be fixed before that! Snipper showing why: ```python from transformers import WhisperTokenizer, WhisperTokenizerFast, AddedToken timestamps = [AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)] from whisper.tokenizer import get_tokenizer openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe") model_path =f"openai/whisper-tiny" slow = WhisperTokenizer.from_pretrained(model_path) fast = WhisperTokenizerFast.from_pretrained(model_path) slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) slow.add_tokens(timestamps) fast.add_tokens(timestamps) ``` The output from slow and fast is different. Fast matches the original implementation (not stripping spaces on the rigth and left) while slow does not. ```python >>> openai_tok.encode("<|7.86|> Hey", allowed_special=set(openai_tok.special_tokens.keys())) [50757, 1911] >>> fast.encode('<|7.86|> Hey', add_special_tokens = False) [50757, 1911] >>> slow.encode('<|7.86|> Hey', add_special_tokens = False) [50757, 7057] ``` script to update all models : ```python from transformers import WhisperTokenizer, WhisperTokenizerFast, AddedToken timestamps = [AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)] models_ids = ["tiny","small","medium","base","large"] from whisper.tokenizer import get_tokenizer openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe") openai_tok.encode("<|1.00|>", allowed_special=set(openai_tok.special_tokens.keys())) for id in models_ids: model_path =f"openai/whisper-{id}" slow = WhisperTokenizer.from_pretrained(model_path) fast = WhisperTokenizerFast.from_pretrained(model_path) slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) slow.add_tokens(timestamps) fast.add_tokens(timestamps) slow.push_to_hub(model_path, create_pr = True) fast.push_to_hub(model_path, create_pr = True) if id == "large": exit(0) model_path += '.en' slow = WhisperTokenizer.from_pretrained(model_path) fast = WhisperTokenizerFast.from_pretrained(model_path) slow.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) fast.bos_token = AddedToken(slow.eos_token, lstrip=False, rstrip=False) slow.add_tokens(timestamps) fast.add_tokens(timestamps) slow.push_to_hub(model_path, create_pr = True) fast.push_to_hub(model_path, create_pr = True) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24476/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24476", "html_url": "https://github.com/huggingface/transformers/pull/24476", "diff_url": "https://github.com/huggingface/transformers/pull/24476.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24476.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24475/comments
https://api.github.com/repos/huggingface/transformers/issues/24475/events
https://github.com/huggingface/transformers/issues/24475
1,773,727,595
I_kwDOCUB6oc5puO9r
24,475
Does model.generate support batch_size > 1?
{ "login": "liuchengyuan123", "id": 34617968, "node_id": "MDQ6VXNlcjM0NjE3OTY4", "avatar_url": "https://avatars.githubusercontent.com/u/34617968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liuchengyuan123", "html_url": "https://github.com/liuchengyuan123", "followers_url": "https://api.github.com/users/liuchengyuan123/followers", "following_url": "https://api.github.com/users/liuchengyuan123/following{/other_user}", "gists_url": "https://api.github.com/users/liuchengyuan123/gists{/gist_id}", "starred_url": "https://api.github.com/users/liuchengyuan123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liuchengyuan123/subscriptions", "organizations_url": "https://api.github.com/users/liuchengyuan123/orgs", "repos_url": "https://api.github.com/users/liuchengyuan123/repos", "events_url": "https://api.github.com/users/liuchengyuan123/events{/privacy}", "received_events_url": "https://api.github.com/users/liuchengyuan123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, I'm specifically adding docs for it in #24432 ", "FYI this is the script you can use for batched generation:\r\n```\r\nfrom transformers import LlamaTokenizer, AutoModelForCausalLM\r\nimport torch\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(\"openlm-research/open_llama_3b\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"openlm-research/open_llama_3b\", torch_dtype=torch.float16, device_map=\"auto\")\r\n\r\ntokenizer.padding_side = \"left\"\r\n\r\n# Define PAD Token = EOS Token\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.config.pad_token_id = model.config.eos_token_id\r\n\r\n# use different length sentences to test batching\r\nsentences = [\r\n \"Hello, my dog is a little\",\r\n \"Today, I\",\r\n ]\r\n\r\ninputs = tokenizer(sentences, return_tensors=\"pt\", padding=True).to(model.device)\r\n\r\noutput_sequences = model.generate(**inputs, max_new_tokens=20)\r\n\r\nprint(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))\r\n```", "> FYI this is the script you can use for batched generation:\r\n> \r\n> ```\r\n> from transformers import LlamaTokenizer, AutoModelForCausalLM\r\n> import torch\r\n> \r\n> tokenizer = LlamaTokenizer.from_pretrained(\"openlm-research/open_llama_3b\")\r\n> model = AutoModelForCausalLM.from_pretrained(\"openlm-research/open_llama_3b\", torch_dtype=torch.float16, device_map=\"auto\")\r\n> \r\n> tokenizer.padding_side = \"left\"\r\n> \r\n> # Define PAD Token = EOS Token\r\n> tokenizer.pad_token = tokenizer.eos_token\r\n> model.config.pad_token_id = model.config.eos_token_id\r\n> \r\n> # use different length sentences to test batching\r\n> sentences = [\r\n> \"Hello, my dog is a little\",\r\n> \"Today, I\",\r\n> ]\r\n> \r\n> inputs = tokenizer(sentences, return_tensors=\"pt\", padding=True).to(model.device)\r\n> \r\n> output_sequences = model.generate(**inputs, max_new_tokens=20)\r\n> \r\n> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))\r\n> ```\r\n\r\n@NielsRogge hi, just curious about what `Define PAD Token = EOS Token` is for ?", "By default, the padding token is not set in the tokenizer's config: https://huggingface.co/openlm-research/open_llama_3b/blob/main/tokenizer_config.json. So when you would pad you would get the following error:\r\n\r\n```\r\nValueError: Asking to pad but the tokenizer does not have a padding token. 
Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n```", "> \r\n\r\n> FYI this is the script you can use for batched generation:\r\n> \r\n> ```\r\n> from transformers import LlamaTokenizer, AutoModelForCausalLM\r\n> import torch\r\n> \r\n> tokenizer = LlamaTokenizer.from_pretrained(\"openlm-research/open_llama_3b\")\r\n> model = AutoModelForCausalLM.from_pretrained(\"openlm-research/open_llama_3b\", torch_dtype=torch.float16, device_map=\"auto\")\r\n> \r\n> tokenizer.padding_side = \"left\"\r\n> \r\n> # Define PAD Token = EOS Token\r\n> tokenizer.pad_token = tokenizer.eos_token\r\n> model.config.pad_token_id = model.config.eos_token_id\r\n> \r\n> # use different length sentences to test batching\r\n> sentences = [\r\n> \"Hello, my dog is a little\",\r\n> \"Today, I\",\r\n> ]\r\n> \r\n> inputs = tokenizer(sentences, return_tensors=\"pt\", padding=True).to(model.device)\r\n> \r\n> output_sequences = model.generate(**inputs, max_new_tokens=20)\r\n> \r\n> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))\r\n> ```\r\n\r\nThis also works fine for other 13b models based on Llama, Thanks a lot.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info Does the function `model.generate` supports the case when batch size of `input_ids` > 1? It is required especially for evaluation! The following bugs are reported when I call `model.generate` to generate 2 or more `input_ids`, where model is a`LlamaForCausalLM`: ``` RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = LlamaForCausalLM.from_pretrained('./chinese-alpaca-7b-merged', device_map="auto").half().cuda() request_text = ["xxx", "yy"] input_ids = tokenizer(request_text, return_tensors='pt', padding="longest", truncation=True, max_length=1024) response = model.generate(input_ids=input_ids.input_ids.cuda(), max_new_tokens=1024, temperature=1,top_k=40,top_p=0.9,repetition_penalty=1.15) ### Expected behavior ``` RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24475/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24474
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24474/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24474/comments
https://api.github.com/repos/huggingface/transformers/issues/24474/events
https://github.com/huggingface/transformers/issues/24474
1,773,626,704
I_kwDOCUB6oc5pt2VQ
24,474
VideoMAE pretraining error when customizing compute_metrics
{ "login": "gindij", "id": 6021161, "node_id": "MDQ6VXNlcjYwMjExNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/6021161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gindij", "html_url": "https://github.com/gindij", "followers_url": "https://api.github.com/users/gindij/followers", "following_url": "https://api.github.com/users/gindij/following{/other_user}", "gists_url": "https://api.github.com/users/gindij/gists{/gist_id}", "starred_url": "https://api.github.com/users/gindij/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gindij/subscriptions", "organizations_url": "https://api.github.com/users/gindij/orgs", "repos_url": "https://api.github.com/users/gindij/repos", "events_url": "https://api.github.com/users/gindij/events{/privacy}", "received_events_url": "https://api.github.com/users/gindij/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @gindij, \r\n\r\nFor us help debug the issue, it's necessary to be able to reproduce the error on our side. At the moment this isn't possible without knowing `_data_collator`, `_compute_metrics_videomae` or the dataset. Could you share a minimal reproducer please? ", "@amyeroberts thanks for the quick reply!\r\n\r\nHere is a minimal reproducer:\r\n```py\r\nfrom typing import List\r\n\r\nimport datasets\r\nimport torch\r\nfrom transformers import (\r\n EvalPrediction,\r\n PretrainedConfig,\r\n Trainer,\r\n TrainingArguments,\r\n VideoMAEConfig,\r\n VideoMAEForPreTraining,\r\n)\r\n\r\n\r\ndef compute_image_mask(model_config: PretrainedConfig) -> torch.tensor:\r\n num_patches_per_frame = (model_config.image_size // model_config.patch_size) ** 2\r\n seq_length = (model_config.num_frames // model_config.tubelet_size) * num_patches_per_frame\r\n p = torch.ones((1, seq_length)) * (1 - model_config.masking_ratio)\r\n return torch.bernoulli(p).bool()\r\n\r\n\r\ndef data_collator(\r\n batch: List[dict],\r\n model_config: PretrainedConfig = None,\r\n) -> dict:\r\n padded_videos = [torch.Tensor(item[\"video\"]) for item in batch]\r\n padded_videos = torch.stack(padded_videos)\r\n mask = compute_image_mask(model_config)\r\n mask = mask.repeat((padded_videos.shape[0], 1))\r\n return {\r\n \"pixel_values\": padded_videos,\r\n \"bool_masked_pos\": mask,\r\n }\r\n\r\n\r\ndef compute_metrics_videomae(eval_pred: EvalPrediction) -> dict:\r\n pass\r\n\r\n\r\nif __name__ == \"__main__\":\r\n config = VideoMAEConfig.from_pretrained(\"MCG-NJU/videomae-base\")\r\n config.num_frames = 32\r\n config.masking_ratio = 0.9\r\n config.tubelet_size = 4\r\n config.patch_size = 32\r\n videomae = VideoMAEForPreTraining(config=config)\r\n\r\n train_dataset = {\"video\": [torch.rand((32, 3, 224, 224)) for _ in range(8)]}\r\n eval_dataset = {\"video\": [torch.rand((32, 3, 224, 224)) for _ in range(8)]}\r\n\r\n train_dataset = datasets.Dataset.from_dict(train_dataset)\r\n eval_dataset = datasets.Dataset.from_dict(eval_dataset)\r\n\r\n training_arguments = TrainingArguments(\r\n output_dir=\"./checkpts\",\r\n per_device_eval_batch_size=8,\r\n per_device_train_batch_size=8,\r\n remove_unused_columns=False,\r\n evaluation_strategy=\"epoch\",\r\n num_train_epochs=1,\r\n )\r\n\r\n trainer = Trainer(\r\n model=videomae,\r\n args=training_arguments,\r\n optimizers=(torch.optim.AdamW(videomae.parameters()), None),\r\n data_collator=lambda x: data_collator(x, config),\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n compute_metrics=compute_metrics_videomae, # <--- comment this line for it to work\r\n )\r\n\r\n trainer.train()\r\n```", "@amyeroberts, do you know of any updates on this issue?", "@gindij In the example script, does `compute_metrics_videomae` intentionally return `None`? This is the cause of the error at the moment, as `Trainer` expects `compute_metrics` to be a callable which returns a dictionary. \r\n\r\n`eval_loss` isn't returned in the 'normal' loop because training an MAE pretraining model is a special case, as there are no labels passed in and it doesn't have a `return_loss` flag in its forward method. The simplest way to get what you want is to use your own custom training loop e.g. [similar to this one](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim_no_trainer.py).", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: macOS-13.4.1-arm64-i386-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to pre-train a VideoMAE model with a custom set of videos, and have found that by default, the evaluation loss is not reported after each epoch. When I try to run this code ```py config = VideoMAEConfig.from_pretrained("MCG-NJU/videomae-base") videomae = VideoMAEForPreTraining(config=config) # <load datasets> trainer = Trainer( model=videomae, args=training_arguments, optimizers=(torch.optim.AdamW(videomae.parameters()), None), data_collator=lambda x: _data_collator(x, videos_only=True, model_config=config), train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=_compute_metrics_videomae, ) trainer.train() ``` I get this error message ``` raceback (most recent call last): | 0/1 [00:00<?, ?it/s] File "/Users/jackgindi/Projects/echo-gpt/runtask.py", line 220, in <module> pretrain_videomae( File "/Users/jackgindi/Projects/echo-gpt/training.py", line 261, in pretrain_videomae trainer.train() File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 2034, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 2300, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 3029, in evaluate output = eval_loop( File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer.py", line 3305, in evaluation_loop all_preds = nested_truncate(all_preds, num_samples) File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 357, in nested_truncate return type(tensors)(nested_truncate(t, limit) for t in tensors) File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 357, in <genexpr> return type(tensors)(nested_truncate(t, limit) for t in tensors) File "/Users/jackgindi/miniconda3/envs/echo-gpt/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 361, in nested_truncate return tensors[:limit] IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed ``` If I run the same code without the `compute_metrics=...` set in the trainer, I don't encounter an error because `nested_truncate` is not called. The issue does not seem to with my my compute_metrics function, since the error occurs even before entering it. 
Is this just a current limitation of the VideoMAEForPreTraining model, or is there a way around this? ### Expected behavior Pass a custom `compute_metrics` function to the `Trainer` to see the evaluation loss of `VideoMAEForPreTraining` after each epoch without error.
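As noted in the maintainer replies above, `Trainer` expects `compute_metrics` to be a callable that returns a dictionary rather than `None`. A minimal sketch of such a callable might look like the following (the metric names are illustrative, and this does not add an `eval_loss` for pretraining, which has no labels):

```python
import numpy as np
from transformers import EvalPrediction

def compute_metrics_videomae(eval_pred: EvalPrediction) -> dict:
    # Trainer expects a dict of named float metrics, not None.
    preds = eval_pred.predictions
    if isinstance(preds, tuple):
        preds = preds[0]
    return {"pred_mean": float(np.mean(preds)), "pred_std": float(np.std(preds))}
```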
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24474/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24473
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24473/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24473/comments
https://api.github.com/repos/huggingface/transformers/issues/24473/events
https://github.com/huggingface/transformers/issues/24473
1,773,624,603
I_kwDOCUB6oc5pt10b
24,473
Stuck on tokenization before training when using 3 GPUs, but not when using 2 GPUs
{ "login": "higopires", "id": 66256549, "node_id": "MDQ6VXNlcjY2MjU2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/66256549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/higopires", "html_url": "https://github.com/higopires", "followers_url": "https://api.github.com/users/higopires/followers", "following_url": "https://api.github.com/users/higopires/following{/other_user}", "gists_url": "https://api.github.com/users/higopires/gists{/gist_id}", "starred_url": "https://api.github.com/users/higopires/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/higopires/subscriptions", "organizations_url": "https://api.github.com/users/higopires/orgs", "repos_url": "https://api.github.com/users/higopires/repos", "events_url": "https://api.github.com/users/higopires/events{/privacy}", "received_events_url": "https://api.github.com/users/higopires/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil you know more about me potential problems here (I remember a flag for tokenizer parallelism, might need to be set)", "This is very odd, since `tokenizers` doesn't use the GPU at all.\r\n\r\nYou could try using `TOKENIZERS_PARALLELISM=0 CUDA_VISIBLE_DEVICE....` to disable the parallelism in `tokenizers` itself.\r\nThere are ways to trigger a deadlock with using multithreading/processing with `tokenizers` from Python, but most of those should be catched.\r\nNote that this will slow down considerably the tokenizer training (it might already be what's occurring) since you're now only using 1 core instead of all the CPU.\r\n\r\nAnd most importantly, the GPU settings shouldn't have any impact, so it looks like a bug in `run_mlm.py` parallelization strategy, or something wrong in the hardware.\r\n\r\nIs it possible to isolate the `tokenizers` training from the rest of the code to sanity check things and see where the deadlock is coming from ?", "> This is very odd, since `tokenizers` doesn't use the GPU at all.\r\n\r\nMy bad. That's `nvidia-smi` with the training with the 2-GPU config already running. My intent with this was to show my hardware configuration and CUDA version.\r\n\r\n> You could try using `TOKENIZERS_PARALLELISM=0 CUDA_VISIBLE_DEVICE....` to disable the parallelism in tokenizers itself.\r\n\r\nGonna try it right now.\r\n\r\n> Is it possible to isolate the `tokenizers` training from the rest of the code to sanity check things and see where the deadlock is coming from ?\r\n\r\nI'm using a tokenizer that I trained beforehand (`merges.txt` and `vocab.json` files), so seems to me that the process is already isolated, isn't?", "Then it should load instantly and not even retrain a tokenizer, no ?\r\n\r\nI'm not sure the message you shared is the cause of your issue (the warning is probably there, but it's just a hint that there's a faster way to encode data, not necessarily that this is what is making your process stuck.", "> Gonna try it right now.\r\n\r\n\r\nJust did the process and came back here after a while: same issue:\r\n\r\n```\r\n[INFO|trainer.py:1680] 2023-06-26 13:43:56,492 >> ***** Running training *****\r\n[INFO|trainer.py:1681] 2023-06-26 13:43:56,492 >> Num examples = 2,353,535\r\n[INFO|trainer.py:1682] 2023-06-26 13:43:56,492 >> Num Epochs = 40\r\n[INFO|trainer.py:1683] 2023-06-26 13:43:56,492 >> Instantaneous batch size per device = 192\r\n[INFO|trainer.py:1684] 2023-06-26 13:43:56,492 >> Total train batch size (w. parallel, distributed & accumulation) = 768\r\n[INFO|trainer.py:1685] 2023-06-26 13:43:56,493 >> Gradient Accumulation steps = 4\r\n[INFO|trainer.py:1686] 2023-06-26 13:43:56,493 >> Total optimization steps = 122,560\r\n[INFO|trainer.py:1687] 2023-06-26 13:43:56,493 >> Number of trainable parameters = 82,170,969\r\n[INFO|integrations.py:727] 2023-06-26 13:43:56,493 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: <USER>. Use `wandb login --relogin` to force relogin\r\nwandb: Tracking run with wandb version 0.15.4\r\nwandb: Run data is saved locally in /cfs/home/u021274/higo/wandb/run-20230626_134359-d7jhdqpd\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run fluent-forest-46\r\nwandb: ⭐️ View project at <URL>\r\nwandb: 🚀 View run at <URL>\r\n\r\n 0%| | \r\n\r\n0/122560 [00:00<?, ?it/s][WARNING|logging.py:280] 2023-06-26 13:44:08,940 >> You're using a RobertaTokenizerFast tokenizer. 
Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n```\r\n\r\n`nvidia-smi` returns the following:\r\n\r\n```\r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 |\r\n| N/A 37C P0 71W / 300W| 1885MiB / 81920MiB | 100% Default |\r\n| | | Disabled |\r\n+-----------------------------------------+----------------------+----------------------+\r\n| 1 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 |\r\n| N/A 39C P0 69W / 300W| 1863MiB / 81920MiB | 100% Default |\r\n| | | Disabled |\r\n+-----------------------------------------+----------------------+----------------------+\r\n| 2 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 |\r\n| N/A 43C P0 71W / 300W| 1863MiB / 81920MiB | 100% Default |\r\n| | | Disabled |\r\n+-----------------------------------------+----------------------+----------------------+\r\n \r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| 0 N/A N/A 62822 C python 1882MiB |\r\n| 1 N/A N/A 62822 C python 1860MiB |\r\n| 2 N/A N/A 62822 C python 1860MiB |\r\n+---------------------------------------------------------------------------------------+\r\n```\r\n\r\nSeems that's not the tokenization, because the GPU is (barely) used, but the message that I'm stucked remains the same.", "I would try putting a debugger in your session, and iterate step by step to figure out where the script hangs.", "```\r\n> /cfs/home/u021274/higo/run_mlm.py(234)main()\r\n-> parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(235)main()\r\n-> if len(sys.argv) == 2 and sys.argv[1].endswith(\".json\"):\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(240)main()\r\n-> model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(244)main()\r\n-> send_example_telemetry(\"run_mlm\", model_args, data_args)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(247)main()\r\n-> logging.basicConfig(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(248)main()\r\n-> format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(249)main()\r\n-> datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(250)main()\r\n-> handlers=[logging.StreamHandler(sys.stdout)],\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(247)main()\r\n-> logging.basicConfig(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(253)main()\r\n-> if training_args.should_log:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(255)main()\r\n-> transformers.utils.logging.set_verbosity_info()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(257)main()\r\n-> 
log_level = training_args.get_process_log_level()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(258)main()\r\n-> logger.setLevel(log_level)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(259)main()\r\n-> datasets.utils.logging.set_verbosity(log_level)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(260)main()\r\n-> transformers.utils.logging.set_verbosity(log_level)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(261)main()\r\n-> transformers.utils.logging.enable_default_handler()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(262)main()\r\n-> transformers.utils.logging.enable_explicit_format()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(265)main()\r\n-> logger.warning(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(266)main()\r\n-> f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(267)main()\r\n-> + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(266)main()\r\n-> f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(265)main()\r\n-> logger.warning(\r\n(Pdb) n\r\n06/26/2023 19:45:08 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 3distributed training: True, 16-bits training: True\r\n> /cfs/home/u021274/higo/run_mlm.py(270)main()\r\n-> logger.info(f\"Training/evaluation parameters {training_args}\")\r\n(Pdb) n\r\n06/26/2023 19:45:09 - INFO - __main__ - Training/evaluation parameters TrainingArguments(\r\n_n_gpu=3,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_broadcast_buffers=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=no,\r\nfp16=True,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': 
False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=4,\r\ngradient_checkpointing=False,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=0,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=MyModel/runs/Jun26_19-44-10_g07,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=500,\r\nlogging_strategy=steps,\r\nlr_scheduler_type=linear,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=40.0,\r\noptim=adamw_hf,\r\noptim_args=None,\r\noutput_dir=MyModel,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=64,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['wandb'],\r\nresume_from_checkpoint=None,\r\nrun_name=MyModel,\r\nsave_on_each_node=False,\r\nsave_safetensors=False,\r\nsave_steps=500,\r\nsave_strategy=steps,\r\nsave_total_limit=1,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\nxpu_backend=None,\r\n)\r\n> /cfs/home/u021274/higo/run_mlm.py(273)main()\r\n-> last_checkpoint = None\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(274)main()\r\n-> if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(288)main()\r\n-> set_seed(training_args.seed)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(299)main()\r\n-> if data_args.dataset_name is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(326)main()\r\n-> data_files = {}\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(327)main()\r\n-> if data_args.train_file is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(328)main()\r\n-> data_files[\"train\"] = data_args.train_file\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(329)main()\r\n-> extension = data_args.train_file.split(\".\")[-1]\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(330)main()\r\n-> if data_args.validation_file is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(333)main()\r\n-> if extension == \"txt\":\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(334)main()\r\n-> extension = \"text\"\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(335)main()\r\n-> raw_datasets = load_dataset(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(336)main()\r\n-> extension,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(337)main()\r\n-> data_files=data_files,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(338)main()\r\n-> cache_dir=model_args.cache_dir,\r\n(Pdb) n\r\n> 
/cfs/home/u021274/higo/run_mlm.py(339)main()\r\n-> use_auth_token=True if model_args.use_auth_token else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(335)main()\r\n-> raw_datasets = load_dataset(\r\n(Pdb) n\r\n06/26/2023 19:45:33 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743\r\n06/26/2023 19:45:33 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text\r\n06/26/2023 19:45:33 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.\r\n06/26/2023 19:45:33 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n06/26/2023 19:45:34 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)\r\n06/26/2023 19:45:34 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 16.00it/s]\r\n> /cfs/home/u021274/higo/run_mlm.py(343)main()\r\n-> if \"validation\" not in raw_datasets.keys():\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(344)main()\r\n-> raw_datasets[\"validation\"] = load_dataset(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(345)main()\r\n-> extension,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(346)main()\r\n-> data_files=data_files,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(347)main()\r\n-> split=f\"train[:{data_args.validation_split_percentage}%]\",\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(348)main()\r\n-> cache_dir=model_args.cache_dir,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(349)main()\r\n-> use_auth_token=True if model_args.use_auth_token else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(344)main()\r\n-> raw_datasets[\"validation\"] = load_dataset(\r\n(Pdb) n\r\n06/26/2023 19:45:52 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743\r\n06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text\r\n06/26/2023 19:45:52 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.\r\n06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n06/26/2023 19:45:52 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)\r\n06/26/2023 19:45:52 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n> /cfs/home/u021274/higo/run_mlm.py(351)main()\r\n-> raw_datasets[\"train\"] = load_dataset(\r\n(Pdb) n\r\n> 
/cfs/home/u021274/higo/run_mlm.py(352)main()\r\n-> extension,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(353)main()\r\n-> data_files=data_files,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(354)main()\r\n-> split=f\"train[{data_args.validation_split_percentage}%:]\",\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(355)main()\r\n-> cache_dir=model_args.cache_dir,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(356)main()\r\n-> use_auth_token=True if model_args.use_auth_token else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(351)main()\r\n-> raw_datasets[\"train\"] = load_dataset(\r\n(Pdb) n\r\n06/26/2023 19:46:02 - INFO - datasets.builder - Using custom data configuration default-2df3a67ae9ac7743\r\n06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset Infos from /cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/datasets/packaged_modules/text\r\n06/26/2023 19:46:02 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.\r\n06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n06/26/2023 19:46:02 - WARNING - datasets.builder - Found cached dataset text (/cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2)\r\n06/26/2023 19:46:02 - INFO - datasets.info - Loading Dataset info from /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2\r\n> /cfs/home/u021274/higo/run_mlm.py(368)main()\r\n-> \"cache_dir\": model_args.cache_dir,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(369)main()\r\n-> \"revision\": model_args.model_revision,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(370)main()\r\n-> \"use_auth_token\": True if model_args.use_auth_token else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(367)main()\r\n-> config_kwargs = {\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(372)main()\r\n-> if model_args.config_name:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(374)main()\r\n-> elif model_args.model_name_or_path:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(377)main()\r\n-> config = CONFIG_MAPPING[model_args.model_type]()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(378)main()\r\n-> logger.warning(\"You are instantiating a new config instance from scratch.\")\r\n(Pdb) n\r\n06/26/2023 19:46:14 - WARNING - __main__ - You are instantiating a new config instance from scratch.\r\n> /cfs/home/u021274/higo/run_mlm.py(379)main()\r\n-> if model_args.config_overrides is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(380)main()\r\n-> logger.info(f\"Overriding config: {model_args.config_overrides}\")\r\n(Pdb) n\r\n06/26/2023 19:46:17 - INFO - __main__ - Overriding config: num_hidden_layers=6,max_position_embeddings=514\r\n> /cfs/home/u021274/higo/run_mlm.py(381)main()\r\n-> config.update_from_string(model_args.config_overrides)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(382)main()\r\n-> logger.info(f\"New config: {config}\")\r\n(Pdb) n\r\n06/26/2023 19:46:19 - INFO - __main__ - New config: RobertaConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"classifier_dropout\": null,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n 
\"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"roberta\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"pad_token_id\": 1,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n> /cfs/home/u021274/higo/run_mlm.py(385)main()\r\n-> \"cache_dir\": model_args.cache_dir,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(386)main()\r\n-> \"use_fast\": model_args.use_fast_tokenizer,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(387)main()\r\n-> \"revision\": model_args.model_revision,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(388)main()\r\n-> \"use_auth_token\": True if model_args.use_auth_token else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(384)main()\r\n-> tokenizer_kwargs = {\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(390)main()\r\n-> if model_args.tokenizer_name:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(391)main()\r\n-> tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)\r\n(Pdb) n\r\n[INFO|tokenization_auto.py:503] 2023-06-26 19:47:10,919 >> Could not locate the tokenizer configuration file, will try to use the model config instead.\r\n[INFO|configuration_utils.py:710] 2023-06-26 19:47:10,922 >> loading configuration file MyModel/config.json\r\n[INFO|configuration_utils.py:768] 2023-06-26 19:47:10,932 >> Model config RobertaConfig {\r\n \"_name_or_path\": \"MyModel\",\r\n \"architectures\": [\r\n \"RobertaForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"classifier_dropout\": null,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"roberta\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"pad_token_id\": 1,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 1,\r\n \"use_cache\": true,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file vocab.json\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file merges.txt\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file tokenizer.json\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file added_tokens.json\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file special_tokens_map.json\r\n[INFO|tokenization_utils_base.py:1842] 2023-06-26 19:47:10,946 >> loading file tokenizer_config.json\r\n[INFO|configuration_utils.py:710] 2023-06-26 19:47:10,947 >> loading configuration file MyModel/config.json\r\n[INFO|configuration_utils.py:768] 2023-06-26 19:47:10,950 >> Model config RobertaConfig {\r\n \"_name_or_path\": \"MyModel\",\r\n \"architectures\": [\r\n \"RobertaForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"classifier_dropout\": null,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n 
\"max_position_embeddings\": 514,\r\n \"model_type\": \"roberta\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"pad_token_id\": 1,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 1,\r\n \"use_cache\": true,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n[INFO|configuration_utils.py:710] 2023-06-26 19:47:11,024 >> loading configuration file MyModel/config.json\r\n[INFO|configuration_utils.py:768] 2023-06-26 19:47:11,027 >> Model config RobertaConfig {\r\n \"_name_or_path\": \"MyModel\",\r\n \"architectures\": [\r\n \"RobertaForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"classifier_dropout\": null,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"roberta\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"pad_token_id\": 1,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 1,\r\n \"use_cache\": true,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n> /cfs/home/u021274/higo/run_mlm.py(400)main()\r\n-> if model_args.model_name_or_path:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(411)main()\r\n-> logger.info(\"Training new model from scratch\")\r\n(Pdb) n\r\n06/26/2023 19:47:14 - INFO - __main__ - Training new model from scratch\r\n> /cfs/home/u021274/higo/run_mlm.py(412)main()\r\n-> model = AutoModelForMaskedLM.from_config(config)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(416)main()\r\n-> embedding_size = model.get_input_embeddings().weight.shape[0]\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(417)main()\r\n-> if len(tokenizer) > embedding_size:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(422)main()\r\n-> if training_args.do_train:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(423)main()\r\n-> column_names = list(raw_datasets[\"train\"].features)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(426)main()\r\n-> text_column_name = \"text\" if \"text\" in column_names else column_names[0]\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(428)main()\r\n-> if data_args.max_seq_length is None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(438)main()\r\n-> if data_args.max_seq_length > tokenizer.model_max_length:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(443)main()\r\n-> max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(445)main()\r\n-> if data_args.line_by_line:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(447)main()\r\n-> padding = \"max_length\" if data_args.pad_to_max_length else False\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(449)main()\r\n-> def tokenize_function(examples):\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(464)main()\r\n-> with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(465)main()\r\n-> if not data_args.streaming:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(466)main()\r\n-> tokenized_datasets = raw_datasets.map(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(467)main()\r\n-> tokenize_function,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(468)main()\r\n-> batched=True,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(469)main()\r\n-> 
num_proc=data_args.preprocessing_num_workers,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(470)main()\r\n-> remove_columns=[text_column_name],\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(471)main()\r\n-> load_from_cache_file=not data_args.overwrite_cache,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(472)main()\r\n-> desc=\"Running tokenizer on dataset line_by_line\",\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(466)main()\r\n-> tokenized_datasets = raw_datasets.map(\r\n(Pdb) n\r\n06/26/2023 19:47:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2/cache-c8ae7ecb92d28874.arrow\r\n06/26/2023 19:47:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /cfs/home/u021274/.cache/huggingface/datasets/text/default-2df3a67ae9ac7743/0.0.0/cb1e9bd71a82ad27976be3b12b407850fe2837d80c22c5e03a28949843a8ace2/cache-20fc928d1e2a7f3b.arrow\r\n> /cfs/home/u021274/higo/run_mlm.py(464)main()\r\n-> with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(542)main()\r\n-> if training_args.do_train:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(543)main()\r\n-> if \"train\" not in tokenized_datasets:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(545)main()\r\n-> train_dataset = tokenized_datasets[\"train\"]\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(546)main()\r\n-> if data_args.max_train_samples is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(550)main()\r\n-> if training_args.do_eval:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(580)main()\r\n-> pad_to_multiple_of_8 = data_args.line_by_line and training_args.fp16 and not data_args.pad_to_max_length\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(581)main()\r\n-> data_collator = DataCollatorForLanguageModeling(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(582)main()\r\n-> tokenizer=tokenizer,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(583)main()\r\n-> mlm_probability=data_args.mlm_probability,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(584)main()\r\n-> pad_to_multiple_of=8 if pad_to_multiple_of_8 else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(581)main()\r\n-> data_collator = DataCollatorForLanguageModeling(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(588)main()\r\n-> trainer = Trainer(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(589)main()\r\n-> model=model,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(590)main()\r\n-> args=training_args,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(591)main()\r\n-> train_dataset=train_dataset if training_args.do_train else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(592)main()\r\n-> eval_dataset=eval_dataset if training_args.do_eval else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(593)main()\r\n-> tokenizer=tokenizer,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(594)main()\r\n-> data_collator=data_collator,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(595)main()\r\n-> compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(597)main()\r\n-> if training_args.do_eval and not is_torch_tpu_available()\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(596)main()\r\n-> preprocess_logits_for_metrics=preprocess_logits_for_metrics\r\n(Pdb) n\r\n> 
/cfs/home/u021274/higo/run_mlm.py(598)main()\r\n-> else None,\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(588)main()\r\n-> trainer = Trainer(\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(602)main()\r\n-> if training_args.do_train:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(603)main()\r\n-> checkpoint = None\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(604)main()\r\n-> if training_args.resume_from_checkpoint is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(606)main()\r\n-> elif last_checkpoint is not None:\r\n(Pdb) n\r\n> /cfs/home/u021274/higo/run_mlm.py(608)main()\r\n-> train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n(Pdb) n\r\n[INFO|trainer.py:769] 2023-06-26 19:48:46,054 >> The following columns in the training set don't have a corresponding argument in `RobertaForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `RobertaForMaskedLM.forward`, you can safely ignore this message.\r\n/cfs/home/u021274/higo/myenv/lib64/python3.10/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\r\n warnings.warn(\r\n[INFO|trainer.py:1680] 2023-06-26 19:48:46,071 >> ***** Running training *****\r\n[INFO|trainer.py:1681] 2023-06-26 19:48:46,071 >> Num examples = 2,353,535\r\n[INFO|trainer.py:1682] 2023-06-26 19:48:46,071 >> Num Epochs = 40\r\n[INFO|trainer.py:1683] 2023-06-26 19:48:46,071 >> Instantaneous batch size per device = 192\r\n[INFO|trainer.py:1684] 2023-06-26 19:48:46,071 >> Total train batch size (w. parallel, distributed & accumulation) = 768\r\n[INFO|trainer.py:1685] 2023-06-26 19:48:46,071 >> Gradient Accumulation steps = 4\r\n[INFO|trainer.py:1686] 2023-06-26 19:48:46,071 >> Total optimization steps = 122,560\r\n[INFO|trainer.py:1687] 2023-06-26 19:48:46,074 >> Number of trainable parameters = 82,170,969\r\n[INFO|integrations.py:727] 2023-06-26 19:48:46,077 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: <USER>. Use `wandb login --relogin` to force relogin\r\nwandb: Tracking run with wandb version 0.15.4\r\nwandb: Run data is saved locally in /cfs/home/u021274/higo/wandb/run-20230626_194847-vr14588a\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run fragrant-universe-48\r\nwandb: ⭐️ View project at <URL>\r\nwandb: 🚀 View run at <URL>\r\n 0%| | 0/122560 [00:00<?, ?it/s][WARNING|logging.py:280] 2023-06-26 19:49:01,837 >> You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n```", "Then its doesn't seem link in any way to the tokenization, you would need to step *into* the train function to know more.\r\n", "I see. How can I do this? Any suggestions? I'm kinda of new on it, and I don't know how to start searching the real problem inside the `train` function.", "Ask around in discord https://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 or the forum https://discuss.huggingface.co/\r\n\r\nYou might be able to find better help for such things.\r\n\r\nI'm closing this issue, feel free to reopen one, when you have narrowed down what's going on." ]
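To actually narrow down where a run like the one above hangs after "***** Running training *****", one low-effort option is Python's built-in `faulthandler`, which can periodically dump the traceback of every thread without stopping the process. This is only a debugging sketch (the 300-second interval is an arbitrary choice), meant to be added near the top of `run_mlm.py` before `main()` runs; running `py-spy dump --pid <PID>` against the stuck process is an alternative that needs no code change at all.

```python
# Debugging sketch: periodically dump all thread tracebacks so a hang can be located.
# Add near the top of run_mlm.py; the 300 s interval is an arbitrary choice.
import faulthandler
import sys

# Every 300 seconds, print a full traceback of every thread to stderr, repeatedly,
# without exiting. If the process is stuck, consecutive dumps will keep showing the
# same frame (e.g. inside trainer.train, a DataLoader worker, or a collective op).
faulthandler.dump_traceback_later(timeout=300, repeat=True, file=sys.stderr, exit=False)
```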
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ``` +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 | | N/A 55C P0 80W / 300W| 59735MiB / 81920MiB | 12% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 1 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 | | N/A 56C P0 87W / 300W| 40933MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 2 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 | | N/A 34C P0 44W / 300W| 0MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ ``` ### Who can help? @ArthurZucker @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I intend to use [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) to train RoBERTa from scratch. To the training, I'm using data created my myself, and I entered the following command: ``` CUDA_VISIBLE_DEVICES=0,1,2 python run_mlm.py \ --model_type roberta \ --config_overrides="num_hidden_layers=6,max_position_embeddings=514" \ --tokenizer_name MyModel \ --train_file ./data/corpus_dedup.txt \ --max_seq_length 512 \ --line_by_line True \ --per_device_train_batch_size 64 \ --do_train \ --overwrite_output_dir True \ --gradient_accumulation_steps 4 \ --num_train_epochs 40 \ --fp16 True \ --output_dir MyModel \ --save_total_limit 1 ``` When I try to do the training using a 3-GPU configuration, I'm getting stucked for dozens of hours in the tokenization before the training, with the following message: `You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.` Aditionally, when I try to do the training with only 2 GPU (`CUDA_VISIBLE_DEVICES=0,1`, followed by the same parameters), my training runs normally... ### Expected behavior Model starts to be trained from scratch on a 3 GPU configuration.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24473/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24472
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24472/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24472/comments
https://api.github.com/repos/huggingface/transformers/issues/24472/events
https://github.com/huggingface/transformers/issues/24472
1,773,585,785
I_kwDOCUB6oc5ptsV5
24,472
Adding support for scaling rotary position embeddings
{ "login": "kaiokendev", "id": 129691954, "node_id": "U_kgDOB7rxMg", "avatar_url": "https://avatars.githubusercontent.com/u/129691954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaiokendev", "html_url": "https://github.com/kaiokendev", "followers_url": "https://api.github.com/users/kaiokendev/followers", "following_url": "https://api.github.com/users/kaiokendev/following{/other_user}", "gists_url": "https://api.github.com/users/kaiokendev/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaiokendev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaiokendev/subscriptions", "organizations_url": "https://api.github.com/users/kaiokendev/orgs", "repos_url": "https://api.github.com/users/kaiokendev/repos", "events_url": "https://api.github.com/users/kaiokendev/events{/privacy}", "received_events_url": "https://api.github.com/users/kaiokendev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "any updates on this?", "@lucasjinreal Yes, it is now an official part of the library\r\nhttps://github.com/huggingface/transformers/pull/24653#issuecomment-1635324005\r\n\r\nSo I will close this issue", "Here are the docs btw\r\nhttps://huggingface.co/docs/transformers/main/en/model_doc/llama#transformers.LlamaConfig.rope_scaling", "@kaiokendev So it actually support just same thing like longchat?\r\nBTW, how to adopt it to Baichuan model properly?", "Yes, for LongChat specifically, you would use \"linear\" method with factor of 8.\r\n\r\nFor Baichuan model you would not use this, as Baichuan uses ALiBi, not RoPE", "@kaiokendev I was not aware of this issue, my bad 🙈 \r\n\r\nSuggestion: tag someone when opening an issue; sometimes things fly under our radar" ]
1,687
1,689
1,689
NONE
null
### Feature request Hello, I would like if possible for Rotary Position Embedding scaling factors to be usable in the library. Currently this can only be done by monkey-patching the library. Namely, it requires modifying the: - `max_position_embeddings`: This can already be done via the model's config class or `config.json` - `position_scale`: This variable doesn't exist currently, and there is no way to incorporate this effect at the moment without monkey-patching the existing `LlamaRotaryEmbeddings` class. (I'd also like to not step over toes of a possible future XPos implementation which also uses it's own scale for different purposes) ### Motivation Recently I demonstrated it is possible to drastically reduce training compute when fine-tuning pre-trained RoPE models with an adjusted scaling factor for the purpose of extending the context length of the model. This has the effect of interpolating the position embeddings making it easier to fine-tune the model using in-distribution positions as opposed to out-of-distribution positions typically used via pure extrapolation. There is an extended write-up with motivations here https://kaiokendev.github.io/context as well as the code I used (for the 8K example) can be found here https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test/blob/main/llama_rope_scaled_monkey_patch.py Some existing discussions and benchmarks can be found here: https://github.com/ggerganov/llama.cpp/discussions/1965 Several models currently use this scaling feature, but they will not produce coherent output unless the scale is applied correctly during inference (scale is a hyperparameter): - https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test - https://huggingface.co/Peeepy/Airoboros-13b-SuperHOT-8k - https://huggingface.co/emozilla/open_llama_7b-scaled EDIT: Meta has recently written a paper about it: https://arxiv.org/abs/2306.15595 ### Your contribution I would love to help in any way possible. While the basic implementation would be easy, I'm not sure what the best way could be for adding this modification (such as if users want to used a fixed scale versus having it dynamically applied based on the input sequence length)
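To make the request concrete, here is a self-contained sketch of rotary embeddings with a linear position-interpolation factor. It deliberately avoids touching transformers internals (class names and signatures differ between versions), so all names below are illustrative; the `scale` value is the hyperparameter discussed above (for example 2048/8192 when stretching a 2048-token model to 8192 tokens), and a fine-tune at the scaled positions is still expected for good quality.

```python
# Standalone sketch of rotary position embeddings with linear position interpolation.
# Not the transformers implementation; names and the `scale` hyperparameter are illustrative.
import torch


def build_rope_cache(seq_len, dim, base=10000.0, scale=1.0, device="cpu", dtype=torch.float32):
    """Precompute cos/sin tables. A scale < 1 compresses positions so a sequence longer
    than the pretraining context maps back into the position range the model was trained on."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, device=device, dtype=dtype) / dim))
    # The only change needed for interpolation: multiply the positions by the scale factor.
    t = torch.arange(seq_len, device=device, dtype=dtype) * scale
    freqs = torch.outer(t, inv_freq)         # (seq_len, dim / 2)
    emb = torch.cat((freqs, freqs), dim=-1)  # (seq_len, dim)
    return emb.cos(), emb.sin()


def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rope(q, k, cos, sin):
    # q, k: (batch, heads, seq_len, head_dim); cos/sin: (seq_len, head_dim)
    return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)


# Example: stretch a model pretrained on 2048 positions to 8192 with scale = 2048 / 8192.
cos, sin = build_rope_cache(seq_len=8192, dim=128, scale=2048 / 8192)
```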
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24472/reactions", "total_count": 8, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24472/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24471
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24471/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24471/comments
https://api.github.com/repos/huggingface/transformers/issues/24471/events
https://github.com/huggingface/transformers/issues/24471
1,773,546,213
I_kwDOCUB6oc5ptirl
24,471
"MPTForCausalLM not supported" error when using pipeline, but not when using from_pretrained
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, I suggest you to open this issue on the repo, this is because the `auto_map ` attribute in the `config.json` file is not properly set. We are probably going to add this model to transformers soon too! " ]
1,687
1,687
1,687
CONTRIBUTOR
null
### System Info Python 3.8.10 (default, Nov 14 2022, 12:59:47) transformers.__version__ is '4.30.2' lambda labs 1xA100 invoking `generator = transformers.pipeline(task="text-generation", model="mosaicml/mpt-7b", trust_remote_code=True)` ends with this exception: ``` You are using config.init_device='cpu', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization. Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.73s/it] Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers pip install xformers. The model 'MPTForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM']. ``` However, loading using: ``` model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` works fine. How can I load this model in a `pipeline`? ### Who can help? @Narsil @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. `generator = transformers.pipeline(task="text-generation", model="mosaicml/mpt-7b", trust_remote_code=True)` ### Expected behavior The pipeline would load OK, just as .from_pretrained works
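Until `MPTForCausalLM` is mapped for the text-generation task, a workaround that relies only on documented behaviour is to load the model and tokenizer yourself (which already works, as shown above) and pass the objects to `pipeline` directly, since `pipeline` accepts preloaded model and tokenizer instances. The sketch below assumes the checkpoint ships its tokenizer files and that half precision is acceptable; note the "not supported" message is a warning from the task check, so generation may still run.

```python
# Sketch of a workaround: build the pipeline from preloaded objects instead of a model name.
import torch
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    torch_dtype=torch.float16,  # assumption: fp16 to keep the 7B checkpoint comfortably on one GPU
    trust_remote_code=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained("mosaicml/mpt-7b")

generator = transformers.pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,
)
print(generator("MosaicML is", max_new_tokens=20)[0]["generated_text"])
```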
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24471/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24470
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24470/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24470/comments
https://api.github.com/repos/huggingface/transformers/issues/24470/events
https://github.com/huggingface/transformers/issues/24470
1,773,502,078
I_kwDOCUB6oc5ptX5-
24,470
Error when trying to load a pretrained model
{ "login": "BrahianVT", "id": 12876560, "node_id": "MDQ6VXNlcjEyODc2NTYw", "avatar_url": "https://avatars.githubusercontent.com/u/12876560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BrahianVT", "html_url": "https://github.com/BrahianVT", "followers_url": "https://api.github.com/users/BrahianVT/followers", "following_url": "https://api.github.com/users/BrahianVT/following{/other_user}", "gists_url": "https://api.github.com/users/BrahianVT/gists{/gist_id}", "starred_url": "https://api.github.com/users/BrahianVT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BrahianVT/subscriptions", "organizations_url": "https://api.github.com/users/BrahianVT/orgs", "repos_url": "https://api.github.com/users/BrahianVT/repos", "events_url": "https://api.github.com/users/BrahianVT/events{/privacy}", "received_events_url": "https://api.github.com/users/BrahianVT/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please follow the issue template and give us the result of `transformers-cli env`.", "thanks installing transformers fixed the error, I'm like a beginner in this topic , sorry \r\nRegards" ]
1,687
1,687
1,687
NONE
null
### System Info I've been trying to load a pretrained model: When I tried to execute this : from transformers import T5ForConditionalGeneration,T5Tokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") The result: is : ` Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import T5ForConditionalGeneration,T5Tokenizer >>> import torch >>> >>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") >>> models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2259, in from_pretrained config, model_kwargs = cls.config_class.from_pretrained( File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 547, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 574, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 629, in _get_config_dict resolved_config_file = cached_file( File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file resolved_file = hf_hub_download( File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\file_download.py", line 1252, in hf_hub_download with FileLock(lock_path): File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 255, in __enter__ self.acquire() File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 213, in acquire self._acquire() File "C:\Users\BrahianVT\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_windows.py", line 27, in _acquire fd = os.open(self.lock_file, flags, self._context.mode) OSError: [Errno 22] Invalid argument: 'C:\\Users\\BrahianVT/.cache\\huggingface\\hub\\models--Michau--t5-base-en-generate-headline\\blobs\\W/"957fcaeed54459456a54d98d552a7773e717333f.lock'` ![image](https://github.com/huggingface/transformers/assets/12876560/46c229cf-aea1-4855-a36a-c9f21533e250) I'm like a beginner in this topic anyone could help? Regards ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction from transformers import T5ForConditionalGeneration,T5Tokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") models = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") ### Expected behavior I expect the model object
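The `W/"..."` fragment in the lock-file path above looks like an HTTP ETag that leaked into a Windows file name, which is why `os.open` rejects it. One recovery path is to upgrade both packages (`pip install -U transformers huggingface_hub`) and remove the partially written cache entry before retrying; the cache location in the sketch below is taken from the traceback and assumes `HF_HOME` has not been changed.

```python
# Sketch: delete the broken cache entry for this model, then retry after upgrading
# transformers and huggingface_hub. The cache path is the default one from the traceback.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "hub" / "models--Michau--t5-base-en-generate-headline"
if cache_dir.exists():
    shutil.rmtree(cache_dir)

from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline")
```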
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24470/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24469
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24469/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24469/comments
https://api.github.com/repos/huggingface/transformers/issues/24469/events
https://github.com/huggingface/transformers/issues/24469
1,773,499,348
I_kwDOCUB6oc5ptXPU
24,469
ValueError raised during accelerate decoding
{ "login": "petroskarypis", "id": 52904479, "node_id": "MDQ6VXNlcjUyOTA0NDc5", "avatar_url": "https://avatars.githubusercontent.com/u/52904479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/petroskarypis", "html_url": "https://github.com/petroskarypis", "followers_url": "https://api.github.com/users/petroskarypis/followers", "following_url": "https://api.github.com/users/petroskarypis/following{/other_user}", "gists_url": "https://api.github.com/users/petroskarypis/gists{/gist_id}", "starred_url": "https://api.github.com/users/petroskarypis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/petroskarypis/subscriptions", "organizations_url": "https://api.github.com/users/petroskarypis/orgs", "repos_url": "https://api.github.com/users/petroskarypis/repos", "events_url": "https://api.github.com/users/petroskarypis/events{/privacy}", "received_events_url": "https://api.github.com/users/petroskarypis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,687
1,687
1,687
NONE
null
### System Info transformers-4.28.1 python-3.11.3 ### Who can help? @gant Example code found in this blogpost raises errors: [https://huggingface.co/blog/assisted-generation](https://huggingface.co/blog/assisted-generation). ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForCausalLM, AutoTokenizer import torch prompt = "Alice and Bob" checkpoint = "EleutherAI/pythia-1.4b-deduped" assistant_checkpoint = "EleutherAI/pythia-160m-deduped" device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(checkpoint) inputs = tokenizer(prompt, return_tensors="pt").to(device) model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device) outputs = model.generate(**inputs, assistant_model=assistant_model) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ValueError Traceback (most recent call last) Cell In[9], line 14 1 # from transformers import AutoModelForCausalLM, AutoTokenizer 2 # import torch 3 (...) 12 # model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) 13 # assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device) ---> 14 outputs = model.generate(**inputs, assistant_model=assistant_model) File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/transformers/generation/utils.py:1231, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, streamer, **kwargs) 1229 model_kwargs = generation_config.update(**kwargs) # All unused kwargs must be model kwargs 1230 generation_config.validate() -> 1231 self._validate_model_kwargs(model_kwargs.copy()) 1233 # 2. Set generation parameters if not already defined 1234 logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() File ~/anaconda3/envs/fcm-ape/lib/python3.11/site-packages/transformers/generation/utils.py:1109, in GenerationMixin._validate_model_kwargs(self, model_kwargs) ... 1110 f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the" 1111 " generate arguments will also show up in this list)" 1112 ) ValueError: The following `model_kwargs` are not used by the model: ['assistant_model'] (note: typos in the generate arguments will also show up in this list) ### Expected behavior ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
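The traceback above comes from transformers 4.28.1, and `assistant_model` was only added to `generate` in a later release (4.29.0, as far as I can tell from the release notes, so treat that threshold as an assumption). A small guard, dropped into the reproduction script right before the `generate` call, turns the confusing kwargs error into an explicit upgrade hint.

```python
# Sketch: fail fast with a clear message when the installed transformers predates
# assisted generation. The 4.29.0 minimum is an assumption, not a documented constant.
from packaging import version
import transformers

MIN_ASSISTED_GENERATION = version.parse("4.29.0")

if version.parse(transformers.__version__) < MIN_ASSISTED_GENERATION:
    raise RuntimeError(
        f"transformers {transformers.__version__} does not know `assistant_model`; "
        f"upgrade with `pip install -U transformers` (>= {MIN_ASSISTED_GENERATION})."
    )

# `model`, `inputs` and `assistant_model` come from the reproduction script above.
outputs = model.generate(**inputs, assistant_model=assistant_model)
```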
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24469/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24468
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24468/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24468/comments
https://api.github.com/repos/huggingface/transformers/issues/24468/events
https://github.com/huggingface/transformers/issues/24468
1,773,454,335
I_kwDOCUB6oc5ptMP_
24,468
DeepSpeed ZeRO stage3+huggyllama/llama: RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
{ "login": "memray", "id": 4197249, "node_id": "MDQ6VXNlcjQxOTcyNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/memray", "html_url": "https://github.com/memray", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "organizations_url": "https://api.github.com/users/memray/orgs", "repos_url": "https://api.github.com/users/memray/repos", "events_url": "https://api.github.com/users/memray/events{/privacy}", "received_events_url": "https://api.github.com/users/memray/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @memray, this issue seems unrelated to DeepSpeed integration. Could you provide a minimal reproducible example?", "No, I also tested `mosaicml/mpt-7b` and I found it worked. I believe it's more related to the implementation of LLaMA because `openlm-research/open_llama_7b` also fails. Can you let me know who can help with this issue? ", "@ArthurZucker and @younesbelkada can you check if this is related to the model implementation (say this [PR](https://github.com/huggingface/transformers/pull/22234))?", "Hey! thanks for reporting, yes will check this! Seems like this might be it. Also if so fix should be straight forward", "I am also facing the same issue with [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B)", "Okay, I forgot to ask, could you provide a full reproducing script? The ROPE was recently updated too, not sure if this was adressed yet.", "Same problem here.", "We cannot help you if we don't have a minimal reproducing script, just commenting same problem will not help 😅 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,693
1,693
NONE
null
### System Info - deepspeed: 0.9.5+1491e14e - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.10.133+-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.2 (gpu) - Jax version: 0.3.22 - JaxLib version: 0.3.22 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Is this issue highly related to this [PR](https://github.com/huggingface/transformers/pull/22234 )? Used the native deepspeed running example (no accelerate). ``` model_engine, optimizer, _, scheduler = deepspeed.initialize(config=args.deepspeed_config, model=model, model_parameters=model_parameters) ``` Tried stage-3 with 7B/13B/30B ckpt and they all errored out, but they worked well with stage2. Error message: ``` hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl outputs = run_function(*args) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 566, in custom_forward result = forward_call(*args, **kwargs) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 205, in forward return module(*inputs, output_attentions, None) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 137, in apply_rotary_pos_emb cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) result = forward_call(*args, **kwargs) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 293, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl result = forward_call(*args, **kwargs) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 205, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/export/share/ruimeng/env/anaconda/envs/codegen/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 137, in apply_rotary_pos_emb cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) ``` ### Expected behavior normal training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24468/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24468/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24467
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24467/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24467/comments
https://api.github.com/repos/huggingface/transformers/issues/24467/events
https://github.com/huggingface/transformers/issues/24467
1,773,425,934
I_kwDOCUB6oc5ptFUO
24,467
Pipeline cannot be initialized with the "state_dict" parameter
{ "login": "tg-bomze", "id": 48222107, "node_id": "MDQ6VXNlcjQ4MjIyMTA3", "avatar_url": "https://avatars.githubusercontent.com/u/48222107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tg-bomze", "html_url": "https://github.com/tg-bomze", "followers_url": "https://api.github.com/users/tg-bomze/followers", "following_url": "https://api.github.com/users/tg-bomze/following{/other_user}", "gists_url": "https://api.github.com/users/tg-bomze/gists{/gist_id}", "starred_url": "https://api.github.com/users/tg-bomze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tg-bomze/subscriptions", "organizations_url": "https://api.github.com/users/tg-bomze/orgs", "repos_url": "https://api.github.com/users/tg-bomze/repos", "events_url": "https://api.github.com/users/tg-bomze/events{/privacy}", "received_events_url": "https://api.github.com/users/tg-bomze/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You cannot use the `state_dict` argument with `device_map=\"auto\"`, we only support the base case\r\n```py\r\nbase_model = LlamaForCausalLM.from_pretrained(None, config=config, state_dict=state_dict)\r\n```", "Thanks @sgugger. Your advice really helped. I don't have enough memory to load the model even on fp16, so I try to initialize the model with `load_in_4bit=True` and get the same error:\r\n\r\n`AttributeError: 'NoneType' object has no attribute 'endswith'`\r\nfrom `transformers/modeling_utils.py:448` in `load_state_dict`\r\n\r\nThere is a way to fix this, isn't there?\r\n\r\nP.S. And another question: I have not found an option to load tokenizer weights from RAM. In, for example, LlamaTokenizer can I feed the config and state_dict?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Thanks @sgugger. Your advice really helped. I don't have enough memory to load the model even on fp16, so I try to initialize the model with `load_in_4bit=True` and get the same error:\r\n> \r\n> `AttributeError: 'NoneType' object has no attribute 'endswith'` from `transformers/modeling_utils.py:448` in `load_state_dict`\r\n> \r\n> There is a way to fix this, isn't there?\r\n> \r\n> P.S. And another question: I have not found an option to load tokenizer weights from RAM. In, for example, LlamaTokenizer can I feed the config and state_dict?\r\n\r\nHave you solved the problem yet? `load_in_4bit=True` and get the same error" ]
1,687
1,698
1,690
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.10 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante, @Narsil, @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python state_dict = torch.load(path_to_bin, map_location=torch.device('cpu')) base_model = LlamaForCausalLM.from_pretrained( None, config=config, state_dict=state_dict, torch_dtype=torch.float16, device_map="auto") ``` ### Expected behavior When I try to initialize the pipelining model with `state_dict`, I get an error: `AttributeError: 'NoneType' object has no attribute 'endswith'` from `transformers/modeling_utils.py:448` in `load_state_dict` I thought that if you specified `pretrained_model_name_or_path=None`, the path to the model would be ignored, and all necessary parameters and weights themselves would be taken from `config` and `state_dict`. Isn't that how it's done? How do I do initialization using `state_dict`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24467/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24466
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24466/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24466/comments
https://api.github.com/repos/huggingface/transformers/issues/24466/events
https://github.com/huggingface/transformers/issues/24466
1,773,402,629
I_kwDOCUB6oc5ps_oF
24,466
Confusing behavior of push_to_hub + from_pretrained + safetensors
{ "login": "morrisalp", "id": 8263996, "node_id": "MDQ6VXNlcjgyNjM5OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morrisalp", "html_url": "https://github.com/morrisalp", "followers_url": "https://api.github.com/users/morrisalp/followers", "following_url": "https://api.github.com/users/morrisalp/following{/other_user}", "gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}", "starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions", "organizations_url": "https://api.github.com/users/morrisalp/orgs", "repos_url": "https://api.github.com/users/morrisalp/repos", "events_url": "https://api.github.com/users/morrisalp/events{/privacy}", "received_events_url": "https://api.github.com/users/morrisalp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "FYI, the ad-hoc solution I used now is to delete `model.safetensors` from the model hub repo and then use the convert space [here](https://huggingface.co/spaces/safetensors/convert).\r\nBut it's not obvious to do this and not convenient to have to do this every time I update the model.\r\nIMO the behavior of `push_to_hub` and `from_pretrained` should be aligned.", "Would `push_to_hub(..., safe_serialization=True)` work ? Means everything would be in safetensors directly ? (Then the PT files would not be up-to-date, but you can discard them at this point).\r\n\r\nWe're only keeping PT files by default for old `transformers` users which might not have `safetensors` dependency yet (and their version might not be `safetensors` aware).\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info - `transformers` version: 4.30.1 - Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @Narsil @sgugger @stevhliu @MKhalusova ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction While actively developing my model [malper/taatiknet](https://huggingface.co/malper/taatiknet) I've encountered some confusing behavior, **bolded below**: * Using pipeline API (`pipe = pipeline(...)` etc.) * Updating model on hub with `pipe.model.push_to_hub('malper/taatiknet', create_pr=1)` and merging PR * Must also push tokenizer with `pipe.tokenizer.push_to_hub` (**undocumented** in current docs) * On first update, I get an automated [PR from SFconvertbot](https://huggingface.co/malper/taatiknet/discussions/3) and merge it without thinking * On future updates with `push_to_hub`, only `pytorch_model.bin` updates **but model.safetensors does not** * Loading model elsewhere with `.load_pretrained('malper/taatiknet')` **loads old safetensors model and does not get new weights** It took quite a while to figure out why my model was not updating when loaded and **not documented how to update safetensors version on HF model hub**. ### Expected behavior At least one of the following would be desirable: * documentation on how to update the safetensors (ST) model weights on the hub * push_to_hub automatically pushing ST weights or printing a warning that they are not updated * STconvertbot checking for updated base weights & providing a PR with converted weights * documentation that from_pretrained pulls ST weights when available * from_pretrained providing a warning when base weights & ST weights don't match (i.e. from different git commits)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24466/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24465
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24465/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24465/comments
https://api.github.com/repos/huggingface/transformers/issues/24465/events
https://github.com/huggingface/transformers/issues/24465
1,773,369,508
I_kwDOCUB6oc5ps3ik
24,465
squad_convert_examples_to_features does not work with tensorflow
{ "login": "Tialo", "id": 65392801, "node_id": "MDQ6VXNlcjY1MzkyODAx", "avatar_url": "https://avatars.githubusercontent.com/u/65392801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tialo", "html_url": "https://github.com/Tialo", "followers_url": "https://api.github.com/users/Tialo/followers", "following_url": "https://api.github.com/users/Tialo/following{/other_user}", "gists_url": "https://api.github.com/users/Tialo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tialo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tialo/subscriptions", "organizations_url": "https://api.github.com/users/Tialo/orgs", "repos_url": "https://api.github.com/users/Tialo/repos", "events_url": "https://api.github.com/users/Tialo/events{/privacy}", "received_events_url": "https://api.github.com/users/Tialo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is not a maintained part of the library anyore, we use `datasets` for the preprocessing, you can check the QA example [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py)." ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import tensorflow as tf from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering from transformers import squad_convert_examples_to_features from transformers.data.processors.squad import SquadV1Processor import tensorflow_datasets as tfds if __name__ == "__main__": tfds_examples = tfds.load("squad") evaluate = False examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate) tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') a, b = squad_convert_examples_to_features( examples=examples[:3], tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=not evaluate, return_dataset="tf" ) ``` This exception occurs when I use return_dataset="tf", otherwise it's fine. Error: ``` 2023-06-25 19:09:04.152047: W tensorflow/core/framework/op_kernel.cc:1818] INVALID_ARGUMENT: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ( {'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {...}), but the yielded element was ( {'input_ids': [101, 2054, ..., 0], 'attention_mask': [1, 1, ..., 0], 'token_type_ids': [0, 0, ..., 0], 'feature_index': 0, 'qas_id': '57306bf68ab72b1400f9c4dc'}, {...}) ``` As you can see, for some reason there is an unwanted key in generator. https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/data/processors/squad.py#L437 Full error: 2023-06-25 19:09:04.152047: W tensorflow/core/framework/op_kernel.cc:1818] INVALID_ARGUMENT: TypeError: `generator` yielded an element that did not match the expected structure. 
The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [101, 2054, 2003, 2028, 2224, 2008, 2052, 5478, 2019, 13438, 2000, 4374, 7755, 1999, 2536, 3971, 2012, 2320, 1029, 102, 1996, 4489, 1999, 1996, 2682, 5876, 2005, 1996, 2553, 1997, 1162, 1027, 1014, 2003, 1996, 3114, 2008, 2087, 5062, 1006, 21670, 3832, 2005, 1996, 2270, 1007, 3594, 7471, 11508, 3989, 1012, 2005, 19278, 2379, 1996, 2598, 1010, 23190, 11508, 3550, 21670, 9015, 16990, 1012, 2005, 2190, 7684, 1996, 4909, 26315, 2005, 2122, 7755, 2024, 10655, 20018, 11508, 3550, 1012, 1999, 2070, 5097, 2073, 1996, 4909, 13438, 2442, 2147, 1999, 2151, 2597, 1010, 2004, 1999, 4684, 11640, 1010, 1996, 2918, 2276, 26315, 2224, 3816, 11508, 3989, 1010, 2107, 2004, 7399, 11508, 3989, 2012, 2019, 6466, 1006, 2007, 2119, 7471, 1998, 9876, 6177, 1007, 2030, 8206, 11508, 3989, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'feature_index': 0, 'qas_id': '57306bf68ab72b1400f9c4dc'}, {'start_positions': 94, 'end_positions': 95, 'cls_index': 0, 'p_mask': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'is_impossible': False}). Traceback (most recent call last): File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 204, in generator_py_func flattened_values = nest.flatten_up_to(output_types, values) File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 377, in flatten_up_to assert_shallow_structure(shallow_tree, input_tree) File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 304, in assert_shallow_structure assert_shallow_structure(shallow_branch, input_branch, File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/util/nest.py", line 289, in assert_shallow_structure raise ValueError( ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/ops/script_ops.py", line 267, in __call__ ret = func(*args) File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper return func(*args, **kwargs) File "/Users/user/Desktop/transformers_test/venv/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 206, in generator_py_func raise TypeError( ### Expected behavior There would be no error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24465/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24464
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24464/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24464/comments
https://api.github.com/repos/huggingface/transformers/issues/24464/events
https://github.com/huggingface/transformers/issues/24464
1,773,350,476
I_kwDOCUB6oc5psy5M
24,464
force_download for pipelines
{ "login": "morrisalp", "id": 8263996, "node_id": "MDQ6VXNlcjgyNjM5OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morrisalp", "html_url": "https://github.com/morrisalp", "followers_url": "https://api.github.com/users/morrisalp/followers", "following_url": "https://api.github.com/users/morrisalp/following{/other_user}", "gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}", "starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions", "organizations_url": "https://api.github.com/users/morrisalp/orgs", "repos_url": "https://api.github.com/users/morrisalp/repos", "events_url": "https://api.github.com/users/morrisalp/events{/privacy}", "received_events_url": "https://api.github.com/users/morrisalp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil for information. You can already pass this in the `model_kwargs` @morrisalp , I don't think it's worth surfacing it more than this.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### Feature request Add `force_download=True/False` argument to the pipeline API to allow for re-downloading a model and ignoring local cache. ### Motivation [PreTrainedModel](https://huggingface.co/docs/transformers/main_classes/model#transformers.PreTrainedModel) has the very useful `force_download` argument for ignoring local cache and downloading a model. However, this does not work with the pipeline API: ``` from transformers import pipeline pipe = pipeline("text2text-generation", model='t5-small', force_download=True) pipe('test') ``` yields error: `ValueError: The following `model_kwargs` are not used by the model: ['force_download'] (note: typos in the generate arguments will also show up in this list)` This is an issue since I am working with a pipeline using a model that is being updated and would like to easily re-download it as needed. ### Your contribution could add this to docs if implemented
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24464/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24463
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24463/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24463/comments
https://api.github.com/repos/huggingface/transformers/issues/24463/events
https://github.com/huggingface/transformers/pull/24463
1,773,197,292
PR_kwDOCUB6oc5T1vGD
24,463
when resume from peft checkpoint, the model should be trainable
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@younesbelkada please help review", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
CONTRIBUTOR
null
`model.training` is false after PEFT's `model.load_adapter`; see https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py#L402, where the default value of `is_trainable` is False. - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24463", "html_url": "https://github.com/huggingface/transformers/pull/24463", "diff_url": "https://github.com/huggingface/transformers/pull/24463.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24463.patch", "merged_at": 1687781248000 }
https://api.github.com/repos/huggingface/transformers/issues/24462
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24462/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24462/comments
https://api.github.com/repos/huggingface/transformers/issues/24462/events
https://github.com/huggingface/transformers/issues/24462
1,773,186,997
I_kwDOCUB6oc5psK-1
24,462
'NoneType' object has no attribute 'flush'
{ "login": "17Reset", "id": 122418720, "node_id": "U_kgDOB0v2IA", "avatar_url": "https://avatars.githubusercontent.com/u/122418720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/17Reset", "html_url": "https://github.com/17Reset", "followers_url": "https://api.github.com/users/17Reset/followers", "following_url": "https://api.github.com/users/17Reset/following{/other_user}", "gists_url": "https://api.github.com/users/17Reset/gists{/gist_id}", "starred_url": "https://api.github.com/users/17Reset/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/17Reset/subscriptions", "organizations_url": "https://api.github.com/users/17Reset/orgs", "repos_url": "https://api.github.com/users/17Reset/repos", "events_url": "https://api.github.com/users/17Reset/events{/privacy}", "received_events_url": "https://api.github.com/users/17Reset/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @LysandreJik ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
When I use PyInstaller to package a transformers program and choose windowless mode, I get the following error. I don't want console mode; I want to build the program in windowless mode. ` File "transformers\utils\import_utils.py", line 37, in <module> logger = logging.get_logger(__name__) # pylint: disable=invalid-name ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "transformers\utils\logging.py", line 124, in get_logger _configure_library_root_logger() File "transformers\utils\logging.py", line 88, in _configure_library_root_logger _default_handler.flush = sys.stderr.flush ^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'flush'`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24462/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24461
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24461/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24461/comments
https://api.github.com/repos/huggingface/transformers/issues/24461/events
https://github.com/huggingface/transformers/issues/24461
1,773,136,095
I_kwDOCUB6oc5pr-jf
24,461
transformers.LlamaTokenizer.from_pretrained does not work after transformers are packaged with Pyinstaller
{ "login": "17Reset", "id": 122418720, "node_id": "U_kgDOB0v2IA", "avatar_url": "https://avatars.githubusercontent.com/u/122418720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/17Reset", "html_url": "https://github.com/17Reset", "followers_url": "https://api.github.com/users/17Reset/followers", "following_url": "https://api.github.com/users/17Reset/following{/other_user}", "gists_url": "https://api.github.com/users/17Reset/gists{/gist_id}", "starred_url": "https://api.github.com/users/17Reset/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/17Reset/subscriptions", "organizations_url": "https://api.github.com/users/17Reset/orgs", "repos_url": "https://api.github.com/users/17Reset/repos", "events_url": "https://api.github.com/users/17Reset/events{/privacy}", "received_events_url": "https://api.github.com/users/17Reset/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The Pypi package does not have access to your local drive where `LLM_CHECKPOINT` points to, that's why you get different behavior.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
When I use the following code in my python project, the model will load and work properly, but when I package my project using Pyinstaller, I found by debugging that this code to load the model, does not work properly: tokenizer = transformers.LlamaTokenizer.from_ pretrained(LLM_CHECKPOINT). This complete function is shown below, where LLM_CHECKPOINT = 'D:/LLM/alpaca-llama-7b-fp16' and device = torch.device('cpu'): import torch import transformers def llm_initial(LLM_CHECKPOINT, device): tokenizer = transformers.LlamaTokenizer.from_pretrained(LLM_CHECKPOINT) model = transformers.LlamaForCausalLM.from_pretrained(LLM_CHECKPOINT).to(device) model.eval() generation_config = transformers.GenerationConfig( # max_new_tokens: This is the maximum number of tokens to generate. # The generated sequence will not exceed this length. max_new_tokens=256, # temperature: This is a parameter for controlling the randomness of predictions # by scaling the logits before applying softmax. Higher values (greater than 1) # increase randomness, while lower values make the model more deterministic. temperature=1, # top_k: This parameter controls the number of highest probability predictions # to consider for the next token. It's used to introduce some randomness and # creativity into the model's outputs. top_k=40, # top_p: This parameter is also known as nucleus sampling and is used to ensure # that the cumulative probability of the considered tokens is at least top_p. # This parameter also introduces randomness into the model's outputs. top_p=0.9, # repetition_penalty: This parameter is used to control for repetitive behavior # in the model's outputs. If the value is greater than 1.0, the model is # penalized for generating the same token repeatedly. repetition_penalty=1.15 ) return [device, tokenizer, model, generation_config] def llm_response(device, tokenizer, model, generation_config, prompt): prompt = generate_prompt(prompt) input_ids = tokenizer.encode(prompt, return_tensors='pt').to(device) with torch.enable_grad(): output_ids = model.generate(input_ids=input_ids, generation_config=generation_config) LLM_RESPONSE = tokenizer.decode(output_ids[0], skip_special_tokens=True) return LLM_RESPONSE.replace(prompt, '').strip()
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24461/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24460
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24460/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24460/comments
https://api.github.com/repos/huggingface/transformers/issues/24460/events
https://github.com/huggingface/transformers/pull/24460
1,773,113,734
PR_kwDOCUB6oc5T1dWj
24,460
use accelerate autocast in jit eval path, since mix precision logic is…
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@yao-matrix", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
CONTRIBUTOR
null
… in accelerator currently Fixes # (issue) Mixed precision logic is now all handled by the accelerator, so autocast_smart_context_manager no longer takes effect; use the Accelerate autocast instead. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24460", "html_url": "https://github.com/huggingface/transformers/pull/24460", "diff_url": "https://github.com/huggingface/transformers/pull/24460.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24460.patch", "merged_at": 1687869202000 }
https://api.github.com/repos/huggingface/transformers/issues/24459
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24459/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24459/comments
https://api.github.com/repos/huggingface/transformers/issues/24459/events
https://github.com/huggingface/transformers/pull/24459
1,772,947,812
PR_kwDOCUB6oc5T06vD
24,459
Add FlaxMinNewTokensLengthLogitsProcessor
{ "login": "yeandy", "id": 14128880, "node_id": "MDQ6VXNlcjE0MTI4ODgw", "avatar_url": "https://avatars.githubusercontent.com/u/14128880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeandy", "html_url": "https://github.com/yeandy", "followers_url": "https://api.github.com/users/yeandy/followers", "following_url": "https://api.github.com/users/yeandy/following{/other_user}", "gists_url": "https://api.github.com/users/yeandy/gists{/gist_id}", "starred_url": "https://api.github.com/users/yeandy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yeandy/subscriptions", "organizations_url": "https://api.github.com/users/yeandy/orgs", "repos_url": "https://api.github.com/users/yeandy/repos", "events_url": "https://api.github.com/users/yeandy/events{/privacy}", "received_events_url": "https://api.github.com/users/yeandy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
# What does this PR do? PyTorch's options of logit processors has both `MinLengthLogitsProcessor` and `MinNewTokensLengthLogitsProcessor`. See [code](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/utils.py#L886C1-L901C14). However, Flax only has [FlaxMinLengthLogitsProcessor](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/flax_utils.py#L500C1-L507C14), and does not account for the case if `min_token_length` is passed into the GenerationConfig (which is what PyTorch's `MinNewTokensLengthLogitsProcessor` does. As a result, when passing the same config that contains `min_new_tokens` to both the PyTorch and Flax model, I see different generated outputs. Changes: - Add `FlaxMinTokensLengthLogitsProcessor` class, which is the Flax version of PyTorch's `MinNewTokensLengthLogitsProcessor`, and add an if-statement to select `FlaxMinTokensLengthLogitsProcessor` when `min_new_tokens` is passed. - Change the conditional statement for `FlaxMinLengthLogitsProcessor`. I believe it's a bug, where it checks for `generation_config.min_length > -1`. However, the [default value is 0](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/configuration_utils.py#L228), so this expression will always be true, and won't allow us to go the if statement for `FlaxMinTokensLengthLogitsProcessor`. Checking that `generation_config.min_length > 0` is also how the PyTorch logic [does it](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/utils.py#L889). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24459/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24459", "html_url": "https://github.com/huggingface/transformers/pull/24459", "diff_url": "https://github.com/huggingface/transformers/pull/24459.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24459.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24458
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24458/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24458/comments
https://api.github.com/repos/huggingface/transformers/issues/24458/events
https://github.com/huggingface/transformers/issues/24458
1,772,871,881
I_kwDOCUB6oc5pq-DJ
24,458
audio classification example script RuntimeError on evaluation
{ "login": "adamkatav", "id": 13109136, "node_id": "MDQ6VXNlcjEzMTA5MTM2", "avatar_url": "https://avatars.githubusercontent.com/u/13109136?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamkatav", "html_url": "https://github.com/adamkatav", "followers_url": "https://api.github.com/users/adamkatav/followers", "following_url": "https://api.github.com/users/adamkatav/following{/other_user}", "gists_url": "https://api.github.com/users/adamkatav/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamkatav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamkatav/subscriptions", "organizations_url": "https://api.github.com/users/adamkatav/orgs", "repos_url": "https://api.github.com/users/adamkatav/repos", "events_url": "https://api.github.com/users/adamkatav/events{/privacy}", "received_events_url": "https://api.github.com/users/adamkatav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @adamkatav - do you have a reproducible code snippet we could use to emulate this behaviour?", "> Hey @adamkatav - do you have a reproducibl\r\ne code snippet we could use to emulate this behaviour?\r\n\r\nSure, it's a copy-paste from [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification).\r\nI'm also attaching the output file from my terminal. [hugging_face_error.txt](https://github.com/huggingface/transformers/files/11885751/hugging_face_error.txt)\r\n\r\nI re-ran the script after a reboot on a fresh official hugging face container with a RTX 2080 TI GPU.", "Hi, @adamkatav I was not reproduce the error you mentioned in the above messages, I'm on the same versions (for the most part) as mentioned.", "> Hi, @adamkatav I was not reproduce the error you mentioned in the above messages, I'm on the same versions (for the most part) as mentioned.\r\n\r\nThank you for the reply, might it be a docker issue? Because I got the same error on colab, both environments are in a docker container.", "Yes, the issue you encountered with the RuntimeError could potentially be related to the Docker environment. If you are experiencing the same error in Colab as well, it suggests that the issue might be related to the specific Docker setup or configuration you are using. Maybe try testing it outside the Docker container. ", "Hey @adamkatav - I'm also not able to reproduce the error using the example script. It trains for me as expected. Just as a precautionary measure, could you try installing `accelerate` / `transformers` at the last stable PyPi versions?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
### System Info accelerate==0.21.0.dev0 OS: Ubuntu 20.04 - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX 2080TI, also happened on A100 (colab) - Using distributed or parallel set-up in script?: no - The rest are from the latest hugging face pytorch docker image ### Who can help? @sanchit-gandhi @sgugger @albertvillanova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ` 693/6930 [08:55<1:18:59, 1.32it/s][INFO|trainer.py:3074] 2023-06-24 18:44:28,008 >> ***** Running Evaluation ***** [INFO|trainer.py:3076] 2023-06-24 18:44:28,008 >> Num examples = 5888 [INFO|trainer.py:3079] 2023-06-24 18:44:28,008 >> Batch size = 1 {'eval_loss': 2.8346564769744873, 'eval_accuracy': 0.2105978260869565, 'eval_runtime': 70.8074, 'eval_samples_per_second': 83.155, 'eval_steps_per_second': 83.155, 'epoch': 1.0} >Saving model checkpoint to wav2vec2-base-lang-id/checkpoint-693 [INFO|configuration_utils.py:458] 2023-06-24 18:45:38,818 >> Configuration saved in wav2vec2-base-lang-id/checkpoint-693/config.json [INFO|modeling_utils.py:1845] 2023-06-24 18:45:39,166 >> Model weights saved in wav2vec2-base-lang-id/checkpoint-693/pytorch_model.bin [INFO|feature_extraction_utils.py:377] 2023-06-24 18:45:39,166 >> Feature extractor saved in wav2vec2-base-lang-id/checkpoint-693/preprocessor_config.json Traceback (most recent call last): File "run_audio_classification.py", line 418, in <module> main() File "run_audio_classification.py", line 392, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/transformers/src/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/transformers/src/transformers/trainer.py", line 1850, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1913, in clip_grad_norm_ self.unscale_gradients() File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1876, in unscale_gradients self.scaler.unscale_(opt) File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ` ### Expected behavior I'd expect the script to train the model and finish successfully. Thank you very much :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24458/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24457
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24457/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24457/comments
https://api.github.com/repos/huggingface/transformers/issues/24457/events
https://github.com/huggingface/transformers/pull/24457
1,772,871,730
PR_kwDOCUB6oc5T0q6u
24,457
Generate: deprecation timeline for parameterization though the model config
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24457). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
MEMBER
null
# What does this PR do? We had a warning about deprecating the parameterization of `.generate()` through the model config, but there was no specific date. This PR makes it clear it will go away in `v4.32`. The extra parameters and code to allow the old and the new way of parameterizing `.generate()` to live together are causing a myriad of issues, so let's get rid of them :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24457/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24457", "html_url": "https://github.com/huggingface/transformers/pull/24457", "diff_url": "https://github.com/huggingface/transformers/pull/24457.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24457.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24456
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24456/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24456/comments
https://api.github.com/repos/huggingface/transformers/issues/24456/events
https://github.com/huggingface/transformers/pull/24456
1,772,850,100
PR_kwDOCUB6oc5T0mv0
24,456
Generate: `group_beam_search` requires `diversity_penalty>0.0`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
MEMBER
null
# What does this PR do? While revisiting `group_beam_search` to review https://github.com/huggingface/transformers/pull/24407, I noticed that we do not require `diversity_penalty` to be `>0.0`. If it is not `>0.0`, then `group_beam_search` degenerates to `beam_search` with `num_beams=num_beams/num_beam_groups` -- users exploring the method risk not seeing its potential. With this exception, we ensure the degeneration case does not happen (and possibly nudge the users towards the docs)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24456", "html_url": "https://github.com/huggingface/transformers/pull/24456", "diff_url": "https://github.com/huggingface/transformers/pull/24456.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24456.patch", "merged_at": 1687859199000 }
https://api.github.com/repos/huggingface/transformers/issues/24455
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24455/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24455/comments
https://api.github.com/repos/huggingface/transformers/issues/24455/events
https://github.com/huggingface/transformers/issues/24455
1,772,847,671
I_kwDOCUB6oc5pq4I3
24,455
Trajectory Transformer - NameError: name 'self' is not defined
{ "login": "km5ar", "id": 54015474, "node_id": "MDQ6VXNlcjU0MDE1NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/km5ar", "html_url": "https://github.com/km5ar", "followers_url": "https://api.github.com/users/km5ar/followers", "following_url": "https://api.github.com/users/km5ar/following{/other_user}", "gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}", "starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/km5ar/subscriptions", "organizations_url": "https://api.github.com/users/km5ar/orgs", "repos_url": "https://api.github.com/users/km5ar/repos", "events_url": "https://api.github.com/users/km5ar/events{/privacy}", "received_events_url": "https://api.github.com/users/km5ar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`self` is indeed not defined in your code. Please use the [forums](https://discuss.huggingface.co/) to debug your code :-)", "Yeah, but I was using the code directly from the tutorial, so that's the reason I raised this issue...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,691
1,691
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import TrajectoryTransformerModel import torch import numpy as np device = "cuda" from transformers import TrajectoryTransformerModel import torch model = TrajectoryTransformerModel.from_pretrained( "CarlCochet/trajectory-transformer-halfcheetah-medium-v2" ) model.to(device) model.eval() observations_dim, action_dim, batch_size = 17, 6, 256 seq_length = observations_dim + action_dim + 1 trajectories = torch.LongTensor([np.random.permutation(self.seq_length) for _ in range(batch_size)]).to( device ) targets = torch.LongTensor([np.random.permutation(self.seq_length) for _ in range(batch_size)]).to(device) outputs = model( trajectories, targets=targets, use_cache=True, output_attentions=True, output_hidden_states=True, return_dict=True, ) ### Expected behavior ![1687627979812](https://github.com/huggingface/transformers/assets/54015474/609d3e13-0091-40f3-815f-9cc28d7bc429)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24455/timeline
completed
null
null
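As the maintainer reply in issue 24455 notes, the `NameError` is a plain Python mistake in the snippet rather than a library bug: `self.seq_length` is referenced at module level, outside any class. A sketch of the corrected tensor construction, mirroring the values from the report (the model loading and forward call stay unchanged):

```python
import numpy as np
import torch

# Use a local variable instead of `self.seq_length`, which does not exist here.
observations_dim, action_dim, batch_size = 17, 6, 256
seq_length = observations_dim + action_dim + 1

device = "cuda" if torch.cuda.is_available() else "cpu"
trajectories = torch.LongTensor(
    [np.random.permutation(seq_length) for _ in range(batch_size)]
).to(device)
targets = torch.LongTensor(
    [np.random.permutation(seq_length) for _ in range(batch_size)]
).to(device)
```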
https://api.github.com/repos/huggingface/transformers/issues/24454
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24454/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24454/comments
https://api.github.com/repos/huggingface/transformers/issues/24454/events
https://github.com/huggingface/transformers/issues/24454
1,772,764,421
I_kwDOCUB6oc5pqj0F
24,454
v4.30-release run_ner gives datasets.builder.DatasetGenerationError
{ "login": "santoshcoder23", "id": 135539914, "node_id": "U_kgDOCBQsyg", "avatar_url": "https://avatars.githubusercontent.com/u/135539914?v=4", "gravatar_id": "", "url": "https://api.github.com/users/santoshcoder23", "html_url": "https://github.com/santoshcoder23", "followers_url": "https://api.github.com/users/santoshcoder23/followers", "following_url": "https://api.github.com/users/santoshcoder23/following{/other_user}", "gists_url": "https://api.github.com/users/santoshcoder23/gists{/gist_id}", "starred_url": "https://api.github.com/users/santoshcoder23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santoshcoder23/subscriptions", "organizations_url": "https://api.github.com/users/santoshcoder23/orgs", "repos_url": "https://api.github.com/users/santoshcoder23/repos", "events_url": "https://api.github.com/users/santoshcoder23/events{/privacy}", "received_events_url": "https://api.github.com/users/santoshcoder23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That sounds like an issue for `datasets`, not Transformers :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
### System Info transformers==v4.30-release pytorch latest datasets latest Ubuntu-20.0 ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner \ --do_train \ --do_eval Error: _raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset_ ### Expected behavior Evaluation should run on pre-trained model on conll2003 dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24454/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24453
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24453/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24453/comments
https://api.github.com/repos/huggingface/transformers/issues/24453/events
https://github.com/huggingface/transformers/pull/24453
1,772,580,505
PR_kwDOCUB6oc5Tzrp6
24,453
Generate: `min_tokens_to_keep` has to be `>= 1`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @gante!" ]
1,687
1,687
1,687
MEMBER
null
# What does this PR do? As pointed out by @njhill in [this comment](https://github.com/huggingface/transformers/pull/24111#issuecomment-1601824441), `min_tokens_to_keep` has to be `>=1`. When it is not, the sampling step will lead to numerical exceptions, as all tokens have `-float("inf")` as logits. This PR updates some of the checks, which were checking that it was `>=0`, and fixes the typical_p logits processor, which has the exact same issue as the one fixed in https://github.com/huggingface/transformers/pull/24111
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24453/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24453", "html_url": "https://github.com/huggingface/transformers/pull/24453", "diff_url": "https://github.com/huggingface/transformers/pull/24453.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24453.patch", "merged_at": 1687862903000 }
https://api.github.com/repos/huggingface/transformers/issues/24452
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24452/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24452/comments
https://api.github.com/repos/huggingface/transformers/issues/24452/events
https://github.com/huggingface/transformers/pull/24452
1,772,332,569
PR_kwDOCUB6oc5Ty2qD
24,452
Fix tpu_metrics_debug
{ "login": "cowanmeg", "id": 6570496, "node_id": "MDQ6VXNlcjY1NzA0OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/6570496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cowanmeg", "html_url": "https://github.com/cowanmeg", "followers_url": "https://api.github.com/users/cowanmeg/followers", "following_url": "https://api.github.com/users/cowanmeg/following{/other_user}", "gists_url": "https://api.github.com/users/cowanmeg/gists{/gist_id}", "starred_url": "https://api.github.com/users/cowanmeg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cowanmeg/subscriptions", "organizations_url": "https://api.github.com/users/cowanmeg/orgs", "repos_url": "https://api.github.com/users/cowanmeg/repos", "events_url": "https://api.github.com/users/cowanmeg/events{/privacy}", "received_events_url": "https://api.github.com/users/cowanmeg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Adding the `--tpu_metrics_debug` argument causes an error. Quick fix before the argument is deprecated. In `training_args.py` check if `self.debug` is None before appending the string since `self.debug` is now initialized to None instead of the empty string. Related to https://github.com/huggingface/transformers/pull/24033. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24452/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24452", "html_url": "https://github.com/huggingface/transformers/pull/24452", "diff_url": "https://github.com/huggingface/transformers/pull/24452.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24452.patch", "merged_at": 1687773548000 }
https://api.github.com/repos/huggingface/transformers/issues/24451
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24451/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24451/comments
https://api.github.com/repos/huggingface/transformers/issues/24451/events
https://github.com/huggingface/transformers/issues/24451
1,772,258,057
I_kwDOCUB6oc5pooMJ
24,451
is_torch_bf16_gpu_available does not check for AMD GPUs
{ "login": "cjekel", "id": 13884657, "node_id": "MDQ6VXNlcjEzODg0NjU3", "avatar_url": "https://avatars.githubusercontent.com/u/13884657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cjekel", "html_url": "https://github.com/cjekel", "followers_url": "https://api.github.com/users/cjekel/followers", "following_url": "https://api.github.com/users/cjekel/following{/other_user}", "gists_url": "https://api.github.com/users/cjekel/gists{/gist_id}", "starred_url": "https://api.github.com/users/cjekel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cjekel/subscriptions", "organizations_url": "https://api.github.com/users/cjekel/orgs", "repos_url": "https://api.github.com/users/cjekel/repos", "events_url": "https://api.github.com/users/cjekel/events{/privacy}", "received_events_url": "https://api.github.com/users/cjekel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We don't officially support AMD GPUs yet. This is coming soon as we get AMD GPUs to run our CI and check everything runs smoothly, so stay tuned!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
### System Info transformer 4.30.2 python 3.9.13.1 torch 2.0.1 rocm 5.4.2 AMD mi250x ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction AMD GPUs like the mi250 and mi250x support bf16. See https://www.amd.com/en/products/server-accelerators/instinct-mi250x `transformers.utils.import_utils.is_torch_bf16_gpu_available` returns `False` with mi250x. Additional inspection shows that the function does not check AMD gpus at all. The problem occurs because `torch.version.cuda=None` when using hip. Steps to reproduce: 1. have AMD GPU that supports bf16 2. The problem arises when calling `transformers.TrainingArguments` and using something like `--bf16 True`. You'll see something like this... ```python File lib/python3.9/site-packages/transformers/training_args.py:1297, in TrainingArguments.__post_init__(self) 1294 raise ValueError("Your setup doesn't support bf16/(cpu, tpu, neuroncore). You need torch>=1.10") 1295 elif not self.no_cuda and torch.cuda.is_available() and not is_torch_bf16_gpu_available(): 1296 # gpu -> 1297 raise ValueError( 1298 "Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0" 1299 ) 1301 if self.fp16 and self.bf16: 1302 raise ValueError("At most one of fp16 and bf16 can be True, but not both") ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0 ``` 3. It's a bit easier to call the function directly. ```python import transformers transformers.utils.import_utils.is_torch_bf16_gpu_available() ``` which returns `False` instead of `True`. ### Expected behavior `transformers.utils.import_utils.is_torch_bf16_gpu_available()` should check more than just Cuda to check if bf16 is available. A quick work around is to add ```python if torch.version.hip is not None: return True ``` to src/transformers/utils/import_utils.py. However, I don't know which AMD GPUs actually support bf16. It looks like mi200 does as well.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24451/timeline
completed
null
null
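A hedged sketch of the workaround proposed in issue 24451 above, written as a standalone helper; the real `is_torch_bf16_gpu_available` logic in transformers is more involved, and the blanket `return True` on ROCm is the reporter's own simplification, since not every AMD GPU supports bf16:

```python
import torch

def bf16_gpu_available() -> bool:
    # Rough sketch only -- not the actual transformers implementation.
    if not torch.cuda.is_available():
        return False
    if torch.version.hip is not None:
        # ROCm build: MI200-series accelerators (e.g. MI250/MI250X) support bf16,
        # but, as the report notes, this is not true of every AMD GPU.
        return True
    # CUDA build: same idea as the existing check (Ampere or newer, CUDA >= 11).
    return torch.cuda.is_bf16_supported()
```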
https://api.github.com/repos/huggingface/transformers/issues/24450
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24450/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24450/comments
https://api.github.com/repos/huggingface/transformers/issues/24450/events
https://github.com/huggingface/transformers/pull/24450
1,771,913,022
PR_kwDOCUB6oc5TxdE5
24,450
Update AlbertModel type annotation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Fixes type annotations. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24450/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24450", "html_url": "https://github.com/huggingface/transformers/pull/24450", "diff_url": "https://github.com/huggingface/transformers/pull/24450.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24450.patch", "merged_at": 1687773583000 }
https://api.github.com/repos/huggingface/transformers/issues/24449
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24449/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24449/comments
https://api.github.com/repos/huggingface/transformers/issues/24449/events
https://github.com/huggingface/transformers/issues/24449
1,771,890,836
I_kwDOCUB6oc5pnOiU
24,449
RuntimeError: unscale_() has already been called on this optimizer since the last update().
{ "login": "kunaldeo", "id": 441799, "node_id": "MDQ6VXNlcjQ0MTc5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/441799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kunaldeo", "html_url": "https://github.com/kunaldeo", "followers_url": "https://api.github.com/users/kunaldeo/followers", "following_url": "https://api.github.com/users/kunaldeo/following{/other_user}", "gists_url": "https://api.github.com/users/kunaldeo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kunaldeo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kunaldeo/subscriptions", "organizations_url": "https://api.github.com/users/kunaldeo/orgs", "repos_url": "https://api.github.com/users/kunaldeo/repos", "events_url": "https://api.github.com/users/kunaldeo/events{/privacy}", "received_events_url": "https://api.github.com/users/kunaldeo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 and @muellerzr ", "Hello, this is a duplicate issue. Please search the already existing ones. This is fixed via PR https://github.com/huggingface/transformers/pull/24415", "Yes this is fixed. Thanks." ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - `accelerate` version: 0.21.0.dev0 - `peft` version: 0.4.0.dev0 - Platform: Linux-6.3.9-zen1-1-zen-x86_64-with-glibc2.37 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?:Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Run LoRA training ```py trainer = Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=TrainingArguments( per_device_train_batch_size=4, auto_find_batch_size=True, gradient_accumulation_steps=32, warmup_steps=100, num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, fp16=True, logging_steps=1, evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no", save_strategy="steps", eval_steps=50 if VAL_SET_SIZE > 0 else None, save_steps=500, output_dir=OUTPUT_DIR, #output_dir=repository_id, save_total_limit=3, load_best_model_at_end=True if VAL_SET_SIZE > 0 else False, ddp_find_unused_parameters=False if ddp else None, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False old_state_dict = model.state_dict model.state_dict = ( lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict()) ).__get__(model, type(model)) if torch.__version__ >= "2" and sys.platform != 'win32': model = torch.compile(model) trainer.train(resume_from_checkpoint = False) ``` 2. Sometime after 1st epoch I run into the following error ```sh {'loss': 2.2014, 'learning_rate': 0.0002538461538461538, 'epoch': 0.99} {'loss': 2.24, 'learning_rate': 0.0002492307692307692, 'epoch': 1.0} {'loss': 2.2383, 'learning_rate': 0.0002446153846153846, 'epoch': 1.01} raceback (most recent call last):███████████████████████████████████▏ | 112/333 [42:21<1:21:32, 22.14s/it] File "/home/kunal/ml/train.py", line 234, in <module> trainer.train(resume_from_checkpoint = False) File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py", line 1530, in train return inner_training_loop( File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/utils/memory.py", line 132, in decorator return function(batch_size, *args, **kwargs) File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/transformers/trainer.py", line 1843, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1913, in clip_grad_norm_ self.unscale_gradients() File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1876, in unscale_gradients self.scaler.unscale_(opt) File "/home/kunal/miniconda3/envs/lora/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` This training works fine on `transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9`. 
### Expected behavior Training should succeed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24449/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24448
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24448/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24448/comments
https://api.github.com/repos/huggingface/transformers/issues/24448/events
https://github.com/huggingface/transformers/pull/24448
1,771,633,050
PR_kwDOCUB6oc5TwfRj
24,448
Improved keras imports
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Keras and they keep moving it around\r\n\r\nIt's tensor-flow ... that's probably the reason.", "I tested this with a spread of TF versions going back to 2.6. Even at 2.6, it was pretty hard to get an environment that worked with modern `transformers` - old TensorFlow keeps trying to use NumPy features that have been deprecated and deleted, but our modern libraries need a minimum version of NumPy to run at all, so there's actually only a very narrow window of NumPy versions that can even run both at once! I think going back to 2.5 or earlier would be very difficult, so I'm pretty comfortable with bumping our minimum version at this point.\r\n\r\nIn all versions I tested with this patch, our test suite runs well and the issue identified in #24437 is fixed, so this should be ready to go after it's reviewed!" ]
1,687
1,687
1,687
MEMBER
null
A sneaky bug was hiding in our Keras imports, where an import for `call_context` appeared to succeed on some TF versions, but actually got an older, unusable version of the function. This caused `build()` to behave improperly in some cases. I went on a quest to fix this, and generally clean up our version-specific imports for TensorFlow to stop things like this from happening in future. I also bumped the minimum version for TF to 2.6 (2.6 should be 2 years old by the time of our next release), and eliminated the version cap in our dependency table because TF 2.13 should also be fully supported now. This involved a partial rewrite of some code, where we checked for `KerasTensor` in a lot of places. However, this is an internal class for Keras and they keep moving it around, so trying to import it feels like a bad idea. Instead, I'm relying on `tf.is_tensor()`, which returns `True` for anything tensor-y, including symbolic tensors and Keras `Input` placeholders. Fixes #24437
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24448/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24448/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24448", "html_url": "https://github.com/huggingface/transformers/pull/24448", "diff_url": "https://github.com/huggingface/transformers/pull/24448.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24448.patch", "merged_at": 1687543774000 }
https://api.github.com/repos/huggingface/transformers/issues/24447
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24447/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24447/comments
https://api.github.com/repos/huggingface/transformers/issues/24447/events
https://github.com/huggingface/transformers/issues/24447
1,771,337,524
I_kwDOCUB6oc5plHc0
24,447
Error when trying to install transformers with Conda
{ "login": "sophiamaedler", "id": 15019107, "node_id": "MDQ6VXNlcjE1MDE5MTA3", "avatar_url": "https://avatars.githubusercontent.com/u/15019107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sophiamaedler", "html_url": "https://github.com/sophiamaedler", "followers_url": "https://api.github.com/users/sophiamaedler/followers", "following_url": "https://api.github.com/users/sophiamaedler/following{/other_user}", "gists_url": "https://api.github.com/users/sophiamaedler/gists{/gist_id}", "starred_url": "https://api.github.com/users/sophiamaedler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sophiamaedler/subscriptions", "organizations_url": "https://api.github.com/users/sophiamaedler/orgs", "repos_url": "https://api.github.com/users/sophiamaedler/repos", "events_url": "https://api.github.com/users/sophiamaedler/events{/privacy}", "received_events_url": "https://api.github.com/users/sophiamaedler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @LysandreJik ", "Hey, that seems to be a problem with a missing SSL package.\r\n\r\nCould you check if following these instructions fixes your issue? https://github.com/huggingface/transformers/issues/21805#issuecomment-1478161530", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Just ran into this as well. I think this is still a bug?\r\n\r\nOne question is why not default to `conda-forge::transformers` on the installation guide? Currently it suggests `huggingface::transformers` https://huggingface.co/docs/transformers/installation:\r\n\r\n<img width=\"395\" alt=\"Screenshot 2023-12-29 at 21 42 39\" src=\"https://github.com/huggingface/transformers/assets/7593028/d18b1c0f-a0cc-4d49-9f9e-d3c9214b6321\">\r\n\r\nbut even installing this in a clean conda environment results in this error due to libssl.\r\n\r\n(I think the only reason e.g., NVIDIA has a separate channels is because they need to upload binaries rather than source code, and conda-forge heavily discourages this). But since HF is all open-source, why not prioritize https://github.com/conda-forge/transformers-feedstock? Otherwise it will cause separate libraries for all dependencies to be installed which could cause a bunch of linking issues.\r\n\r\n", "Indeed @MilesCranmer, updating that line to recommend installing from conda-forge would be best.\r\n\r\nWould you like to open a PR updating it? " ]
1,687
1,704
1,690
NONE
null
### System Info - Platform: Linux-5.14.21-150400.24.41-default-x86_64-with-glibc2.10 - Python version: 3.8.17 - Huggingface_hub version: 0.15.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` # Install PyTorch According to Documentation for System conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia #install transformers according to documentation conda install -c huggingface transformers ``` Error message when trying to import transformers: ``` Python 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoImageProcessor, ConvNextModel, ConvNextConfig, ViTFeatureExtractor Traceback (most recent call last): File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1110, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module> from . 
import ( File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/mt5/__init__.py", line 40, in <module> from ..t5.tokenization_t5_fast import T5TokenizerFast File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 24, in <module> from ...tokenization_utils_fast import PreTrainedTokenizerFast File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 25, in <module> import tokenizers.pre_tokenizers as pre_tokenizers_fast File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: libssl.so.10: cannot open shared object file: No such file or directory The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1100, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/fs/home/maedler/.local/miniconda3/envs/ConvNext_pip/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1112, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.auto because of the following error (look up to see its traceback): libssl.so.10: cannot open shared object file: No such file or directory ``` If I first install tokenizers from Conda-forge and then afterwards install transformers with --no-update-deps I can use the package: ``` # Install PyTorch According to Documentation for System conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia #install transformers with workaround conda install tokenizers -c conda-forge Conda install -c huggingface 'transformers==4.26.0' --no-update-deps ``` This results in: ``` Python 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoImageProcessor, ConvNextModel, ConvNextConfig, ViTFeatureExtractor >>> ``` ### Expected behavior A working installation of transformer with the default tokenizers installation from Conda.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24447/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24446
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24446/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24446/comments
https://api.github.com/repos/huggingface/transformers/issues/24446/events
https://github.com/huggingface/transformers/pull/24446
1,771,213,469
PR_kwDOCUB6oc5TvDvt
24,446
fixes issue when saving fsdp via accelerate's FSDP plugin
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks again for this issue! I just finished training a largeish (7b param) model using this fix and have some questions. \r\nI noticed the directory where the model was meant to save looks a bit odd. While the training code seems to have finished without error, the directory contains some checkpoint directories and a directory called pytorch_model_0 (which has a bunch of .distcp files), but none of the files I previously would see in my trained model directories, like the config.json and .bin files. Is this expected save behavior? ", "Hello @amartino1, this is because it is now using the FSDP's recommended way of saving ckpts, see this: https://github.com/pytorch/pytorch/blob/e71ab214226af1f9dbded944e939c6447e0e8f09/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py#L59\r\n\r\nYou will only notice that if you are using `SHARDED_STATE_DICT` as the `fsdp_state_dict_type`. \r\n\r\nwith PR https://github.com/huggingface/transformers/pull/24591, it should save the whole model in transformers format as well as FSDP ckpt following what you have chosen as `fsdp_state_dict_type`. " ]
1,687
1,688
1,687
CONTRIBUTOR
null
# What does this PR do? 1. Fixes https://github.com/huggingface/transformers/issues/24057#issuecomment-1595152783 2. When using Accelerate's integration for FSDP, fsdp_plugin saves the optimizer state under various configs such as Full_Dict, Sharded_Dict ... properly. For other cases such as with FSDP-XLA, the trainer's behaviour is unchanged.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24446/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24446", "html_url": "https://github.com/huggingface/transformers/pull/24446", "diff_url": "https://github.com/huggingface/transformers/pull/24446.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24446.patch", "merged_at": 1687523637000 }
https://api.github.com/repos/huggingface/transformers/issues/24445
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24445/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24445/comments
https://api.github.com/repos/huggingface/transformers/issues/24445/events
https://github.com/huggingface/transformers/issues/24445
1,771,188,958
I_kwDOCUB6oc5pkjLe
24,445
LoRA is incompatible with DeepSpeed ZeRO3
{ "login": "Weiyun1025", "id": 47669167, "node_id": "MDQ6VXNlcjQ3NjY5MTY3", "avatar_url": "https://avatars.githubusercontent.com/u/47669167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Weiyun1025", "html_url": "https://github.com/Weiyun1025", "followers_url": "https://api.github.com/users/Weiyun1025/followers", "following_url": "https://api.github.com/users/Weiyun1025/following{/other_user}", "gists_url": "https://api.github.com/users/Weiyun1025/gists{/gist_id}", "starred_url": "https://api.github.com/users/Weiyun1025/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Weiyun1025/subscriptions", "organizations_url": "https://api.github.com/users/Weiyun1025/orgs", "repos_url": "https://api.github.com/users/Weiyun1025/repos", "events_url": "https://api.github.com/users/Weiyun1025/events{/privacy}", "received_events_url": "https://api.github.com/users/Weiyun1025/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, please refer this doc for the correct way of using PEFT + DeepSpeed: https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload", "> Hello, please refer this doc for the correct way of using PEFT + DeepSpeed: https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload\r\n\r\n Thank you for your response!\r\n\r\nI note that this doc is based on `accelerate`. However, my code is based on `transformers.Trainer`. Can you provide me any example to use PEFT + DeepSpeed with `transformers.Trainer` correctly?", "The following steps work for me:\r\n1. Create `TrainingArguments(..., deepspeed=\"ds_config_zero3.json\")`\r\n2. Load model with `from_pretrained()`\r\n3. Wrap it with `get_peft_model()`\r\n4. Run `Trainer.train()`\r\n\r\nFew important notes:\r\n1. You have to create `TrainingArguments` before initialising the model with Zero3 partitioning.\r\n2. If you use `TaskType.SEQ_CLS` task, `get_peft_model` will break the forward path. A quick workaround is recreate unpartitioned classification head after the model initialised with `deepspeed.zero.Init()`, i.e. after `from_pretrained()`.", "> The following steps work for me:\r\n> \r\n> 1. Create `TrainingArguments(..., deepspeed=\"ds_config_zero3.json\")`\r\n> 2. Load model with `from_pretrained()`\r\n> 3. Wrap it with `get_peft_model()`\r\n> 4. Run `Trainer.train()`\r\n> \r\n> Few important notes:\r\n> \r\n> 1. You have to create `TrainingArguments` before initialising the model with Zero3 partitioning.\r\n> 2. If you use `TaskType.SEQ_CLS` task, `get_peft_model` will break the forward path. A quick workaround is recreate unpartitioned classification head after the model initialised with `deepspeed.zero.Init()`, i.e. after `from_pretrained()`.\r\n\r\nThanks! And I would imagine you launch with `deepspeed`? Do you have to specify `ds_config_zero3.json` in CLI command now it is provided in TrainingArguments? ", "> Thanks! And I would imagine you launch with `deepspeed`? Do you have to specify `ds_config_zero3.json` in CLI command now it is provided in TrainingArguments?\r\n\r\nYes, I launch it with `deepspeed` and I do not specify the config in the command, only in the TrainingArguments.", "> The following steps work for me:\r\n> \r\n> 1. Create `TrainingArguments(..., deepspeed=\"ds_config_zero3.json\")`\r\n> 2. Load model with `from_pretrained()`\r\n> 3. Wrap it with `get_peft_model()`\r\n> 4. Run `Trainer.train()`\r\n> \r\n> Few important notes:\r\n> \r\n> 1. You have to create `TrainingArguments` before initialising the model with Zero3 partitioning.\r\n> 2. If you use `TaskType.SEQ_CLS` task, `get_peft_model` will break the forward path. A quick workaround is recreate unpartitioned classification head after the model initialised with `deepspeed.zero.Init()`, i.e. after `from_pretrained()`.\r\n\r\n@1ytic very useful explaination! Could you offer a example how to implement this quick workaround ? thx", "@1ytic I am getting this error while running LORA with zero 3 deepspeed.:\r\nSomething seems to have broken.\r\n\r\nCan you please explain this more clearly:\r\n\"If you use TaskType.SEQ_CLS task, get_peft_model will break the forward path. A quick workaround is recreate unpartitioned classification head after the model initialised with deepspeed.zero.Init(), i.e. 
after from_pretrained().\"\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 263, in <module>\r\n main()\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 259, in main\r\n training_function(args)\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 220, in training_function\r\n trainer.train()\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 263, in <module>\r\n main()\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 259, in main\r\n training_function(args)\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 220, in training_function\r\n return inner_training_loop(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n trainer.train()\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 263, in <module>\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 2665, in training_step\r\nreturn inner_training_loop(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n main()\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 259, in main\r\n training_function(args)\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 220, in training_function\r\n trainer.train()\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 2665, in training_step\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 263, in <module>\r\n self.accelerator.backward(loss)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1847,in backward\r\n return inner_training_loop(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n main()\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 259, in main\r\n training_function(args)\r\n File \"/home/ec2-user/SageMaker/final_training/lora_scripts/run_clm.py\", line 220, in training_function\r\n trainer.train()\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/utils/deepspeed.py\", line 167, in backward\r\n self.accelerator.backward(loss)self.engine.backward(loss, 
**kwargs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1847,in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)tr_loss_step = self.training_step(model, inputs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1923, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 2665, in training_step\r\n return inner_training_loop(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/utils/deepspeed.py\", line 167, in backward\r\n self.optimizer.backward(loss, retain_graph=retain_graph)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n self.engine.backward(loss, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py\", line 2080, in backward\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1923, in backward\r\n self.accelerator.backward(loss)tr_loss_step = self.training_step(model, inputs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1847,in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py\", line 2665, in training_step\r\n self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)self.optimizer.backward(loss, retain_graph=retain_graph)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py\", line 63, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs) scaled_loss.backward(retain_graph=retain_graph)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py\", line 2080, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/_tensor.py\", line 487, in backward\r\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/utils/deepspeed.py\", line 167, in backward\r\n torch.autograd.backward(self.engine.backward(loss, **kwargs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/__init__.py\", line 200,in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File 
\"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1923, in backward\r\n self.accelerator.backward(loss)Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1847,in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/function.py\", line 274,in apply\r\n return user_fn(self, *args)self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/checkpoint.py\", line 141, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py\", line 63, in backward\r\n outputs = ctx.run_function(*detached_inputs)\r\nscaled_loss.backward(retain_graph=retain_graph) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 681, in custom_forward\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/_tensor.py\", line 487, in backward\r\n self.optimizer.backward(loss, retain_graph=retain_graph)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)self.deepspeed_engine_wrapped.backward(loss, **kwargs)torch.autograd.backward(\r\n\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py\", line 2080, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/utils/deepspeed.py\", line 167, in backward\r\nreturn module(*inputs, output_attentions, None) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/__init__.py\", line 200, in backward\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n self.engine.backward(loss, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n ret_val = func(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/function.py\", line 274, in apply\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1923, in backward\r\n return user_fn(self, *args)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/checkpoint.py\", line 141, in backward\r\n outputs = ctx.run_function(*detached_inputs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 681, in custom_forward\r\n result = forward_call(*args, **kwargs)\r\nself.loss_scaler.backward(loss.float(), retain_graph=retain_graph) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 408, in forward\r\n\r\n File 
\"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py\", line 63, in backward\r\n scaled_loss.backward(retain_graph=retain_graph)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/_tensor.py\", line 487, in backward\r\n return module(*inputs, output_attentions, None)self.optimizer.backward(loss, retain_graph=retain_graph)hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n torch.autograd.backward(\r\nret_val = func(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/__init__.py\", line 200, in backward\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py\", line 2080, in backward\r\n Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/function.py\", line 274,in apply\r\n return user_fn(self, *args)\r\nresult = forward_call(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/checkpoint.py\", line 141, in backward\r\n\r\nresult = forward_call(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 305, in forward\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 408, in forward\r\n outputs = ctx.run_function(*detached_inputs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 681, in custom_forward\r\n query_states = self.q_proj(hidden_states)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py\", line 63, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n return module(*inputs, output_attentions, None)\r\nscaled_loss.backward(retain_graph=retain_graph) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/_tensor.py\", line 487, in backward\r\n torch.autograd.backward(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/__init__.py\", line 200,in backward\r\n result = hook(self, args)\r\n File 
\"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n result = forward_call(*args, **kwargs)Variable._execution_engine.run_backward( # Calls into the C++ engine torun the backward passret_val = func(*args, **kwargs)\r\n\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 305, in forward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/autograd/function.py\", line 274,in apply\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 392, in _pre_forward_module_hook\r\n result = forward_call(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 408, in forward\r\n query_states = self.q_proj(hidden_states)return user_fn(self, *args)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/checkpoint.py\", line 141, in backward\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\nself.pre_sub_module_forward_function(module)\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn( File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 505, in pre_sub_module_forward_function\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n outputs = ctx.run_function(*detached_inputs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 681, in custom_forward\r\n param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115,in decorate_context\r\n return module(*inputs, output_attentions, None)result = hook(self, args)\r\nreturn func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 284, in fetch_sub_module\r\nresult = forward_call(*args, **kwargs)\r\nret_val = func(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 305, in forward\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 392, in _pre_forward_module_hook\r\n self.__all_gather_params(params_to_fetch, forward)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n query_states = 
self.q_proj(hidden_states)ret_val = func(*args, **kwargs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\nself.pre_sub_module_forward_function(module) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 428, in __all_gather_params\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 505, in pre_sub_module_forward_function\r\n result = forward_call(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 408, in forward\r\n self.__all_gather_params_(nonquantized_params, forward, quantize=self.zero_quantized_weights)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 446, in __all_gather_params_\r\n param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115,in decorate_context\r\n handle = partitioned_params[0].all_gather_coalesced(partitioned_params,result = hook(self, args)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15,in wrapped_fn\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\nreturn func(*args, **kwargs)\r\n ret_val = func(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 284, in fetch_sub_module\r\n\r\nret_val = func(*args, **kwargs) File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py\", line 1155, in all_gather_coalesced\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 392, in _pre_forward_module_hook\r\n result = forward_call(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py\", line 305, in forward\r\n self.__all_gather_params(params_to_fetch, forward)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 428, in __all_gather_params\r\n self.pre_sub_module_forward_function(module)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 505, in pre_sub_module_forward_function\r\n query_states = 
self.q_proj(hidden_states)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n self.__all_gather_params_(nonquantized_params, forward, quantize=self.zero_quantized_weights)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 446, in __all_gather_params_\r\n dtype=get_only_unique_item(p.ds_tensor.dtypeparam_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/utils.py\", line 842,in get_only_unique_item\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115,in decorate_context\r\n handle = partitioned_params[0].all_gather_coalesced(partitioned_params,\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 284, in fetch_sub_module\r\nret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py\", line 1155, in all_gather_coalesced\r\nresult = hook(self, args)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n raise RuntimeError(f\"expected there to be only one unique element in {items}\")\r\nRuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7f8eb7f2c510>\r\n ret_val = func(*args, **kwargs)self.__all_gather_params(params_to_fetch, forward)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 392, in _pre_forward_module_hook\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 428, in __all_gather_params\r\n self.pre_sub_module_forward_function(module)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 505, in pre_sub_module_forward_function\r\n dtype=get_only_unique_item(p.ds_tensor.dtypeself.__all_gather_params_(nonquantized_params, forward, quantize=self.zero_quantized_weights)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/utils.py\", line 842,in get_only_unique_item\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 446, in __all_gather_params_\r\n param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)\r\n File 
\"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n handle = partitioned_params[0].all_gather_coalesced(partitioned_params,ret_val = func(*args, **kwargs)\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115,in decorate_context\r\n raise RuntimeError(f\"expected there to be only one unique element in {items}\")\r\nret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py\", line 1155, in all_gather_coalesced\r\nRuntimeError : return func(*args, **kwargs)expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7f3bd8933ca0>\r\n\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 284, in fetch_sub_module\r\n self.__all_gather_params(params_to_fetch, forward)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 428, in __all_gather_params\r\n dtype=get_only_unique_item(p.ds_tensor.dtype\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/utils.py\", line 842,in get_only_unique_item\r\n self.__all_gather_params_(nonquantized_params, forward, quantize=self.zero_quantized_weights)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 446, in __all_gather_params_\r\n handle = partitioned_params[0].all_gather_coalesced(partitioned_params,\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n raise RuntimeError(f\"expected there to be only one unique element in {items}\")\r\n ret_val = func(*args, **kwargs)RuntimeError\r\n: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7fd9e6c0b840> File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py\", line 1155, in all_gather_coalesced\r\n\r\n dtype=get_only_unique_item(p.ds_tensor.dtype\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/deepspeed/runtime/utils.py\", line 842,in get_only_unique_item\r\n raise RuntimeError(f\"expected there to be only one unique element in {items}\")\r\nRuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7fad61111000>\r\n 0%| ", "> The following steps work for me:\r\n> \r\n> 1. Create `TrainingArguments(..., deepspeed=\"ds_config_zero3.json\")`\r\n> 2. Load model with `from_pretrained()`\r\n> 3. Wrap it with `get_peft_model()`\r\n> 4. Run `Trainer.train()`\r\n> \r\n> Few important notes:\r\n> \r\n> 1. 
You have to create `TrainingArguments` before initialising the model with Zero3 partitioning.\r\n> 2. If you use `TaskType.SEQ_CLS` task, `get_peft_model` will break the forward path. A quick workaround is recreate unpartitioned classification head after the model initialised with `deepspeed.zero.Init()`, i.e. after `from_pretrained()`.\r\n\r\nCould you explain a bit more on `get_peft_model` breaks the forward path under `SEQ_CLS`? Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello! I'm facing the same issue with `deepspeed==0.12.4`, stage3, no cpu offloading, `transformers==4.36.2` and `peft==0.7.1`:\r\n```\r\nAttributeError: 'PeftModelForCausalLM' object has no attribute \r\n'_ds_child_entered'\r\n....\r\n....\r\n File \".../site-packages/peft/peft_model.py\", line 528, in __getattr__\r\n return super().__getattr__(name) # defer to nn.Module's logic\r\n File \".../torch/nn/modules/module.py\", line 1695, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'PeftModelForCausalLM' object has no attribute 'base_model'\r\n``` \r\nand eventually\r\n```\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\nI'm using `pytorch lightning`:\r\n```python\r\ntrainer = Trainer(\r\n ...\r\n strategy = DeepSpeedStrategy(stage=3)\r\n)\r\n\r\nclass Module(LightningModule):\r\n def configure_model(self) -> None:\r\n deepspeed_config = self.trainer.strategy.config\r\n self.dschf = HfDeepSpeedConfig(deepspeed_config)\r\n model = AutoModelForCausalLM.from_pretrained(...)\r\n model = get_peft_model(\r\n model,\r\n LoraConfig(\r\n task_type=TaskType.CAUSAL_LM,\r\n inference_mode=False,\r\n target_modules=target_modules,\r\n r=48,\r\n lora_alpha=16,\r\n lora_dropout=0.0,\r\n ),\r\n )\r\n\r\n```" ]
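To make the ordering described in the workaround above concrete, here is a minimal sketch of the sequence the commenters recommend: build `TrainingArguments` (carrying the ZeRO-3 DeepSpeed config) first, then call `from_pretrained()`, then wrap with `get_peft_model()`, and only then run `Trainer.train()`. The checkpoint path, LoRA hyperparameters, config path and dataset below are placeholders for illustration, not values taken from the issue.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, TaskType, get_peft_model

# 1. TrainingArguments first, so the ZeRO-3 context exists before from_pretrained()
training_args = TrainingArguments(
    output_dir="./outputs/debug",
    deepspeed="./configs/default_offload_opt_param_zero3.json",  # placeholder path
)

# 2. Load the base model; under ZeRO-3 its parameters are partitioned at this point
model = AutoModelForCausalLM.from_pretrained("path/to/base-model")  # placeholder checkpoint

# 3. Wrap with LoRA *after* from_pretrained(), not inside the model's own __init__
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=("q_proj", "k_proj", "v_proj", "o_proj"),
)
model = get_peft_model(model, peft_config)

# 4. Train as usual (train_dataset is user-provided and omitted here)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```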
1,687
1,704
1,699
NONE
null
### System Info `pytorch==2.0.0, transformers==4.28.0, peft==0.2.0` When use LoRA to wrap model in `__init__` and enable deepspeed ZeRO3, i will get the following errors: ``` ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:287 in __getattr__ │ │ │ │ 284 │ def __getattr__(self, name: str): │ │ 285 │ │ """Forward missing attributes to the wrapped module.""" │ │ 286 │ │ try: │ │ ❱ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/torch/nn/modules/module.py:1614 in __getattr__ │ │ │ │ 1611 │ │ │ modules = self.__dict__['_modules'] │ │ 1612 │ │ │ if name in modules: │ │ 1613 │ │ │ │ return modules[name] │ │ ❱ 1614 │ │ raise AttributeError("'{}' object has no attribute '{}'".form │ │ 1615 │ │ │ type(self).__name__, name)) │ │ 1616 │ │ │ 1617 │ def __setattr__(self, name: str, value: Union[Tensor, 'Module']) │ ╰──────────────────────────────────────────────────────────────────────────────╯ AttributeError: 'PeftModelForCausalLM' object has no attribute '_ds_child_entered' During handling of the above exception, another exception occurred: ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:287 in __getattr__ │ │ │ │ 284 │ def __getattr__(self, name: str): │ │ 285 │ │ """Forward missing attributes to the wrapped module.""" │ │ 286 │ │ try: │ │ ❱ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/torch/nn/modules/module.py:1614 in __getattr__ │ │ │ │ 1611 │ │ │ modules = self.__dict__['_modules'] │ │ 1612 │ │ │ if name in modules: │ │ 1613 │ │ │ │ return modules[name] │ │ ❱ 1614 │ │ raise AttributeError("'{}' object has no attribute '{}'".form │ │ 1615 │ │ │ type(self).__name__, name)) │ │ 1616 │ │ │ 1617 │ def __setattr__(self, name: str, value: Union[Tensor, 'Module']) │ ╰──────────────────────────────────────────────────────────────────────────────╯ AttributeError: 'PeftModelForCausalLM' object has no attribute 'base_model' ``` It seems like that deepspeed begins to partition parameters before `PeftModelForCausalLM` finish its `__init__`, since it can not get the attribute `base_model`. It's also notable that this error leads to a infinite recursion, since `PeftModel` catch the AttributeError when trying to get the attribute `base_model` while this attribute does not exist so the AttributeError will be raised again and again. 
``` ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /mnt/petrelfs/wangweiyun/projects/region_wise_model/main_clip_v6.py:120 in │ │ <module> │ │ │ │ 117 │ │ 118 │ │ 119 if __name__ == '__main__': │ │ ❱ 120 │ main() │ │ 121 │ │ │ │ /mnt/petrelfs/wangweiyun/projects/region_wise_model/main_clip_v6.py:42 in │ │ main │ │ │ │ 39 │ │ │ 40 │ if config.use_window_attn: │ │ 41 │ │ state_dict = preprocess_state_dict(model_args.model_name_or_pa │ │ ❱ 42 │ │ model = HuskyForCLIP.from_pretrained(model_args.model_name_or_ │ │ 43 │ else: │ │ 44 │ │ model = HuskyForCLIP.from_pretrained(model_args.model_name_or_ │ │ 45 │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/transformers/modeling_utils.py:2629 in from_pretrained │ │ │ │ 2626 │ │ │ init_contexts.append(init_empty_weights()) │ │ 2627 │ │ │ │ 2628 │ │ with ContextManagers(init_contexts): │ │ ❱ 2629 │ │ │ model = cls(config, *model_args, **model_kwargs) │ │ 2630 │ │ │ │ 2631 │ │ # Check first if we are `from_pt` │ │ 2632 │ │ if use_keep_in_fp32_modules: │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/deepspeed/runtime/zero/partition_parameters.py:382 in wrapper │ │ │ │ 379 │ │ │ │ │ is_child_module = True │ │ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │ │ 381 │ │ │ │ │ │ ❱ 382 │ │ │ │ f(module, *args, **kwargs) │ │ 383 │ │ │ │ │ │ 384 │ │ │ │ if is_child_module: │ │ 385 │ │ │ │ │ # child's __init__ is done, now we can run a sing │ │ │ │ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_clip │ │ _ablate.py:1472 in __init__ │ │ │ │ 1469 # shared align token + Both flatten + soft prompt (best) │ │ 1470 class HuskyForCLIPV6(WindowRegionHusky): │ │ 1471 │ def __init__(self, config: WindowRegionHuskyConfig): │ │ ❱ 1472 │ │ super().__init__(config) │ │ 1473 │ │ │ │ 1474 │ │ self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0 │ │ 1475 │ │ self.text_projection = nn.Parameter(torch.empty(self.language │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/deepspeed/runtime/zero/partition_parameters.py:382 in wrapper │ │ │ │ 379 │ │ │ │ │ is_child_module = True │ │ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │ │ 381 │ │ │ │ │ │ ❱ 382 │ │ │ │ f(module, *args, **kwargs) │ │ 383 │ │ │ │ │ │ 384 │ │ │ │ if is_child_module: │ │ 385 │ │ │ │ │ # child's __init__ is done, now we can run a sing │ │ │ │ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_wind │ │ ow.py:47 in __init__ │ │ │ │ 44 │ │ │ │ │ self.vision_model.encoder.layers[idx] = WindowBLIP │ │ 45 │ │ │ │ │ 46 │ │ │ if self.config.lora: │ │ ❱ 47 │ │ │ │ self.wrap_lora() │ │ 48 │ │ │ if self.config.lora_vision: │ │ 49 │ │ │ │ self.wrap_lora_vision() │ │ 50 │ │ self.post_init() │ │ │ │ /mnt/petrelfs/wangweiyun/projects/region_wise_model/custom_models/husky_src/ │ │ husky_chat.py:436 in wrap_lora │ │ │ │ 433 │ │ │ lora_dropout=lora_dropout, │ │ 434 │ │ │ target_modules=target_modules │ │ 435 │ │ ) │ │ ❱ 436 │ │ self.language_model = get_peft_model(self.language_model, peft │ │ 437 │ │ self.config.lora = True │ │ 438 │ │ self.language_model.print_trainable_parameters() │ │ 439 │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/mapping.py:145 in get_peft_model │ │ │ │ 142 │ │ peft_config = _prepare_lora_config(peft_config, model_config) │ │ 143 │ else: │ │ 144 │ │ peft_config = 
_prepare_prompt_learning_config(peft_config, mod │ │ ❱ 145 │ return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](mod │ │ 146 │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/deepspeed/runtime/zero/partition_parameters.py:377 in wrapper │ │ │ │ 374 │ │ │ │ print_rank_0(f'Before initializing {module.__class__. │ │ 375 │ │ │ │ │ │ 376 │ │ │ │ is_child_module = False │ │ ❱ 377 │ │ │ │ if not hasattr(module, "_ds_child_entered"): │ │ 378 │ │ │ │ │ # child's __init__ was called, since parents all │ │ 379 │ │ │ │ │ is_child_module = True │ │ 380 │ │ │ │ │ setattr(module, "_ds_child_entered", True) │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:289 in __getattr__ │ │ │ │ 286 │ │ try: │ │ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ ❱ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ 291 │ def forward(self, *args, **kwargs): │ │ 292 │ │ """ │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:289 in __getattr__ │ │ │ │ 286 │ │ try: │ │ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ ❱ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ 291 │ def forward(self, *args, **kwargs): │ │ 292 │ │ """ │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:289 in __getattr__ │ │ │ │ 286 │ │ try: │ │ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ ❱ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ 291 │ def forward(self, *args, **kwargs): │ │ 292 │ │ """ │ │ │ │ /mnt/petrelfs/wangweiyun/miniconda3/envs/recognize_anything/lib/python3.9/si │ │ te-packages/peft/peft_model.py:289 in __getattr__ │ │ │ │ 286 │ │ try: │ │ 287 │ │ │ return super().__getattr__(name) # defer to nn.Module's l │ │ 288 │ │ except AttributeError: │ │ ❱ 289 │ │ │ return getattr(self.base_model, name) │ │ 290 │ │ │ 291 │ def forward(self, *args, **kwargs): │ │ 292 │ │ """ │ ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction environments: `pytorch==2.0.0, transformers==4.28.0, peft==0.2.0` slurm launch command: `srun --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=8 python -u bug_unit_test.py --output_dir ./outputs/debug --deepspeed ./configs/default_offload_opt_param_zero3.json` deepspeed config to reproduce: ```json { "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` code to reproduce: ```python import os import subprocess import torch from transformers import ( HfArgumentParser, TrainingArguments, PreTrainedModel, LlamaModel, LlamaConfig ) from peft import LoraConfig, TaskType, get_peft_model class BugModel(PreTrainedModel): config_class = LlamaConfig def __init__(self, config): super().__init__(config) self.model = LlamaModel(config) self.wrap_lora() # init code for other modules, which is not important to reproduce this bug pass def wrap_lora( self, r=16, lora_alpha=32, lora_dropout=0.05, target_modules=("q_proj", "k_proj", "v_proj", "o_proj"), ): peft_config = LoraConfig( task_type=TaskType.CAUSAL_LM, inference_mode=False, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, target_modules=target_modules ) self.model = get_peft_model(self.model, peft_config) self.model.print_trainable_parameters() def init_distributed_mode(): if 'SLURM_PROCID' in os.environ: rank = int(os.environ['SLURM_PROCID']) local_rank = rank % torch.cuda.device_count() world_size = int(os.environ["SLURM_NTASKS"]) local_size = int(os.environ["SLURM_NTASKS_PER_NODE"]) if "MASTER_PORT" not in os.environ: port = 22110 print(f'MASTER_PORT = {port}') os.environ["MASTER_PORT"] = str(port) node_list = os.environ["SLURM_NODELIST"] addr = subprocess.getoutput(f"scontrol show hostname {node_list} | head -n1") if "MASTER_ADDR" not in os.environ: os.environ["MASTER_ADDR"] = addr os.environ['RANK'] = str(rank) os.environ['LOCAL_RANK'] = str(local_rank) os.environ['LOCAL_WORLD_SIZE'] = str(local_size) os.environ['WORLD_SIZE'] = str(world_size) parser = HfArgumentParser(TrainingArguments) init_distributed_mode() training_args = parser.parse_args_into_dataclasses() model_name_or_path = '/mnt/petrelfs/share_data/wangweiyun/share_hf/vicuna-7b' model = BugModel.from_pretrained(model_name_or_path) # Error! print('finish') ``` ### Expected behavior I expect to wrap the model with LoRA during `__init__` successfully when i enable ZeRO3.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24445/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24444
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24444/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24444/comments
https://api.github.com/repos/huggingface/transformers/issues/24444/events
https://github.com/huggingface/transformers/pull/24444
1,771,170,382
PR_kwDOCUB6oc5Tu6Wm
24,444
[`Trainer`] Fix `.to` call on 4bit models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts thanks! \r\nAbsolutely yes, for reference, here is how we set that attribute: https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/modeling_utils.py#L2922 ", "I can confirm that this fix resolves the error I was hitting with 4-bit models:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/fsx/lewis/git/h4/scripts/evaluation/run_rm_eval.py\", line 275, in <module>\r\n main()\r\n File \"/fsx/lewis/git/h4/scripts/evaluation/run_rm_eval.py\", line 164, in main\r\n trainer = RewardTrainer(\r\n File \"/fsx/lewis/git/h4/src/h4/training/trainer.py\", line 26, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/trainer.py\", line 506, in __init__\r\n self._move_model_to_device(model, args.device)\r\n File \"/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/trainer.py\", line 747, in _move_model_to_device\r\n model = model.to(device)\r\n File \"/fsx/lewis/miniconda/envs/h4/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 1889, in to\r\n raise ValueError(\r\nValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\r\n```\r\n\r\nThanks for the fast fix @younesbelkada !!" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Currently the `Trainer` fails during initialization in some scenarios with 4-bit models. Device placement is already correctly skipped for 8-bit models, but it needs to be skipped for 4-bit models too, since the `.to` operation is not supported for them either. This PR adds a patch for that case. cc @amyeroberts @lewtun
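For reference, here is a rough sketch of the kind of guard this change describes, i.e. skipping the `Trainer`'s device placement when the model is quantized. This is only an illustration, not the actual diff; the flag names are the attributes that `from_pretrained()` sets on quantized models (the comments above link to the relevant `modeling_utils.py` line) and should be read as assumptions here.

```python
import torch

def should_move_model_to_device(model: torch.nn.Module) -> bool:
    # from_pretrained() marks quantized models with these flags; .to(device) raises
    # for both 8-bit and 4-bit models, so device placement must be skipped for them.
    # The exact flag names are assumed, not taken from this PR's diff.
    is_quantized = getattr(model, "is_loaded_in_8bit", False) or getattr(
        model, "is_loaded_in_4bit", False
    )
    return not is_quantized
```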
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24444/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24444", "html_url": "https://github.com/huggingface/transformers/pull/24444", "diff_url": "https://github.com/huggingface/transformers/pull/24444.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24444.patch", "merged_at": 1687520105000 }
https://api.github.com/repos/huggingface/transformers/issues/24443
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24443/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24443/comments
https://api.github.com/repos/huggingface/transformers/issues/24443/events
https://github.com/huggingface/transformers/pull/24443
1,771,087,084
PR_kwDOCUB6oc5TuoQy
24,443
Update `JukeboxConfig.from_pretrained`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? `JukeboxConfig.from_pretrained` also needs an update after #24306 to avoid the error `TypeError: __init__() got an unexpected keyword argument 'token'`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24443/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24443", "html_url": "https://github.com/huggingface/transformers/pull/24443", "diff_url": "https://github.com/huggingface/transformers/pull/24443.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24443.patch", "merged_at": 1687525253000 }
https://api.github.com/repos/huggingface/transformers/issues/24442
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24442/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24442/comments
https://api.github.com/repos/huggingface/transformers/issues/24442/events
https://github.com/huggingface/transformers/issues/24442
1,770,965,741
I_kwDOCUB6oc5pjsrt
24,442
Wrong special tokens using XLM-RoBERTa's tokenizer for question answering
{ "login": "severinsimmler", "id": 16133277, "node_id": "MDQ6VXNlcjE2MTMzMjc3", "avatar_url": "https://avatars.githubusercontent.com/u/16133277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severinsimmler", "html_url": "https://github.com/severinsimmler", "followers_url": "https://api.github.com/users/severinsimmler/followers", "following_url": "https://api.github.com/users/severinsimmler/following{/other_user}", "gists_url": "https://api.github.com/users/severinsimmler/gists{/gist_id}", "starred_url": "https://api.github.com/users/severinsimmler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severinsimmler/subscriptions", "organizations_url": "https://api.github.com/users/severinsimmler/orgs", "repos_url": "https://api.github.com/users/severinsimmler/repos", "events_url": "https://api.github.com/users/severinsimmler/events{/privacy}", "received_events_url": "https://api.github.com/users/severinsimmler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, I am not sure where your expectations come from, but in the task of `sequence_classification`, you are not simply concatenating two sentences. If you have a look [here](https://github.com/facebookresearch/XLM/blob/cd281d32612d145c6742b4d3f048f80df8669c30/generate-embeddings.ipynb#L130), an original colab formats the sentences the same way. ", "No that's indeed expected behaviour, RoBERTa models use 2 special tokens in between context and question, unlike BERT.\r\n\r\nSee here: https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/models/roberta/tokenization_roberta.py#L346", "Got it, thanks a lot for clarifying :) 🙏 " ]
1,687
1,687
1,687
CONTRIBUTOR
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-6.2.0-20-generic-x86_64-with-glibc2.37 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Tokenizing a question and its context with XLM-RoBERTa's tokenizer: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") question = "This is a question?" context = "This is the context." inputs = tokenizer(question, context) tokenizer.decode(inputs["input_ids"]) ``` returns something like this: ``` <s> This is a question?</s></s> This is the context.</s> ``` i.e. with _two_ SEP tokens between question and context. Is this expected behavior? Shouldn't it be separated by only one `</sep>` or even `</sep><sep>`? ### Expected behavior I'd expect the tokenizer to return: ``` <s> This is a question?</s><s> This is the context.</s> ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24442/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24441
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24441/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24441/comments
https://api.github.com/repos/huggingface/transformers/issues/24441/events
https://github.com/huggingface/transformers/issues/24441
1,770,760,302
I_kwDOCUB6oc5pi6hu
24,441
Calling the tokenizer modifies the tokenizer object
{ "login": "vikigenius", "id": 12724810, "node_id": "MDQ6VXNlcjEyNzI0ODEw", "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikigenius", "html_url": "https://github.com/vikigenius", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "repos_url": "https://api.github.com/users/vikigenius/repos", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Most likely the issue lies in Fast Tokenizers here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/tokenization_utils_fast.py#L319-L388\r\n\r\nI don't actually see anyplace where the original strategy is restored.\r\n\r\nBecause this snippet\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\np1 = t._tokenizer.padding\r\ntext = \"This is an example text\"\r\nttext = t(text, max_length=256, padding=\"max_length\", truncation=True)\r\np2 = t._tokenizer.padding\r\n```\r\n\r\np1 and p2 are different. i.e the value of padding changes.\r\n", "Hey! This is actually expected. When you save a tokenizer, the `init_kwargs` that were last used are saved along. \r\nIf you initialize the model with `from_slow = True`, then this will be saved. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,687
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-6.1.31_1-x86_64-with-glibc2.36 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Very simple reproduction here ```python from transformers import AutoTokenizer from datasets.utils.py_utils import dumps # Huggingface datasets t = AutoTokenizer.from_pretrained('bert-base-uncased') t.save_pretrained("tok1") th1 = hash(dumps(t)) text = "This is an example text" ttext = t(text, max_length=512, padding="max_length", truncation=True) t.save_pretrained("tok2") th2 = hash(dumps(t)) assert th1 == th2 # Assertion Error ``` The actual difference can be found if you try to save the tokenizer after calling it. Diff the tokenizer.json and you can see that the keys "padding" and "truncation" got updated. `diff tok1/tokenizer.json tok2/tokenizer.json` produces an actual diff ### Expected behavior The tokenizer object should not change just because it was called with the padding parameters.
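One possible workaround for the hashing use case above is to snapshot the fast backend's padding/truncation state before the call and restore it afterwards, so the serialized `tokenizer.json` matches the original again. This is a sketch only: it relies on the private `_tokenizer` attribute (as the earlier comment does) and only covers the common case where no padding/truncation was configured beforehand.

```python
from transformers import AutoTokenizer

t = AutoTokenizer.from_pretrained("bert-base-uncased")

# snapshot the backend strategies before calling the tokenizer
pad_before, trunc_before = t._tokenizer.padding, t._tokenizer.truncation

_ = t("This is an example text", max_length=512, padding="max_length", truncation=True)

# restore the previous (unset) strategies so saving/hashing gives the same result
if pad_before is None:
    t._tokenizer.no_padding()
if trunc_before is None:
    t._tokenizer.no_truncation()
```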
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24441/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24441/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24440
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24440/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24440/comments
https://api.github.com/repos/huggingface/transformers/issues/24440/events
https://github.com/huggingface/transformers/pull/24440
1,770,747,173
PR_kwDOCUB6oc5TtjGg
24,440
Fix typo
{ "login": "siryuon", "id": 28976334, "node_id": "MDQ6VXNlcjI4OTc2MzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28976334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siryuon", "html_url": "https://github.com/siryuon", "followers_url": "https://api.github.com/users/siryuon/followers", "following_url": "https://api.github.com/users/siryuon/following{/other_user}", "gists_url": "https://api.github.com/users/siryuon/gists{/gist_id}", "starred_url": "https://api.github.com/users/siryuon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siryuon/subscriptions", "organizations_url": "https://api.github.com/users/siryuon/orgs", "repos_url": "https://api.github.com/users/siryuon/repos", "events_url": "https://api.github.com/users/siryuon/events{/privacy}", "received_events_url": "https://api.github.com/users/siryuon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the fix!" ]
1,687
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Fix typo (funcionality -> functionality) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24440/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24440", "html_url": "https://github.com/huggingface/transformers/pull/24440", "diff_url": "https://github.com/huggingface/transformers/pull/24440.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24440.patch", "merged_at": 1687522868000 }
https://api.github.com/repos/huggingface/transformers/issues/24439
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24439/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24439/comments
https://api.github.com/repos/huggingface/transformers/issues/24439/events
https://github.com/huggingface/transformers/issues/24439
1,770,721,007
I_kwDOCUB6oc5piw7v
24,439
AttributeError: 'QuantLinear' object has no attribute 'weight'
{ "login": "sigmareaver", "id": 6249501, "node_id": "MDQ6VXNlcjYyNDk1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/6249501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sigmareaver", "html_url": "https://github.com/sigmareaver", "followers_url": "https://api.github.com/users/sigmareaver/followers", "following_url": "https://api.github.com/users/sigmareaver/following{/other_user}", "gists_url": "https://api.github.com/users/sigmareaver/gists{/gist_id}", "starred_url": "https://api.github.com/users/sigmareaver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmareaver/subscriptions", "organizations_url": "https://api.github.com/users/sigmareaver/orgs", "repos_url": "https://api.github.com/users/sigmareaver/repos", "events_url": "https://api.github.com/users/sigmareaver/events{/privacy}", "received_events_url": "https://api.github.com/users/sigmareaver/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there. The script you are using is not one we have in Transformers, and we also do not have an object `QuantLinear`, so I'm really unsure why you are reporting this here?", "Forgive me, I do not understand Python object mechanisms well, but I thought the following line was the error:\r\n`File \"~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 319, in forward\r\n isinstance(self.wo.weight, torch.Tensor)`\r\nI come from a C++ background, so my logic was that transformers is at fault because it's trying to access a variable on a class that does no exist. But of course, a Python object is nothing like a C++ class.\r\n\r\nI am told that QuantLinear should have a `weight` attribute. So I am thinking maybe the model object is malformed? In that case I should report on the original project. Forgive my misunderstanding.", "As I said before, we do not use that class (`QuantLinear`) anywhere in Transformers. So this comes from your script making modifications to the model that do not work. You should raise the issue in the repo where you found that script.", "Hey! Don’t know if this is still useful.\r\nI was working with GPTQ for my custom T5 model. Got a similar bug saying ‘quantlinear’ has no attribute ’weight’.\r\n\r\nWas able to solve the issue by making a new conda env and installing only the bare minimum packages required for GPTQ. \r\nOur guess is, some other package(unwanted) was inhibiting a class definition used by the code.\r\n" ]
1,687
1,693
1,687
NONE
null
### System Info Python = 3.9.10 Transformers = 4.30.0.dev0 PyTorch = 2.0.1 Model = Google/flan-ul2 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Model quantized using `qwopqwop200/GPTQ-for-Llama` on the `t5` branch, using the following command: ``` python t5.py ../full-models/flan-ul2 wikitext2 --nsamples 256 --wbits 4 --act-order --groupsize 128 --save ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt ``` When performing benchmark using the following command (also applies to inference): ``` python t5.py ../full-models/flan-ul2 wikitext2 --load ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt --wbits 4 --groupsize 128 --benchmark --benchmark_mode mmlu ``` The following error occurs: ``` Traceback (most recent call last): File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 752, in <module> mmlu_benchmark(model, tokenizer, args) File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 542, in mmlu_benchmark cors, acc, probs = mmlu_eval(args, subject, model, tokenizer, dev_df, test_df, (idx,len(subjects))) File "~/anaconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/mnt/Storage/ai-dev/t5-gptq/t5.py", line 473, in mmlu_eval logits = model( File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1683, in forward encoder_outputs = self.encoder( File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1090, in forward layer_outputs = layer_module( File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 753, in forward hidden_states = self.layer[-1](hidden_states) File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward forwarded_states = self.DenseReluDense(forwarded_states) File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 319, in forward isinstance(self.wo.weight, torch.Tensor) File "~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'QuantLinear' object has no attribute 'weight' ``` According to my limited understanding, `QuantLinear` is a PyTorch class, and the error is occurring in `transformers`. ### Expected behavior Successfully performing benchmark, inference, etc. of the 4-bit GPTQ flan-ul2 model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24439/timeline
completed
null
null
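The failure mode in the report above can be reproduced without GPTQ at all, because `torch.nn.Module.__getattr__` raises `AttributeError` for any attribute that is not a plain attribute, registered parameter, buffer, or submodule. Below is a minimal sketch; `QuantLinearStub` is a hypothetical stand-in for GPTQ's `QuantLinear` (which stores packed buffers such as `qweight`/`scales` rather than a `weight` parameter), not the real class.

```python
import torch
from torch import nn


class QuantLinearStub(nn.Module):
    """Hypothetical stand-in for a GPTQ-style QuantLinear: it registers
    packed buffers but never a `weight` parameter."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.register_buffer("qweight", torch.zeros(out_features, in_features, dtype=torch.int32))
        self.register_buffer("scales", torch.ones(out_features))

    def forward(self, x):
        return x  # the dequantized matmul is omitted; not the point of the sketch


wo = QuantLinearStub(16, 16)
try:
    isinstance(wo.weight, torch.Tensor)  # the same check as modeling_t5.py line 319
except AttributeError as err:
    print(err)  # 'QuantLinearStub' object has no attribute 'weight'
```

Any script that swaps T5's `wo` projection for such a module therefore has to either expose a `weight` property on the quantized layer or avoid the dtype check, which is consistent with the maintainer's point that the problem lies in the quantization script rather than in Transformers.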
https://api.github.com/repos/huggingface/transformers/issues/24438
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24438/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24438/comments
https://api.github.com/repos/huggingface/transformers/issues/24438/events
https://github.com/huggingface/transformers/issues/24438
1,770,571,065
I_kwDOCUB6oc5piMU5
24,438
Problem with Deepspeed integration
{ "login": "karths8", "id": 47289950, "node_id": "MDQ6VXNlcjQ3Mjg5OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karths8", "html_url": "https://github.com/karths8", "followers_url": "https://api.github.com/users/karths8/followers", "following_url": "https://api.github.com/users/karths8/following{/other_user}", "gists_url": "https://api.github.com/users/karths8/gists{/gist_id}", "starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karths8/subscriptions", "organizations_url": "https://api.github.com/users/karths8/orgs", "repos_url": "https://api.github.com/users/karths8/repos", "events_url": "https://api.github.com/users/karths8/events{/privacy}", "received_events_url": "https://api.github.com/users/karths8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 ", "Hello, this isn't an issue with DeepSpeed integration. The issue is this: \r\n```\r\nImportError: /root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory\r\n...\r\n\r\nRuntimeError: Error building extension 'cpu_adam'\r\n```", "Hi, @karths8 \r\n\r\nYou can try `rm -rf ~/.cache/torch_extensions/` first.\r\n\r\nRelated discussion: #14520", "> rm -rf ~/.cache/torch_extensions/\r\n\r\nThis does not seem to work for me. The root of the problem lies in `fatal error: curand_kernel.h: No such file or directory`. If there are any insights on how to solve this issue please let me know. Any help is greatly appreciated!", "This isn't an integration issue like pacman100 said. See this: https://github.com/microsoft/DeepSpeed/issues/1846\r\nLooks like an issue with the DeepSpeed pip package, I recommend installing it via conda", "> This isn't an integration issue like pacman100 said. See this: [microsoft/DeepSpeed#1846](https://github.com/microsoft/DeepSpeed/issues/1846) Looks like an issue with the DeepSpeed pip package, I recommend installing it via conda\r\n\r\nThanks! I fixed it using [this](https://github.com/microsoft/DeepSpeed/issues/3794#issuecomment-1616059430) " ]
1,687
1,688
1,688
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am using the WizardCoder [training script](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/src/train_wizardcoder.py) to further fine-tune the model on some examples that I have using DeepSpeed integration. I have followed their instructions given [here](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0#fine-tuning) to fine-tune the model and I am getting the following error: ``` datachat_env) [email protected]:~/Custom-LLM$ sh train.sh [2023-06-23 00:36:25,039] [WARNING] [runner.py:191:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only. [2023-06-23 00:36:25,077] [INFO] [runner.py:541:main] cmd = /root/anaconda3/envs/datachat_env/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None /root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py --model_name_or_path /root/Custom-LLM/WizardCoder-15B-V1.0 --data_path /root/Custom-LLM/data.json --output_dir /root/Custom-LLM/WC-Checkpoint --num_train_epochs 3 --model_max_length 512 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 4 --evaluation_strategy no --save_strategy steps --save_steps 50 --save_total_limit 2 --learning_rate 2e-5 --warmup_steps 30 --logging_steps 2 --lr_scheduler_type cosine --report_to tensorboard --gradient_checkpointing True --deepspeed /root/Custom-LLM/Llama-X/src/configs/deepspeed_config.json --fp16 True [2023-06-23 00:36:26,992] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]} [2023-06-23 00:36:26,993] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=4, node_rank=0 [2023-06-23 00:36:26,993] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]}) [2023-06-23 00:36:26,993] [INFO] [launch.py:247:main] dist_world_size=4 [2023-06-23 00:36:26,993] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3 [2023-06-23 00:36:29,650] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2023-06-23 00:36:55,124] [INFO] [partition_parameters.py:454:__exit__] finished initializing model with 15.82B parameters [2023-06-23 00:37:12,845] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs [2023-06-23 00:37:12,968] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs [2023-06-23 00:37:12,969] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs [2023-06-23 00:37:12,970] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root... 
Detected CUDA files, patching ldflags Emitting ninja build file /root/.cache/torch_extensions/py311_cu118/cpu_adam/build.ninja... Building extension module cpu_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o FAILED: cpu_adam.o c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o In file included from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/cpu_adam.h:19, from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp:6: /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/custom_cuda_layers.h:12:10: fatal error: curand_kernel.h: No such file or directory 12 | #include <curand_kernel.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root... Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root... Using /root/.cache/torch_extensions/py311_cu118 as PyTorch extensions root... 
[2/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o FAILED: custom_cuda_kernel.cuda.o /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/TH -isystem /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/anaconda3/envs/datachat_env/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -c /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o In file included from /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu:6: /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/csrc/includes/custom_cuda_layers.h:12:10: fatal error: curand_kernel.h: No such file or directory 12 | #include <curand_kernel.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. ninja: build stopped: subcommand failed. 
Traceback (most recent call last): File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build subprocess.run( File "/root/anaconda3/envs/datachat_env/lib/python3.11/subprocess.py", line 571, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module> train() File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train trainer.train() File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( ^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize engine = DeepSpeedEngine(args=args, ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer basic_optimizer = self._configure_basic_optimizer(model_parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ self.ds_opt_adam = CPUAdamBuilder().load() ^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load return self.jit_load(verbose) ^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load Loading extension module cpu_adam... 
op_module = load(name=self.name, ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load Traceback (most recent call last): File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module> return _jit_compile( ^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile train() File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train trainer.train() File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train _write_ninja_file_and_build_library( File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library _run_ninja_build( File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop raise RuntimeError(message) from e RuntimeError: Error building extension 'cpu_adam' Loading extension module cpu_adam... deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( ^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init Traceback (most recent call last): File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module> deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) train() File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ trainer.train() File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train engine = DeepSpeedEngine(args=args, ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop basic_optimizer = self._configure_basic_optimizer(model_parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( ^^ ^optimizer = DeepSpeedCPUAdam(model_parameters,^ ^^ ^ ^ ^ ^ ^ ^ ^ ^ ^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) self.ds_opt_adam = CPUAdamBuilder().load() ^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^ File 
"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load ^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize return self.jit_load(verbose) engine = DeepSpeedEngine(args=args, ^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__ self._configure_optimizer(optimizer, model_parameters) op_module = load(name=self.name, File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load basic_optimizer = self._configure_basic_optimizer(model_parameters) ^^^^^^^^^^^^^ ^return _jit_compile(^ ^^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile ^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, return _import_module_from_library(name, build_directory, is_python_module) ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ ^^^^^^^^^^^^^^^^^^^^^^^^ ^self.ds_opt_adam = CPUAdamBuilder().load()^ ^^^^^^^^^ ^ ^ ^ ^ ^ ^ ^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library ^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load return self.jit_load(verbose) ^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load module = importlib.util.module_from_spec(spec) ^^^^^^^^^^^^^^^^ ^op_module = load(name=self.name,^ ^^^^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^ File "<frozen importlib._bootstrap>", line 573, in module_from_spec ^^ File "<frozen importlib._bootstrap_external>", line 1233, in create_module ^^ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed ^^ImportError^: ^/root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory^ ^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load return _jit_compile( ^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile return _import_module_from_library(name, build_directory, is_python_module) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library module = importlib.util.module_from_spec(spec) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 573, in module_from_spec File "<frozen 
importlib._bootstrap_external>", line 1233, in create_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed ImportError: /root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory Loading extension module cpu_adam... Traceback (most recent call last): File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 247, in <module> train() File "/root/Custom-LLM/WizardLM/WizardCoder/src/train_wizardcoder.py", line 241, in train trainer.train() File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1741, in _inner_training_loop deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( ^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/deepspeed.py", line 378, in deepspeed_init deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/__init__.py", line 165, in initialize engine = DeepSpeedEngine(args=args, ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 308, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1162, in _configure_optimizer basic_optimizer = self._configure_basic_optimizer(model_parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1218, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ self.ds_opt_adam = CPUAdamBuilder().load() ^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 445, in load return self.jit_load(verbose) ^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/op_builder/builder.py", line 480, in jit_load op_module = load(name=self.name, ^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load return _jit_compile( ^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile return _import_module_from_library(name, build_directory, is_python_module) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library module = importlib.util.module_from_spec(spec) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 573, in module_from_spec File "<frozen importlib._bootstrap_external>", line 1233, in create_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed ImportError: /root/.cache/torch_extensions/py311_cu118/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory Exception 
ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7fcaec4a89a0> Traceback (most recent call last): File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ self.ds_opt_adam.destroy_adam(self.opt_id) ^^^^^^^^^^^^^^^^ AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7fbf4e6409a0> Traceback (most recent call last): File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ self.ds_opt_adam.destroy_adam(self.opt_id) ^^^^^^^^^^^^^^^^ AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f9ce61b09a0> Traceback (most recent call last): File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f6c2bf109a0> Traceback (most recent call last): File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ self.ds_opt_adam.destroy_adam(self.opt_id) ^^^^^^^^^^^^^^^^ AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' ``` ### Expected behavior Expect the model to use the deepspeed config file and run training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24438/timeline
completed
null
null
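The thread above points at two distinct causes: a CUDA installation that lacks the toolkit headers (`curand_kernel.h`) and a stale `~/.cache/torch_extensions` directory left over from a failed build. The sketch below is a hypothetical pre-flight check, not part of DeepSpeed or Transformers; the `CUDA_HOME` fallback and the paths are assumptions and may differ on your machine.

```python
import os
import shutil

cuda_home = os.environ.get("CUDA_HOME", "/usr/local/cuda")  # assumed default
ext_cache = os.path.expanduser("~/.cache/torch_extensions")

# 1) The JIT build of cpu_adam needs the full CUDA toolkit headers, not just
#    the runtime that ships with the PyTorch wheel.
curand_header = os.path.join(cuda_home, "include", "curand_kernel.h")
if not os.path.isfile(curand_header):
    print(f"missing {curand_header}: install a complete CUDA toolkit "
          "(or point CUDA_HOME at one) before launching DeepSpeed")

# 2) A previous failed build can leave the cache without cpu_adam.so, which
#    produces the 'cannot open shared object file' ImportError seen above.
if os.path.isdir(ext_cache):
    shutil.rmtree(ext_cache)  # same effect as `rm -rf ~/.cache/torch_extensions/`
```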
https://api.github.com/repos/huggingface/transformers/issues/24437
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24437/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24437/comments
https://api.github.com/repos/huggingface/transformers/issues/24437/events
https://github.com/huggingface/transformers/issues/24437
1,770,534,812
I_kwDOCUB6oc5piDec
24,437
TFPreTrainedModel.build breaks pytorch PreTrainedModel.from_pretrained(from_tf=True)
{ "login": "winston-zillow", "id": 26907141, "node_id": "MDQ6VXNlcjI2OTA3MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26907141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/winston-zillow", "html_url": "https://github.com/winston-zillow", "followers_url": "https://api.github.com/users/winston-zillow/followers", "following_url": "https://api.github.com/users/winston-zillow/following{/other_user}", "gists_url": "https://api.github.com/users/winston-zillow/gists{/gist_id}", "starred_url": "https://api.github.com/users/winston-zillow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/winston-zillow/subscriptions", "organizations_url": "https://api.github.com/users/winston-zillow/orgs", "repos_url": "https://api.github.com/users/winston-zillow/repos", "events_url": "https://api.github.com/users/winston-zillow/events{/privacy}", "received_events_url": "https://api.github.com/users/winston-zillow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@winston-zillow thanks for the bug report! We're investigating now.", "Hi @winston-zillow, I did some investigation and I can't reproduce the issue! At my end `RobertaModel.from_pretrained(\"roberta-base\", from_tf=True)` or `RobertaForSequenceClassification.from_pretrained(\"roberta-base\", from_tf=True)` both work correctly.\r\n\r\nThis might have something to do with the specific checkpoint you're using. Can you give me some code that reproduces the issue using a checkpoint on the HuggingFace Hub, or if not, can you upload the checkpoint you're using so I can try to figure this one out?", "Wait, I was able to trigger the bug by switching my TensorFlow version! Investigating this now.", "Update: The bug is not caused by `build()`, but by faulty imports from the deprecated `tf.python.keras` repo. As a workaround for now, you can update your version of TensorFlow to 2.11 or newer, which should solve the bug for you. I'm working on a PR which should fix this issue for all TF versions >= 2.6, and bump our minimum supported TF version to 2.6 as well.", "@winston-zillow A PR is in that should resolve this! If you want to try it before it's merged and report your experiences, you can use\r\n```\r\npip install git+https://github.com/huggingface/transformers.git@improved_keras_imports\r\n```", "PR has now been merged! You can now get it just by installing from `main`:\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nIt'll also be included in the next release of transformers. Thanks again for filing the issue, and please feel free to comment or reopen it if the PR doesn't resolve your problem! Our test suite normally catches things like this, but the specific combination of older TF versions and TF -> PT crossloading slipped through, so the bug report is greatly appreciated.", "@Rocketknight1 Thanks for the quick fix! " ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.10.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The `build()` overridden method introduced recently breaks `PreTrainedModel.from_pretrained(from_tf=True)`. This can be reproduced by the following line of codes: ```python model = RobertaModelForSequenceClassification.from_pretrained(path, from_tf=True) ``` console prints out: ``` All TF 2.0 model weights were used when initializing RobertaForSequenceClassification. Some weights of RobertaForSequenceClassification were not initialized from the TF 2.0 model and are newly initialized: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', ... all 6 layers... ``` After some digging, I believe the recursive call to `__call__` in the `build()` causes the names of all TF weights to be prefixed twice with the model's keras name (instead of once,) e.g. `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/...`. The `from_pretrained(from_tf=True)` works according to the following steps: 1. create a PT model 2. create a TF model 3. build the TF model by calling `tf_model(tf_mode.dummy_inputs, training=False)` 1. calling `tf_model._call_` 2. enter name scope of the model, e.g. `tf_roberta_for_sequence_classification` 3. figure it has not been built because `built=False` 4. call `tf_model.build` 5. the overridden `TFPreTrainedModel.build` then set `built=True` 6. calls `self._call__` (i.e. `tf_model.__call__`) again 7. **enter name scope of the model again**, e.g. e.g. => `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification` 8. proceed to call the `tf_model.call` 9. call the layer’s `_call_` 10. add variables with name e.g. `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/roberta/...` 4. load TF weights to TF models 5. map TF weight names to PT weight names by removing the **first** prefix of the TF variable names and copy over the weights. => fail to map TF names to PT names due to the double prefix e.g. 
=> `tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification` The hacky workaround is to override the `build` method ourselves so that it does not call `__call__`, e.g.: ```python class _FixedTFRobertaForSequenceClassification(transformers.TFRobertaForSequenceClassification): def build(self, input_shape=None): self.built = True if id(transformers.TFRobertaForSequenceClassification) != id(_FixedTFRobertaForSequenceClassification): print('fixing TFRobertaForSequenceClassification to', _FixedTFRobertaForSequenceClassification.__name__) transformers.TFRobertaForSequenceClassification = _FixedTFRobertaForSequenceClassification model = RobertaForSequenceClassification.from_pretrained(language_model_path, from_tf=True) #### console prints out All TF 2.0 model weights were used when initializing RobertaForSequenceClassification. All the weights of RobertaForSequenceClassification were initialized from the TF 2.0 model ``` The double naming can also be seen by just creating the submodel: ```python config = RobertaConfig.from_pretrained(language_model_path) tf_model = TFRobertaForSequenceClassification(config) print(tf_model._name) tf_model.weights[10] # => notice the double prefix in the weight variable names ``` There should be a better fix. CC @Rocketknight1 ### Expected behavior 1. TF submodel variable/weight names should not be double-prefixed 2. `from_pretrained(from_tf=True)` should work.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24437/timeline
completed
null
null
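Whether a given TensorFlow/Transformers combination produces the doubled name scope described above can be checked directly before attempting a TF→PT crossload. The sketch below is only an illustration: it uses the public `roberta-base` checkpoint instead of the reporter's local one, and the extra forward pass on the dummy inputs may be unnecessary on versions where the weights are already created at construction time.

```python
from transformers import RobertaConfig, TFRobertaForSequenceClassification

config = RobertaConfig.from_pretrained("roberta-base")
tf_model = TFRobertaForSequenceClassification(config)
tf_model(tf_model.dummy_inputs, training=False)  # make sure the weights exist

# On affected versions, weight names look like
# "tf_roberta_for_sequence_classification/tf_roberta_for_sequence_classification/roberta/..."
prefix = tf_model.name + "/"
doubled = [w.name for w in tf_model.weights if w.name.startswith(prefix + prefix)]
print(f"{len(doubled)} of {len(tf_model.weights)} weight names carry a doubled prefix")
```

Per the maintainer's comments, the count should be zero on TF 2.11+ or with the fix installed from `main`; any non-zero count explains why the TF→PT name mapping fails and the PyTorch weights come out newly initialized.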
https://api.github.com/repos/huggingface/transformers/issues/24436
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24436/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24436/comments
https://api.github.com/repos/huggingface/transformers/issues/24436/events
https://github.com/huggingface/transformers/pull/24436
1,770,488,222
PR_kwDOCUB6oc5TswE4
24,436
[llama] Fix comments in weights converter
{ "login": "weimingzha0", "id": 38259546, "node_id": "MDQ6VXNlcjM4MjU5NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/38259546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weimingzha0", "html_url": "https://github.com/weimingzha0", "followers_url": "https://api.github.com/users/weimingzha0/followers", "following_url": "https://api.github.com/users/weimingzha0/following{/other_user}", "gists_url": "https://api.github.com/users/weimingzha0/gists{/gist_id}", "starred_url": "https://api.github.com/users/weimingzha0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weimingzha0/subscriptions", "organizations_url": "https://api.github.com/users/weimingzha0/orgs", "repos_url": "https://api.github.com/users/weimingzha0/repos", "events_url": "https://api.github.com/users/weimingzha0/events{/privacy}", "received_events_url": "https://api.github.com/users/weimingzha0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,687
1,687
1,687
CONTRIBUTOR
null
Explain the reason to clone the tensor. The original comment doesn't explain much about why we need to clone. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24436/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24436", "html_url": "https://github.com/huggingface/transformers/pull/24436", "diff_url": "https://github.com/huggingface/transformers/pull/24436.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24436.patch", "merged_at": 1687480734000 }
https://api.github.com/repos/huggingface/transformers/issues/24435
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24435/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24435/comments
https://api.github.com/repos/huggingface/transformers/issues/24435/events
https://github.com/huggingface/transformers/pull/24435
1,770,469,793
PR_kwDOCUB6oc5TssNl
24,435
🌐 [i18n-KO] Translated `tflite.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "번역 문장 모두 좋습니다. 앞서 반영된 내용 외에 추가 의견 없습니다! 👍 ", "May you please review this PR? 😄 \r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,687
1,687
1,687
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `tflite.mdx` file of the documentation to Korean 😄 Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- Team PseudoLab, may you please review this PR? --> @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24435/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24435", "html_url": "https://github.com/huggingface/transformers/pull/24435", "diff_url": "https://github.com/huggingface/transformers/pull/24435.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24435.patch", "merged_at": 1687868322000 }
https://api.github.com/repos/huggingface/transformers/issues/24434
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24434/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24434/comments
https://api.github.com/repos/huggingface/transformers/issues/24434/events
https://github.com/huggingface/transformers/pull/24434
1,770,393,324
PR_kwDOCUB6oc5Tsb9o
24,434
Replace python random with torch.rand to enable dynamo.export
{ "login": "BowenBao", "id": 9376104, "node_id": "MDQ6VXNlcjkzNzYxMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9376104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BowenBao", "html_url": "https://github.com/BowenBao", "followers_url": "https://api.github.com/users/BowenBao/followers", "following_url": "https://api.github.com/users/BowenBao/following{/other_user}", "gists_url": "https://api.github.com/users/BowenBao/gists{/gist_id}", "starred_url": "https://api.github.com/users/BowenBao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BowenBao/subscriptions", "organizations_url": "https://api.github.com/users/BowenBao/orgs", "repos_url": "https://api.github.com/users/BowenBao/repos", "events_url": "https://api.github.com/users/BowenBao/events{/privacy}", "received_events_url": "https://api.github.com/users/BowenBao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> You are touching multiple Flax model which shouldn't depend on torch, could you revert that?\r\n\r\nNice catch! Done.", "_The documentation is not available anymore as the PR was closed or merged._", "@BowenBao Does this really solve the export problem. We are seeing export issues here - https://github.com/pytorch/pytorch/issues/107587\r\n\r\nIf one peeks at the tensor value for the conditional, its a legit dynamic control flow. We might have to use `torch.where` or `torch.cond`.", "@anijain2305 it does for inference export, since the actual condition is short circuited by `self.training` being False. Without the change, `random.uniform(0, 1)` leads to a graph break, although the value is unused." ]
1,687
1,692
1,687
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related and Fixes https://github.com/pytorch/pytorch/issues/102794 TL;DR dynamo graph breaks on python `random.uniform(0, 1)`. The graph break can be prevented by replacing with `torch.randn([])`. Example repro script ```python import torch import torch._dynamo from transformers import AutoTokenizer, BartForCausalLM tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base") model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False) inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) torch._dynamo.export(model, return_dict=False, **inputs) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24434/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24434", "html_url": "https://github.com/huggingface/transformers/pull/24434", "diff_url": "https://github.com/huggingface/transformers/pull/24434.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24434.patch", "merged_at": 1687522641000 }
https://api.github.com/repos/huggingface/transformers/issues/24433
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24433/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24433/comments
https://api.github.com/repos/huggingface/transformers/issues/24433/events
https://github.com/huggingface/transformers/issues/24433
1,770,331,387
I_kwDOCUB6oc5phRz7
24,433
Decoding error while using DataCollatorForSeq2Seq
{ "login": "Pavloveuge", "id": 49618087, "node_id": "MDQ6VXNlcjQ5NjE4MDg3", "avatar_url": "https://avatars.githubusercontent.com/u/49618087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavloveuge", "html_url": "https://github.com/Pavloveuge", "followers_url": "https://api.github.com/users/Pavloveuge/followers", "following_url": "https://api.github.com/users/Pavloveuge/following{/other_user}", "gists_url": "https://api.github.com/users/Pavloveuge/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavloveuge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavloveuge/subscriptions", "organizations_url": "https://api.github.com/users/Pavloveuge/orgs", "repos_url": "https://api.github.com/users/Pavloveuge/repos", "events_url": "https://api.github.com/users/Pavloveuge/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavloveuge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, you need to replace the labels indices that are at -100 to be able to decode them. The -100 indicates to PyTorch the corresponding token should be ignored during the loss computation, this is not a bug.\r\n\r\nAlso please use the [forums](https://discuss.huggingface.co/) for questions like this.", "OK, thank you so much.", "Hey @Pavloveuge , I had the same problem \"OverflowError: out of range integral type conversion attempted\", How do I solve it?", "\r\nHi, look at this [comment](https://github.com/huggingface/transformers/issues/22634#issuecomment-1595885334)", "I saw that. I'm finetuning the araT5 model, and I can't understand how to replace -100 in my code. Could help me please? @Pavloveuge \r\n", "Can you share minimal code snippet?\r\nI think, you can try:\r\n```\r\ncollator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=tokenizer.pad_token_id)\r\n```", "I tried it but the error was not solved. \r\nthis screenshot of the error,\r\n![image](https://github.com/huggingface/transformers/assets/53126908/fbb222a6-7403-45fe-819e-41b1d5f1438a)\r\n\r\nin this line, the error occurs\r\n![image](https://github.com/huggingface/transformers/assets/53126908/b761417b-0c42-4df1-92a6-798e257cf7a8)\r\n![image](https://github.com/huggingface/transformers/assets/53126908/a56ae390-0aa6-40ff-89ad-444dd19f4b73)\r\nspecifically, this line is causing the error, where the preds containing -100, as you can see in the first image\r\n![image](https://github.com/huggingface/transformers/assets/53126908/4df01146-b70c-4048-9cdf-52f2afb61d19)\r\n\r\n\r\n", "@Pavloveuge , Also the error occurs in the evaluation and prediction stages", "In initialize of your Seq2SeqTrainer you pass `data_collator`, try to initialize him something like this:\r\n```\r\ndata_collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=tokenizer.pad_token_id)\r\n```\r\n", "Thank you @Pavloveuge, the error is solved by adding ` \r\nif data_args.ignore_pad_token_for_loss:\r\n # Replace -100 in the labels as we can't decode them.\r\n preds = np.where(preds != -100, preds, tokenizer.pad_token_id) ` \r\nin **_compute_mertic_** function before tokenizer.batch_decode for perds:\r\n![image](https://github.com/huggingface/transformers/assets/53126908/ebe3b40f-2aac-45b8-a660-021c476ffd0c)\r\n" ]
1,687
1,697
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no> ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When I decode data from DataCollatorForSeq2Seq, I get OverflowError with fast tokenizer and TypeError with default tokenizer. Code example: ``` model_name = "facebook/bart-base" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) data_collator = DataCollatorForSeq2Seq(tokenizer, model=None) texts = ["text" * 5, "text" * 10] labels = ["label" * 5, "label" * 10] features = [ { "input_ids": tokenizer(i)['input_ids'], "labels": tokenizer(j)['input_ids'] } for i,j in zip(texts, labels) ] result = data_collator(features) print(result["labels"][0]) print(tokenizer.decode(result["labels"][0], skip_special_tokens=True)) ``` Stack trace in case `use_fast=False`: ``` TypeError Traceback (most recent call last) [<ipython-input-7-14a1328c21fe>](https://localhost:8080/#) in <cell line: 18>() 16 result = data_collator(features) 17 print(result["labels"][0]) ---> 18 print(tokenizer.decode(result["labels"][0], skip_special_tokens=True)) 2 frames [/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 3507 token_ids = to_py_obj(token_ids) 3508 -> 3509 return self._decode( 3510 token_ids=token_ids, 3511 skip_special_tokens=skip_special_tokens, [/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py](https://localhost:8080/#) in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, spaces_between_special_tokens, **kwargs) 947 current_sub_text.append(token) 948 if current_sub_text: --> 949 sub_texts.append(self.convert_tokens_to_string(current_sub_text)) 950 951 if spaces_between_special_tokens: [/usr/local/lib/python3.10/dist-packages/transformers/models/bart/tokenization_bart.py](https://localhost:8080/#) in convert_tokens_to_string(self, tokens) 305 def convert_tokens_to_string(self, tokens): 306 """Converts a sequence of tokens (string) in a single string.""" --> 307 text = "".join(tokens) 308 text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) 309 return text TypeError: sequence item 9: expected str instance, NoneType found ``` Stack trace in case `use_fast=True`: ``` OverflowError Traceback (most recent call last) [<ipython-input-8-d0724246272d>](https://localhost:8080/#) in <cell line: 18>() 16 result = data_collator(features) 17 print(result["labels"][0]) ---> 18 print(tokenizer.decode(result["labels"][0], skip_special_tokens=True)) 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 3507 token_ids = to_py_obj(token_ids) 3508 -> 3509 return self._decode( 3510 token_ids=token_ids, 3511 
skip_special_tokens=skip_special_tokens, [/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs) 544 if isinstance(token_ids, int): 545 token_ids = [token_ids] --> 546 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) 547 548 clean_up_tokenization_spaces = ( OverflowError: out of range integral type conversion attempted ``` Also, if I use facebook/m2m100_418M there are no errors, but the result of decoding looks like (though I use `skip_special_tokens=True`) this: ``` labellabellabellabellabel<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk> ``` ### Expected behavior Hello! I have expected no errors and that skip_special_tokens would work normally. Seems like the label padding in DataCollatorForSeq2Seq with using -100 is leading to error. I think these issue are related to this: #22634 [this](https://github.com/huggingface/transformers/issues/3853#issuecomment-770417239)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24433/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24433/timeline
completed
null
null
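A minimal sketch of the workaround discussed in the issue above (#24433): before decoding labels produced by `DataCollatorForSeq2Seq`, replace the `-100` loss-ignore padding with the tokenizer's pad token id. The checkpoint name and the toy label batch below are illustrative only, not part of the original report.

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

# Toy labels as the data collator would pad them: -100 marks positions ignored
# by the loss, but the tokenizer cannot decode that id.
labels = np.array([[0, 22098, 22098, 2, -100, -100]])

# Swap -100 back to the pad token id before decoding.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
print(tokenizer.batch_decode(labels, skip_special_tokens=True))
```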
https://api.github.com/repos/huggingface/transformers/issues/24432
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24432/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24432/comments
https://api.github.com/repos/huggingface/transformers/issues/24432/events
https://github.com/huggingface/transformers/pull/24432
1,770,306,285
PR_kwDOCUB6oc5TsI8i
24,432
[GPT-2] Add docs
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stevhliu ", "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@stevhliu can we proceed by adding this, or would you like to have it on a separate page?", "I think it'll be best to put this in its own section on the [Text generation strategies](https://huggingface.co/docs/transformers/generation_strategies#text-generation-strategies) page :)", "cc @gante will close this PR in favor of adding it to the text generation strategies page.\r\n\r\nWould it be possible to add a section on batched generation?", "@NielsRogge absolutely 👍 " ]
1,687
1,691
1,691
CONTRIBUTOR
null
# What does this PR do? Lots of people don't seem to know about batched generation with GPT-2 and friends. Hence this PR adds a section to the docs, similar to the T5 docs. It also fixes an issue in the T5 docs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24432", "html_url": "https://github.com/huggingface/transformers/pull/24432", "diff_url": "https://github.com/huggingface/transformers/pull/24432.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24432.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24431
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24431/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24431/comments
https://api.github.com/repos/huggingface/transformers/issues/24431/events
https://github.com/huggingface/transformers/issues/24431
1,770,247,718
I_kwDOCUB6oc5pg9Ym
24,431
bug in trainer with accelerate prepare of GPT2LMHeadModel using fp16
{ "login": "StevenSong", "id": 26208374, "node_id": "MDQ6VXNlcjI2MjA4Mzc0", "avatar_url": "https://avatars.githubusercontent.com/u/26208374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StevenSong", "html_url": "https://github.com/StevenSong", "followers_url": "https://api.github.com/users/StevenSong/followers", "following_url": "https://api.github.com/users/StevenSong/following{/other_user}", "gists_url": "https://api.github.com/users/StevenSong/gists{/gist_id}", "starred_url": "https://api.github.com/users/StevenSong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenSong/subscriptions", "organizations_url": "https://api.github.com/users/StevenSong/orgs", "repos_url": "https://api.github.com/users/StevenSong/repos", "events_url": "https://api.github.com/users/StevenSong/events{/privacy}", "received_events_url": "https://api.github.com/users/StevenSong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 and @muellerzr ", "Having the same issue with QLoRA style PEFT (also when setting FP16 to off).", "Hello @StevenSong, \r\n\r\nThank you for the minimal reproducible example. The culprit in the above example is `device_map=\"auto\",`. The below code runs fine:\r\n\r\n```diff\r\nimport os\r\nimport sys\r\nimport numpy as np\r\nfrom itertools import chain\r\n\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import (\r\n GPT2TokenizerFast,\r\n GPT2LMHeadModel,\r\n DataCollatorForLanguageModeling,\r\n Trainer,\r\n TrainingArguments,\r\n set_seed,\r\n)\r\n\r\nseed = 42\r\ntorch.manual_seed(seed)\r\nset_seed(seed)\r\nnp.random.seed(seed)\r\n\r\ntok = GPT2TokenizerFast.from_pretrained(\"gpt2\")\r\ntok.pad_token = tok.eos_token\r\ntok.pad_token_id = tok.eos_token_id\r\n\r\ntest_size = 0.1\r\n_chunk_size = 256\r\ntext_col = \"text\"\r\n\r\nnum_workers = min(os.cpu_count(), 2)\r\n\r\nmax_seq_length = min(_chunk_size, tok.model_max_length)\r\n\r\nds = load_dataset(\"wikitext\", \"wikitext-2-v1\")\r\n\r\ntokenized_ds = ds.map(\r\n lambda x: tok(x[\"text\"], padding=True, pad_to_multiple_of=max_seq_length),\r\n remove_columns=[text_col],\r\n batched=True,\r\n num_proc=num_workers,\r\n)\r\n\r\ndef chunk_text(examples, max_seq_length):\r\n concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n tot_len = len(concatenated[list(examples.keys())[0]])\r\n if tot_len >= max_seq_length:\r\n tot_len = (\r\n tot_len // max_seq_length\r\n ) * max_seq_length\r\n result = {\r\n k: [t[i : i + max_seq_length] for i in range(0, tot_len, max_seq_length)]\r\n for k, t in concatenated.items()\r\n }\r\n return result\r\n\r\nchunked_ds = tokenized_ds.map(\r\n lambda x: chunk_text(x, max_seq_length), batched=True, num_proc=num_workers\r\n)\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained(\r\n \"gpt2\",\r\n- device_map=\"auto\",\r\n)\r\n\r\ndata_collator = DataCollatorForLanguageModeling(tok, mlm=False)\r\n\r\nargs = TrainingArguments(\r\n output_dir=\"delete-me\",\r\n per_device_train_batch_size=6,\r\n logging_steps=500,\r\n gradient_accumulation_steps=1,\r\n gradient_checkpointing=False,\r\n num_train_epochs=1,\r\n weight_decay=0.1,\r\n warmup_steps=50,\r\n lr_scheduler_type=\"cosine\",\r\n learning_rate=5e-6,\r\n save_steps=10_000,\r\n fp16=True, # fp16 bug with GPT2 models in huggingface?\r\n dataloader_pin_memory=True,\r\n dataloader_num_workers=2,\r\n optim=\"adafactor\",\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n tokenizer=tok,\r\n args=args,\r\n data_collator=data_collator,\r\n train_dataset=chunked_ds[\"train\"],\r\n)\r\n\r\ntrainer.train()\r\n\r\ntrainer.save_model(\"temp\")\r\n\r\n```\r\n\r\nHello @sgugger, seems like `device_map` changes the `model.forward` to `function` rather than preserving it as a `method`.", "Hello @imarquart, please open a new issue on PEFT wrt issue you are facing with a minimal reproducible example.", "The above PR should fix this", "Thank you!" ]
1,687
1,687
1,687
NONE
null
### System Info ``` - `transformers` version: 4.30.2 - Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes, model parallelism ``` ### Who can help? @sgugger ~@ pacma~ oops ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import os import sys import numpy as np from itertools import chain import torch from datasets import load_dataset from transformers import ( GPT2TokenizerFast, GPT2LMHeadModel, DataCollatorForLanguageModeling, Trainer, TrainingArguments, set_seed, ) seed = 42 torch.manual_seed(seed) set_seed(seed) np.random.seed(seed) tok = GPT2TokenizerFast.from_pretrained("gpt2") tok.pad_token = tok.eos_token tok.pad_token_id = tok.eos_token_id test_size = 0.1 _chunk_size = 256 text_col = "text" num_workers = min(os.cpu_count(), 2) max_seq_length = min(_chunk_size, tok.model_max_length) ds = load_dataset("wikitext", "wikitext-2-v1") tokenized_ds = ds.map( lambda x: tok(x["text"], padding=True, pad_to_multiple_of=max_seq_length), remove_columns=[text_col], batched=True, num_proc=num_workers, ) def chunk_text(examples, max_seq_length): concatenated = {k: list(chain(*examples[k])) for k in examples.keys()} tot_len = len(concatenated[list(examples.keys())[0]]) if tot_len >= max_seq_length: tot_len = ( tot_len // max_seq_length ) * max_seq_length result = { k: [t[i : i + max_seq_length] for i in range(0, tot_len, max_seq_length)] for k, t in concatenated.items() } return result chunked_ds = tokenized_ds.map( lambda x: chunk_text(x, max_seq_length), batched=True, num_proc=num_workers ) model = GPT2LMHeadModel.from_pretrained( "gpt2", device_map="auto", ) data_collator = DataCollatorForLanguageModeling(tok, mlm=False) args = TrainingArguments( output_dir="delete-me", per_device_train_batch_size=6, logging_steps=500, gradient_accumulation_steps=1, gradient_checkpointing=False, num_train_epochs=1, weight_decay=0.1, warmup_steps=50, lr_scheduler_type="cosine", learning_rate=5e-6, save_steps=10_000, fp16=True, # fp16 bug with GPT2 models in huggingface? dataloader_pin_memory=True, dataloader_num_workers=2, optim="adafactor", ) trainer = Trainer( model=model, tokenizer=tok, args=args, data_collator=data_collator, train_dataset=chunked_ds["train"], ) trainer.train() trainer.save_model("temp") ``` ### Expected behavior Seems like there were some changes to trainer between v4.29.2 and v4.30.0 to utilize accelerate to prepare the model ([here's the git blame](https://github.com/huggingface/transformers/blame/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L1751-L1752)). 
With a GPT2LMHeadModel using fp16 precision for training, these changes to trainer lead to the following error from the above script: ``` Traceback (most recent call last): File "[...]/min-reproducible.py", line 93, in <module> trainer.train() File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/trainer.py", line 1756, in _inner_training_loop model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1182, in prepare result = tuple( ^^^^^^ File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1183, in <genexpr> self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1022, in _prepare_one return self.prepare_model(obj, device_placement=device_placement) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "[...]/miniconda3/envs/llm/lib/python3.11/site-packages/accelerate/accelerator.py", line 1308, in prepare_model model.forward = MethodType(torch.cuda.amp.autocast(dtype=torch.float16)(model.forward.__func__), model) ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'function' object has no attribute '__func__'. Did you mean: '__doc__'? ``` Seems like the `model.forward` object is a `function` rather than a `method` so `__func__` isn't defined. `model` is an instance of GPT2LMHeadModel so I would've expected `model.forward` to be a method on the instance but maybe it's modified somewhere else. ~Overall, I'm not sure if this is a bug of trainer or accelerate or the model.~ Seems like actually this might be an issue on `accelerate` as the folks in the linked issue below are running into it when manually preparing the model (as opposed to letting trainer prepare as I did) - I can reopen this issue in the `accelerate` repo if that's better? Interestingly, if not using fp16, it runs fine. Ideally, I'd be able to use fp16 with a GPT2LMHeadModel using the trainer. Seems like someone else has also run into this issue using a LLaMA model: https://github.com/OpenAccess-AI-Collective/axolotl/issues/195#issuecomment-1589657199 Would appreciate any help/fix!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24431/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24431/timeline
completed
null
null
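A minimal sketch of the resolution reached above for #24431: on transformers 4.30.x, drop `device_map="auto"` when the model is handed to `Trainer` with `fp16=True` and let the trainer place the model itself. The dataset, collator, and remaining arguments are omitted here and would be the same as in the report.

```python
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments

# Loading without device_map="auto" keeps model.forward a bound method, so
# accelerate's fp16 autocast wrapping inside Trainer does not crash.
model = GPT2LMHeadModel.from_pretrained("gpt2")

args = TrainingArguments(output_dir="delete-me", fp16=True, per_device_train_batch_size=6)
trainer = Trainer(model=model, args=args)  # plus tokenizer, data_collator, train_dataset
```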
https://api.github.com/repos/huggingface/transformers/issues/24430
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24430/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24430/comments
https://api.github.com/repos/huggingface/transformers/issues/24430/events
https://github.com/huggingface/transformers/pull/24430
1,770,118,206
PR_kwDOCUB6oc5Trf0C
24,430
Clarify batch size displayed when using DataParallel
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? As pointed out in #24345, the batch size displayed when using `DataParallel` is unclear; this PR fixes that. Fixes #24345
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24430/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24430", "html_url": "https://github.com/huggingface/transformers/pull/24430", "diff_url": "https://github.com/huggingface/transformers/pull/24430.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24430.patch", "merged_at": 1687459580000 }
https://api.github.com/repos/huggingface/transformers/issues/24429
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24429/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24429/comments
https://api.github.com/repos/huggingface/transformers/issues/24429/events
https://github.com/huggingface/transformers/pull/24429
1,770,025,126
PR_kwDOCUB6oc5TrLos
24,429
Add support for for loops in python interpreter
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? For loops are safe to execute in our restricted Python interpreter, so this PR adds support for them and adds `range` to the list of allowed base Python tools. Fixes #24362
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24429/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24429", "html_url": "https://github.com/huggingface/transformers/pull/24429", "diff_url": "https://github.com/huggingface/transformers/pull/24429.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24429.patch", "merged_at": 1687787895000 }
https://api.github.com/repos/huggingface/transformers/issues/24428
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24428/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24428/comments
https://api.github.com/repos/huggingface/transformers/issues/24428/events
https://github.com/huggingface/transformers/pull/24428
1,769,942,590
PR_kwDOCUB6oc5Tq5aZ
24,428
Fix some `TFWhisperModelIntegrationTests`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> LGTM- thanks for fixing!\r\n> \r\n> Regarding the PyTorch tests failing with the same inputs - how do they fail?\r\n\r\nIt gives different outputs ", "I am very sorry, **I was doing something stupid and made wrong claims**. The `generation_kwargs` is an argument to the `__init__` of `GenerationConfig` instead of the `generate` method. \r\n\r\n(Some of) The previous failing tests pass as the 2 problematic arguments to `generate` are not passed, but this is wrong logic.\r\n\r\n**I will look in more depth.**", "Well, I finally decided to overwrite `generate` for `TFWhisperForConditionalGeneartion`." ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Probably since the introduction of `GenerationConfig`, some TF Whisper integration tests fail with the error ```bash ValueError: The following `model_kwargs` are not used by the model: ['language', 'task'] (note: typos in the generate arguments will also show up in this list) ``` From my understanding, we should pass some arguments via `generation_kwargs`. Note that `WhisperForConditionalGeneration` has its custom `generate` but `TFWhisperForConditionalGeneration` doesn't. ⚠️ When I try to apply the same changes to PyTorch Whisper test methods, they fail because the output is different. We somehow have an inconsistency between PT and TF here. (Not sure if we should overwrite `generate` in `TFWhisperForConditionalGeneration`.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24428/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24428", "html_url": "https://github.com/huggingface/transformers/pull/24428", "diff_url": "https://github.com/huggingface/transformers/pull/24428.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24428.patch", "merged_at": 1687523270000 }
https://api.github.com/repos/huggingface/transformers/issues/24427
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24427/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24427/comments
https://api.github.com/repos/huggingface/transformers/issues/24427/events
https://github.com/huggingface/transformers/issues/24427
1,769,878,332
I_kwDOCUB6oc5pfjM8
24,427
Make it possible to customize generation config for Trainer's training loop evaluation
{ "login": "antonioalegria", "id": 49322, "node_id": "MDQ6VXNlcjQ5MzIy", "avatar_url": "https://avatars.githubusercontent.com/u/49322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antonioalegria", "html_url": "https://github.com/antonioalegria", "followers_url": "https://api.github.com/users/antonioalegria/followers", "following_url": "https://api.github.com/users/antonioalegria/following{/other_user}", "gists_url": "https://api.github.com/users/antonioalegria/gists{/gist_id}", "starred_url": "https://api.github.com/users/antonioalegria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antonioalegria/subscriptions", "organizations_url": "https://api.github.com/users/antonioalegria/orgs", "repos_url": "https://api.github.com/users/antonioalegria/repos", "events_url": "https://api.github.com/users/antonioalegria/events{/privacy}", "received_events_url": "https://api.github.com/users/antonioalegria/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can customize the model `generation_config` as you want and it will be the one being used. cc @gante to make sure I'm not saying something untrue.", "Maybe it's because I was setting the Generation Config in the model without putting _from_model_config = False and it was triggering the following, which resets the config:\r\n\r\n```python\r\nif generation_config is None:\r\n # legacy: users may modify the model configuration to control generation -- update the generation config\r\n # model attribute accordingly, if it was created from the model config\r\n if self.generation_config._from_model_config:\r\n new_generation_config = GenerationConfig.from_model_config(self.config)\r\n if new_generation_config != self.generation_config:\r\n warnings.warn(\r\n \"You have modified the pretrained model configuration to control generation. This is a\"\r\n \" deprecated strategy to control generation and will be removed soon, in a future version.\"\r\n \" Please use a generation configuration file (see\"\r\n \" https://huggingface.co/docs/transformers/main_classes/text_generation)\"\r\n )\r\n self.generation_config = new_generation_config\r\n generation_config = self.generation_config\r\n```", "Yeah, that was it. Though it is a bit confusing because that message says it's a deprecated strategy but doesn't say it will revert the configs.\r\n\r\nSo, maybe docs should be improved to say when Generation Config is used where, with what priority between the default one, the model one, the gen_kwargs, etc...\r\n\r\nFor example, in Seq2SeqTrainer.evaluate and predict there is this code which doesn't take into consideration if there is a generation config set in the model.\r\n\r\n```python\r\ngen_kwargs = gen_kwargs.copy()\r\nif gen_kwargs.get(\"max_length\") is None and gen_kwargs.get(\"max_new_tokens\") is None:\r\n gen_kwargs[\"max_length\"] = self.args.generation_max_length\r\ngen_kwargs[\"num_beams\"] = (\r\n gen_kwargs[\"num_beams\"] if gen_kwargs.get(\"num_beams\") is not None else self.args.generation_num_beams\r\n)\r\n```\r\n\r\nFurthermore, in Seq2SeqTrainer.prediction_step there is this code that, again, doesn't take into account the model generation config and goes for the gen_kwargs (which aren't passed in the training loop).\r\n\r\n```python\r\ngen_kwargs = self._gen_kwargs.copy()\r\nif gen_kwargs.get(\"max_length\") is None and gen_kwargs.get(\"max_new_tokens\") is None:\r\n gen_kwargs[\"max_length\"] = self.model.config.max_length\r\ngen_kwargs[\"num_beams\"] = (\r\n gen_kwargs[\"num_beams\"] if gen_kwargs.get(\"num_beams\") is not None else self.model.config.num_beams\r\n)\r\ndefault_synced_gpus = True if is_deepspeed_zero3_enabled() else False\r\ngen_kwargs[\"synced_gpus\"] = (\r\n gen_kwargs[\"synced_gpus\"] if gen_kwargs.get(\"synced_gpus\") is not None else default_synced_gpus\r\n)\r\n```\r\n\r\nIf you do have a Generation Config set in the model or passed as generation_config parameter in evaluate/predict, where you set `max_new_tokens` this will yield a warning: \"Both `max_new_tokens` and `max_length` seem to have been set. 
It was the code above that set max_length because it didn't see the passed GenerationConfig.\r\n\r\nSo it seems if I set a GenerationConfig in the model, with max_new_tokens, then I will always get this warning because the training loop doesn't pass anything directly to evaluate/predict.\r\n\r\nLet me know if I should close this.", "Also, the shape[1] of outputs.predictions coming out of Seq2SeqTrainer.predict is 20 and it does not respect max_new_tokens passed in the generation config.", "Hi @antonioalegria 👋 \r\n\r\nWe do support parameterizing `Seq2SeqTrainer` with a `GenerationConfig` object. You seem to be hitting an issue due to `_from_model_config` being `True`, which simply means that you've issued a sequence of commands that we did not account for :)\r\n\r\nMost of the issues you described are intentional temporary workarounds -- when a new feature is introduced, we have to ensure our library goes through a deprecation cycle, in which the old way of doing things take precedence. That's why you see strange patterns like the use of `_from_model_config` or \"duplicated\" pieces of code to control the same thing. Due to the large number of possibilities within `transformers`, sometimes we simply miss a few combinations.\r\n\r\nThat being said, let's start with the basics (which you haven't done 😉 ): what version of transformers are you using, and how can I reproduce your issue?", "Hi, I have removed the `_from_model_config` from the duplicated and altered config and the first issue no longer happens.\r\n\r\nI still get those \"Both max_new_tokens and max_length seem to have been set.\" warnings though.\r\n\r\ntransformers: 4.30.2\r\n\r\nTo reproduce use this code you just need to call `Seq2SeqTrainer.evaluate` with `generation_config=your_gen_config`, or set the generation_config in the model. In any case, you have to set max_new_tokens.\r\n\r\nYou will then see those warnings, which shouldn't happen.\r\n\r\nLet me know if you'd like me to provide a running script.\r\n\r\n", "So what is the correct way to parametrize the generation (e.g. to use contrastive search) during the model training? [The documentation](https://huggingface.co/docs/transformers/main_classes/text_generation) misses this point.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@gante @sgugger sorry to bother you, but it seems like the issue with GenerationConfig has not been resolved. 
Please, take a look at this code:\r\n```\r\nmy_config = GenerationConfig.from_pretrained(CHECKPOINT_DIR)\r\nmy_config.forced_bos_token_id = tokenizer.encode(\"[\")\r\nmy_config.forced_eos_token_id = tokenizer.encode(\"]\")\r\nmy_config.max_new_tokens = 1000\r\nmy_config.max_length = 1000\r\n\r\nargs = Seq2SeqTrainingArguments(\r\n ...\r\n predict_with_generate=True,\r\n generation_config=my_config,\r\n)\r\n\r\ntrainer = Seq2SeqTrainer(\r\n model_init=model_init,\r\n args=args,\r\n train_dataset=tokenized_dataset[\"train\"],\r\n eval_dataset=tokenized_dataset[\"test\"],\r\n data_collator=data_collator,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics\r\n)\r\n```\r\n\r\nI check the config before starting the training, and everything looks fine:\r\n```\r\n>>> trainer.model.generation_config\r\nGenerationConfig {\r\n \"decoder_start_token_id\": 0,\r\n \"eos_token_id\": 2,\r\n \"forced_bos_token_id\": [\r\n 63\r\n ],\r\n \"forced_eos_token_id\": [\r\n 65\r\n ],\r\n \"max_length\": 1000,\r\n \"max_new_tokens\": 1000,\r\n \"pad_token_id\": 0,\r\n \"transformers_version\": \"4.32.1\"\r\n}\r\n```\r\nThen during the evaluation phase of the training I print the generated sentences and notice that the model doesn't follow my generation config. The forced start and end tokens are ignored and the number of generated tokens is 20 for any sample. I then stop the training and run the previous command again, and to my surprise see the following:\r\n```\r\n>>> trainer.model.generation_config\r\nGenerationConfig {\r\n \"decoder_start_token_id\": 0,\r\n \"eos_token_id\": 2,\r\n \"pad_token_id\": 0,\r\n \"transformers_version\": \"4.32.1\"\r\n}\r\n``` \r\nWhy has the config been reset and how can I avoid it? It seems like a bug to me.\r\n\r\n@antonioalegria FYI", "Upd: the source of the problem was my `model_init()` function, which was being called at the start of the training process. This function returns a new instance of model with default generation config, and so the custom one just gets lost.\r\n\r\nI managed to achieve the desired behavior by modifying my model_init() function like that:\r\n```\r\ndef model_init():\r\n model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\n model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=16)\r\n model.generation_config = my_config\r\n return model\r\n```\r\n\r\nHowever, now at each evaluation I get this annoying warning:\r\n```\r\nBoth `max_new_tokens` (=1000) and `max_length`(=20) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information.\r\n```\r\nAt this point I'm not sure where does the `max_length = 20` come from, but luckily it doesn't matter, so for now I just silenced it:\r\n```\r\nlogging.getLogger(\"transformers.generation.utils\").setLevel(logging.ERROR) \r\n```\r\n\r\nIt seems like it would be more convenient if the explicitly passed GenerationConfig would be automatically applied to the model returned by the `model_init()` function, otherwise it's kind of pointless to use both these arguments together. What do you think?", "Hey @nick-maykr 👋 Thank you for raising the issue!\r\n\r\nI've just opened a PR that tackles this issue (fine-tuning from older models often ignoring `generation_config` changes), which should no longer happen after it gets merged 👉 https://github.com/huggingface/transformers/pull/25962\r\n\r\nAs for the `max_length` warning: the seq2seq trainer was doing something dodgy regarding `max_length`, fixing it. 
EDIT 👉 https://github.com/huggingface/transformers/pull/25987\r\n\r\nLet me know if you come across further issues. As I've written above, we have changed how we control generation late 2022, and maintaining retrocompatibility on top of the new features is challenging :)" ]
1,687
1,693
1,692
NONE
null
### Feature request When using `predict_with_generate` and we want to compute generation-based metrics during the eval happening during training, it would be good if the model's generation config were used and/or if we could pass the intended generation config into the train method so that it can be passed to evaluate. As it is, the generation is done using the default parameters only. ### Motivation The current way GenerationConfigs are used is pretty inconsistent and muddled IMO. You can set it at the model level but it's only used sometimes. You can pass it directly to evaluate, predict or generate but it's not clear if you should pass it as kwargs or as a full GenerationConfig. It would be great to clean this up so that it's super clear how to use it and there is one consistent way to use it, as in Python. My suggestion would be to set it at the Trainer level and be able to override it in the evaluate, predict, generate methods with a simple generation_config: GenerationConfig parameter. ### Your contribution Happy to discuss different possibilities and see where I could help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24427/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24427/timeline
completed
null
null
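A minimal sketch of the pattern that emerged in the thread above (#24427): build an explicit `GenerationConfig` and pass it via `Seq2SeqTrainingArguments` (and, if a `model_init` function is used, set it on the freshly created model as well) so that evaluation with `predict_with_generate=True` uses it. The specific generation parameters are illustrative assumptions.

```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments

gen_config = GenerationConfig(max_new_tokens=64, num_beams=4)

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_config=gen_config,  # picked up by Seq2SeqTrainer.evaluate/predict
)

# If a model_init function is used, also set model.generation_config = gen_config
# on the returned model so the custom config is not lost when training starts.
```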
https://api.github.com/repos/huggingface/transformers/issues/24426
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24426/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24426/comments
https://api.github.com/repos/huggingface/transformers/issues/24426/events
https://github.com/huggingface/transformers/pull/24426
1,769,796,677
PR_kwDOCUB6oc5Tqbp5
24,426
TF CI fix for Segformer
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ah, I know, I just thought you might still be unconscious!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
MEMBER
null
This very small PR rewrites a couple of reshapes so the TF compiler can figure out the channels dim for Segformer even when some input dimensions are undefined. Should fix any CI issues the model has been having.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24426/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24426", "html_url": "https://github.com/huggingface/transformers/pull/24426", "diff_url": "https://github.com/huggingface/transformers/pull/24426.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24426.patch", "merged_at": 1687445354000 }
https://api.github.com/repos/huggingface/transformers/issues/24425
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24425/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24425/comments
https://api.github.com/repos/huggingface/transformers/issues/24425/events
https://github.com/huggingface/transformers/issues/24425
1,769,712,054
I_kwDOCUB6oc5pe6m2
24,425
LayoutXLM / LayoutLMv2: error when doing export to TorchScript
{ "login": "sudoandros", "id": 27918948, "node_id": "MDQ6VXNlcjI3OTE4OTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/27918948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sudoandros", "html_url": "https://github.com/sudoandros", "followers_url": "https://api.github.com/users/sudoandros/followers", "following_url": "https://api.github.com/users/sudoandros/following{/other_user}", "gists_url": "https://api.github.com/users/sudoandros/gists{/gist_id}", "starred_url": "https://api.github.com/users/sudoandros/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sudoandros/subscriptions", "organizations_url": "https://api.github.com/users/sudoandros/orgs", "repos_url": "https://api.github.com/users/sudoandros/repos", "events_url": "https://api.github.com/users/sudoandros/events{/privacy}", "received_events_url": "https://api.github.com/users/sudoandros/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[ { "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false } ]
[ "[Error message](https://github.com/huggingface/transformers/files/11834624/error.txt)\r\n\r\nIt's gigantic so attaching it as a file\r\n", "Hi @sudoandros , we investigated with @ArthurZucker and found out that the issue comes from this line: https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L591 . It calls `detectron2` external library, so there is not much we can do on our side. Feel free to open an issue in their repo.\r\n\r\nNote that the log `First diverging operator` from pytorch is wrong.\r\n\r\nNote: `torch.are_deterministic_algorithms_enabled()` does not help.", "Hello @fxmarty. Thank you for taking time and looking into it! As I see in the code, FPN model of detectron2 is the backbone here. So this is the model I should report to its authors about. Do I get it right? ", "@sudoandros The detectron2 config hints that `'META_ARCHITECTURE': 'GeneralizedRCNN'`. Maybe this is the bit responsible. https://github.com/facebookresearch/detectron2/issues/46 may be a good read" ]
1,687
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-4.15.0-212-generic-x86_64-with-glibc2.27 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @NielsRogge @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the following code: ```python import torch from transformers import AutoModelForTokenClassification DEVICE = "cuda" model = AutoModelForTokenClassification.from_pretrained( "microsoft/layoutxlm-base", torchscript=True ).to(DEVICE) input_ids = torch.tensor([[6, 7, 8, 9, 10]], device=DEVICE) bboxes = torch.tensor( [ [ (0.7605, 0.0071, 0.8343, 0.0215), (0.834388, 0.0071817, 0.855485, 0.0215431), (0.855485, 0.0071817, 0.866033, 0.0215431), (0.26377, 0.02427, 0.34743, 0.0421), (0.347483, 0.024237, 0.369816, 0.04219), ] ], device=DEVICE, ) bboxes = (bboxes * 1000).to(int) image = torch.randn((1, 3, 256, 256)).to(torch.uint8).to(DEVICE) attention_mask = torch.tensor([[1, 1, 1, 1, 1]], device=DEVICE) torch.jit.trace(model, [input_ids, bboxes, image, attention_mask] ``` ### Expected behavior I expect to get a traced model as a result of the last line of code. But instead I get huge error traceback saying that "graphs differ across invocations". I see that @NielsRogge already did some changes to the LayoutLMv2 code to fix model tracing. AFAIK LayoutXLM just uses LayoutLMv2 model code under the hood so I was expecting to get a traced model with no problem. But it looks like the focus of the change was on different errors (#15254). I haven't found any mentions about my problem anywhere here except for the #17476 where it helped to just disable trace checking. If I try to disable trace checking, the model works at the first inference but predictions start to deviate seriously after that. Every prediction on the same image is *very* different. So I guess it's not the solution in my case. Am I doing something wrong here or is this model not really compatible with PyTorch tracing functionality? I'm pretty carefully following the official guide about exporting the model to TorchScript.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24425/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24424
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24424/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24424/comments
https://api.github.com/repos/huggingface/transformers/issues/24424/events
https://github.com/huggingface/transformers/pull/24424
1,769,652,444
PR_kwDOCUB6oc5Tp7mr
24,424
Save `site-packages` as cache in CircleCI job
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", ">If we save .pyenv/versions as cache too:\r\n\r\n>loading this new extra cache takes ~ 2 min\r\npip install takes 20 ~ 30 seconds\r\n\r\n\r\n@ydshieh Wait - are the timings of these in the right order? ", "> Do we still get updates if there is a release of one of the libraries?\r\n\r\n@sgugger Yes if we use -U flag. I can do that if you are ok", "> > If we save .pyenv/versions as cache too:\r\n> \r\n> > loading this new extra cache takes ~ 2 min\r\n> > pip install takes 20 ~ 30 seconds\r\n> \r\n> @ydshieh Wait - are the timings of these in the right order?\r\n\r\nHmm, yes. But could you explain why you have doubts so I can reply in more details?", "> @sgugger Yes if we use -U flag. I can do that if you are ok\r\nYEs, please!", "> Hmm, yes. But could you explain why you have doubts so I can reply in more details?\r\n\r\n@ydshieh I just realised my mistake 🙃 I thought it was saying that it takes 2 mins to load with the cache and 30-40s to install by pip. Whereas it's (45 secs + 3-4 mins ) -> (2 mins + 20-30s). My bad! ", "> > Hmm, yes. But could you explain why you have doubts so I can reply in more details?\r\n> \r\n> @ydshieh I just realised my mistake 🙃 I thought it was saying that it takes 2 mins to load with the cache and 30-40s to install by pip. Whereas it's (45 secs + 3-4 mins ) -> (2 mins + 20-30s). My bad!\r\n\r\n\r\nThe new one should be (45 secs + 2 mins + 20-30s): The first part of cache (in `.cache/pip`) is not changed.\r\nBut we still have a little gain overall.\r\n\r\n", "Although already been approved - FYI: I just added -U everywhere" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? Currently, we save `~/.cache/pip` as cache. Take the `check_repository_consistency` job as an example: - it installs `[all, quality]` - loading the cache takes ~ 45 seconds - `pip install` takes ~ 3-4 minutes If we save `.pyenv/versions` as cache too: - loading this new extra cache takes ~ 2 min - `pip install` takes 20 ~ 30 seconds We gain 30 ~ 90 seconds (depending on CircleCI's state). Not a big absolute improvement, but for this job, whose total runtime is ~ `5m30s`, that is a > 20% reduction. As `check_repository_consistency` and `check_code_quality` run on every push of every PR, it's probably nice to have such a reduction. WDYT?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24424/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24424", "html_url": "https://github.com/huggingface/transformers/pull/24424", "diff_url": "https://github.com/huggingface/transformers/pull/24424.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24424.patch", "merged_at": 1687468595000 }
https://api.github.com/repos/huggingface/transformers/issues/24423
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24423/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24423/comments
https://api.github.com/repos/huggingface/transformers/issues/24423/events
https://github.com/huggingface/transformers/issues/24423
1,769,434,901
I_kwDOCUB6oc5pd28V
24,423
False truncation of generated sequences when calling Seq2SeqTrainer.predict with num_return_sequences>1
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's true that `num_return_sequences>1` is not supported by the `Seq2SeqTrainer`. If you have an idea of a fix, I'm happy to look at a PR!\r\n\r\nAs for setting generation parameters, the recommended way is to use the generation config (using the model config is deprecated).", "@sgugger Wow I find opening a pull request is quite complicated as there are so many tests. I pushed a simple fix in my forked repo [here](https://github.com/namespace-Pt/transformers/commit/fbcbda33522186a84a073a43fef864eecc0a29f2). Hope that helps :)", "@sgugger What is the recommended way of using `Seq2SeqTrainer` with `predict_with_generate=True` and `num_return_sequences>1` in distributed inference setup, for example with `trainer.predict()`?\r\n\r\nCurrently, I have the following solution.\r\n\r\nI am passing a custom function to `preprocess_logits_for_metrics`. Since, I predict with generate, I actually do not get logits but the generated tokens in this function as input. The input shape is `(number_of_samples*num_return_sequences, sequence_length).` which would get truncated. Therefore, I reshape the tensor to shape `(number_of_samples, seq_len * num_return_sequences)`. \r\n\r\nIt works but is there is a better way you would recommend?" ]
1,687
1,689
1,687
CONTRIBUTOR
null
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Define `compute_metrics` function to inspect predictions' shape ``` def compute_metrics(eval_preds): preds, labels = eval_preds print(preds.shape) ``` 2. Create training arguments `args = TrainingArguments(do_predict=True)` 3. Instantiate `trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer, compute_metrics=compute_metrics)` 4. Call predict method `trainer.predict(test_dataset, num_return_sequences=2, max_new_tokens=32, do_sample=True)` I expect the predictions to be of shape `8 * 2 * k` or `16 * k` where k is the generated sequence length. However, it is always `8 * k`. I find out the `generated_tokens` [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer_seq2seq.py#L276) is `16 * k` while it is truncated to `8 * k` [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L3330), which is incorrect to me. ### Expected behavior I think when `num_return_sequences > 1`, the output tensor (`generated_tokens`) should be of shape `2 * 8 * k` instead of `16 * k`. Maybe you can add a simple parameter (e.g. `batch_size_alone=True/False`) to determine how the output is aligned. By the way, I think currently there are too many ways to set generation configurations when prediction (kwargs, model configs, and default configs). Maybe they should be simplified.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24423/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24422
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24422/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24422/comments
https://api.github.com/repos/huggingface/transformers/issues/24422/events
https://github.com/huggingface/transformers/pull/24422
1,769,420,572
PR_kwDOCUB6oc5TpIf4
24,422
Update RayTune doc link for Hyperparameter tuning
{ "login": "JoshuaEPSamuel", "id": 66880119, "node_id": "MDQ6VXNlcjY2ODgwMTE5", "avatar_url": "https://avatars.githubusercontent.com/u/66880119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoshuaEPSamuel", "html_url": "https://github.com/JoshuaEPSamuel", "followers_url": "https://api.github.com/users/JoshuaEPSamuel/followers", "following_url": "https://api.github.com/users/JoshuaEPSamuel/following{/other_user}", "gists_url": "https://api.github.com/users/JoshuaEPSamuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoshuaEPSamuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoshuaEPSamuel/subscriptions", "organizations_url": "https://api.github.com/users/JoshuaEPSamuel/orgs", "repos_url": "https://api.github.com/users/JoshuaEPSamuel/repos", "events_url": "https://api.github.com/users/JoshuaEPSamuel/events{/privacy}", "received_events_url": "https://api.github.com/users/JoshuaEPSamuel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24422). All of your documentation changes will be reflected on that endpoint." ]
1,687
1,687
1,687
CONTRIBUTOR
null
Link to RayTune search space API docs was outdated - have provided correct new link for docs. # What does this PR do? Updates broken link to RayTune search space API docs for the Transformers Hyperparameter tuning function. <!-- Remove if not applicable --> Fixes #24135 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @richardliaw , @amogkam <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24422/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24422", "html_url": "https://github.com/huggingface/transformers/pull/24422", "diff_url": "https://github.com/huggingface/transformers/pull/24422.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24422.patch", "merged_at": 1687444682000 }
https://api.github.com/repos/huggingface/transformers/issues/24421
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24421/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24421/comments
https://api.github.com/repos/huggingface/transformers/issues/24421/events
https://github.com/huggingface/transformers/issues/24421
1,769,293,287
I_kwDOCUB6oc5pdUXn
24,421
GPT2 and -100 in input_ids
{ "login": "spyroot", "id": 11797329, "node_id": "MDQ6VXNlcjExNzk3MzI5", "avatar_url": "https://avatars.githubusercontent.com/u/11797329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spyroot", "html_url": "https://github.com/spyroot", "followers_url": "https://api.github.com/users/spyroot/followers", "following_url": "https://api.github.com/users/spyroot/following{/other_user}", "gists_url": "https://api.github.com/users/spyroot/gists{/gist_id}", "starred_url": "https://api.github.com/users/spyroot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spyroot/subscriptions", "organizations_url": "https://api.github.com/users/spyroot/orgs", "repos_url": "https://api.github.com/users/spyroot/repos", "events_url": "https://api.github.com/users/spyroot/events{/privacy}", "received_events_url": "https://api.github.com/users/spyroot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can put -100 in the `labels` so they are ignored by the loss, but in the inputs as it's not a valid index for the embedding matrix.\r\n\r\nAlso this kind of questions is better suite for the [forums](https://discuss.huggingface.co/) as we keep GitHub issues for bugs and feature requests only.", "Thank you very much for confirming, strange ... I have an old code that did work before I think it was masking -100. anyway looks like im a bit off here. : ) thank you! " ]
1,687
1,687
1,687
NONE
null
### System Info I post all the details; maybe it is expected behavior. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Hi folks, I remember this used to work, but I may be missing something. I noticed that if I set a value in `input_ids` to -100 (i.e., the standard ignore value), GPT-2 crashes: the value is essentially used as an embedding index where it should be ignored. Has anything changed in the GPT-2 model recently in the codebase? I'm not 100% sure whether this is a bug or not. The following led to the crash: ``` attention_mask = batch['attention_mask'] mask_inverted = ~attention_mask.bool() input_ids = batch["input_ids"] input_ids = input_ids.masked_fill(mask_inverted == 1, -100).contiguous() ``` Thank you. ### Expected behavior Values in `input_ids` of the form `[id_n, id_n+1, etc., eos_id, -100]` should be accepted without crashing.
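Editor's note: a minimal sketch of the pattern the maintainer recommends in the reply above, keeping valid token ids in `input_ids` and putting -100 only in the `labels` so the loss ignores padding. The checkpoint name and the two example sentences are illustrative assumptions, not taken from the original report.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

batch = tokenizer(["hello world", "a longer example sentence"], padding=True, return_tensors="pt")

# input_ids keep valid token ids everywhere (padding stays eos_id, never -100)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # the loss ignores these positions

outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], labels=labels)
print(outputs.loss)
```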
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24421/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24420
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24420/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24420/comments
https://api.github.com/repos/huggingface/transformers/issues/24420/events
https://github.com/huggingface/transformers/pull/24420
1,769,229,802
PR_kwDOCUB6oc5Toeq2
24,420
Revert "Fix gradient checkpointing + fp16 autocast for most models"
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'd be pro reverting until we manage to resolve or find another solution. \r\n\r\ncc @ydshieh too as the testing master :) Looking at our daily CI, it seems this hasn't affected our \"normal\" models - is this right? Are there any tests we should be running to verify this? ", "@amyeroberts \r\n\r\nThat PR #24247 is merged yesterday. The daily CI is triggered this morning and not finished yet. So we don't know what that PR brings.", "There is also a push CI (only non-slow tests and not a complete CI).\r\n\r\nFrom the screenshot, it does look good though.\r\n\r\n<img width=\"490\" alt=\"Screenshot 2023-06-22 114431\" src=\"https://github.com/huggingface/transformers/assets/2521628/7db602ba-3b9b-43aa-a822-08167d1e6a6b\">\r\n", "Ok I just did some benchmarks by observing the peak memory usage of different training setups and it seems to affect most of the models regardless of the modality:\r\n\r\n| Model | Quantization method | Use Rentrant == `False` (i.e. #24247 included) | Peak memory usage |\r\n| -------- | ------- | ------- | ------- |\r\n| `openai/whisper-large` | 8bit | Yes | OOM |\r\n| `openai/whisper-large` | 8bit | No | 7.5GB |\r\n| `openai/whisper-large` | 4bit | No | 5.1GB |\r\n| `openai/whisper-large` | 4bit | Yes | 14.5GB |\r\n| `facebook/opt-6.7b` | 8bit | Yes | 14.1GB |\r\n| `facebook/opt-6.7b` | 8bit | no | 9.8GB |\r\n| `facebook/opt-1.3b` | 16bit | Yes | 12.1GB |\r\n| `facebook/opt-1.3b` | 16bit | no | 12.1GB |\r\n| `google/flan-t5-large` | 16bit | Yes | 12.7GB |\r\n| `google/flan-t5-large` | 16bit | no | 12.7GB |\r\n| `facebook/opt-1.3b` | 8bit | Yes | 5.1GB |\r\n| `facebook/opt-1.3b` | 8bit | no | 4.1GB |\r\n\r\n\r\nNote that before #24420 the last PEFT layer had always None grad, therefore got never updated. But the surprising thing is that the last layer shouldn't cause 2x memory increase, it should cause in the worst case x(1 + 1/num_layers) increase\r\n\r\nI will investigate further and keep updates here ", "@younesbelkada Thanks for investigating and sharing! Could you also add a model with no quantization for reference in the table? ", "Sure yes! Will update the table soon ", "From the updated observations above\r\n\r\n1- it seems to affect the quantized models only\r\n2- Larger models gets more affected", "we can merge this PR and revert the change as it is leading to **huge** increase in VRAM usage for quantized models. The below minimal example doesn't lead to final layer having `None` grads.\r\n\r\nPlease note the way Accelerate does the Mixed Precision handling which is now used in Trainer too. 
I don't know why this works and why using autocast as a context manager fails (results in `None` grads for the final layer).\r\n\r\n```diff\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nfrom types import MethodType\r\nfrom accelerate.utils import convert_outputs_to_fp32\r\nmodel_id = \"facebook/opt-350m\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id).to(0)\r\n\r\nmodel.gradient_checkpointing_enable()\r\nmodel.train()\r\n\r\n+ model.forward = MethodType(torch.cuda.amp.autocast(dtype=torch.bfloat16)(model.forward.__func__), model)\r\n+ model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)\r\n\r\nassert model.training and model.is_gradient_checkpointing\r\n\r\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-7)\r\n- with torch.cuda.amp.autocast(True, dtype=torch.float16):\r\ndummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0)\r\nmodel.train()\r\nlogits = model(dummy_input).logits\r\nloss = logits.mean()\r\n\r\nloss.backward()\r\noptimizer.step()\r\n\r\nfor n, param in model.named_parameters():\r\n if param.grad is None:\r\n print(n)\r\n``` ", "Perfect, let's revert the PR then \r\nI can also confirm I don't have any None-grad for LoRA layers using llama (as posted in the original issue). I believe the recent accelerate integration silently fixed the bug and the user was using an older version of transformers\r\n\r\ncc @amyeroberts @sgugger this is ready for review", "Thanks very much for the support and quick feedback! @amyeroberts and big kudos to @pacman100 as well! " ]
1,687
1,698
1,687
CONTRIBUTOR
null
Reverts huggingface/transformers#24247. The investigation initially started with the failing test in https://github.com/huggingface/peft/actions/runs/5340918925/jobs/9686171926 - a training setup that was taking 7GB now takes 15GB and OOMs. I looked back at each commit and can confirm this commit caused it. Instead of patching the initial issue on our side, I propose for now to revert the PR and just wait for the fix on the PyTorch side, as doubling the memory requirements is a lot for PEFT users. I can confirm the training doesn't OOM before commit 285a48011da3145ae77c5b22bcfbe77d367e5173, hence this PR that reverts it. cc @sgugger @pacman100 @amyeroberts Putting it as a draft as I need to dig a bit deeper before making sure this is the right thing to do
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24420/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24420", "html_url": "https://github.com/huggingface/transformers/pull/24420", "diff_url": "https://github.com/huggingface/transformers/pull/24420.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24420.patch", "merged_at": 1687443087000 }
https://api.github.com/repos/huggingface/transformers/issues/24419
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24419/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24419/comments
https://api.github.com/repos/huggingface/transformers/issues/24419/events
https://github.com/huggingface/transformers/pull/24419
1,769,191,046
PR_kwDOCUB6oc5ToWU6
24,419
Fix `save_cache` version in `config.yml`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,687
1,687
1,687
COLLABORATOR
null
# What does this PR do? In #22204, I changed the `restore_cache` to `v0.6` but forgot to change `save_cache`. As a consequence, no cache is saved/loaded, and the 2 jobs spend 5 minutes installing things: <img width="716" alt="Screenshot 2023-06-22 101905" src="https://github.com/huggingface/transformers/assets/2521628/c796d958-ef9c-4066-bb46-3009be7a8fcc"> This PR fixes this and saves the money/credits we spend on CircleCI. Please don't punish me 😭
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24419/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24419", "html_url": "https://github.com/huggingface/transformers/pull/24419", "diff_url": "https://github.com/huggingface/transformers/pull/24419.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24419.patch", "merged_at": 1687443496000 }