Dataset schema (fields appear in this order within each record below; βŒ€ marks a nullable column):

| column | dtype | stats |
| --- | --- | --- |
| url | string | lengths 62–66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k, βŒ€ |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0–234k, βŒ€ |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/22893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22893/comments
https://api.github.com/repos/huggingface/transformers/issues/22893/events
https://github.com/huggingface/transformers/pull/22893
1,676,927,221
PR_kwDOCUB6oc5Ox1s6
22,893
Update Swin MIM output class
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,682
1,682
1,682
CONTRIBUTOR
null
# What does this PR do?

- Replaces the incorrectly named `logits` output of the `SwinMaskedImageModelingOutput` and `SwinV2MaskedImageModelingOutput` classes with a `reconstruction` attribute
- Sets `logits` as a property for backward compatibility and adds a deprecation warning

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
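A minimal sketch of the deprecation pattern the PR describes. This is illustrative only: the real `SwinMaskedImageModelingOutput` in transformers derives from `ModelOutput` and carries additional fields.

```python
import warnings
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class SwinMaskedImageModelingOutput:
    # Renamed output: the reconstructed pixel values.
    reconstruction: Optional[torch.FloatTensor] = None

    @property
    def logits(self):
        # Deprecated alias kept for backward compatibility.
        warnings.warn(
            "logits attribute is deprecated and will be removed in a future version, "
            "use the reconstruction attribute to retrieve the final output instead.",
            FutureWarning,
        )
        return self.reconstruction
```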
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22893/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22893", "html_url": "https://github.com/huggingface/transformers/pull/22893", "diff_url": "https://github.com/huggingface/transformers/pull/22893.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22893.patch", "merged_at": 1682084313000 }
https://api.github.com/repos/huggingface/transformers/issues/22892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22892/comments
https://api.github.com/repos/huggingface/transformers/issues/22892/events
https://github.com/huggingface/transformers/pull/22892
1,676,902,242
PR_kwDOCUB6oc5OxwLH
22,892
Hardcode GELU as the intermediate activation for ESM
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Just to understand a bit better, does this mean that the nucleotide model has a different activation set in its config used for other layers? ", "Actually, no! It also always expects gelu, which matches the original ESM (both the port to HF and the original repo at Meta). The issue here is that the TF version is reading `config.hidden_act`, but that isn't even set by default - the bug slipped in because whatever way we constructed the original ESM checkpoints, that value was always set in the configs, so the issue was silent until we tried to make new configs for nucleotide transformer and they suddenly broke in TF.", "@Rocketknight1 Would a possible solution to this be to update the ESM configuration to have `hidden_act` as `\"gelu\"` by default? If I've understood correctly, the original ESM model configs have the `hidden_act` attribute. In which case, as a user, if I updated this I'd expect it to be propagated when constructing a model from the config. ", "@amyeroberts I'm not sure that's the best course here! `hidden_act` is actually a parameter on the base config that `EsmConfig` inherits from. As such, it's not included in the documentation for `EsmConfig` at all. The attribute just happened to be set (I think by the ESM team) when they created the ESM checkpoints, which masked the bug. I think the right solution is to just not read the attribute at all in the code for either framework.\r\n\r\nAlso, I spotted one minor issue with the weight tying fix I made for ESM, and I'm sneaking a fix for it into this PR. (decoder should be a layer when it's untied to make sure weight crossloading works properly, not a bare weight matrix).", "No probs, I think they helped!" ]
1,682
1,682
1,682
MEMBER
null
One more issue revealed by the nucleotide transformer port! This time it's the activation function - ESM uses a hardcoded GELU, which the PyTorch port gets right, but the TF port used an intermediate block copied from BERT which reads `config.hidden_act`. This value was set to `gelu` for all of the original ESM checkpoints, so the bug was silent until we tried making some new checkpoints from scratch. This PR replaces `config.hidden_act` with a hardcoded `gelu`. All ESM tests (including slow / cross-tests) pass locally.
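A sketch of the described fix, under the assumption that the TF intermediate block mirrors BERT's structure; the class name and layer layout here are illustrative, not the exact `modeling_tf_esm.py` code.

```python
import tensorflow as tf
from transformers.activations_tf import get_tf_activation


class TFEsmIntermediate(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.intermediate_size, name="dense")
        # Hardcode GELU instead of reading config.hidden_act, which new
        # configs (e.g. nucleotide transformer) may not set at all.
        self.intermediate_act_fn = get_tf_activation("gelu")

    def call(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        return self.intermediate_act_fn(hidden_states)
```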
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22892/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22892", "html_url": "https://github.com/huggingface/transformers/pull/22892", "diff_url": "https://github.com/huggingface/transformers/pull/22892.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22892.patch", "merged_at": 1682089810000 }
https://api.github.com/repos/huggingface/transformers/issues/22891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22891/comments
https://api.github.com/repos/huggingface/transformers/issues/22891/events
https://github.com/huggingface/transformers/pull/22891
1,676,575,486
PR_kwDOCUB6oc5Owp0k
22,891
[`SAM`] Change to `facebook/sam-vit-base`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do?

Changes the checkpoint name to `sam-vit-base` instead of `sam-vit-big`, which was slightly confusing for users. The checkpoints are now sorted by size:

- `sam-vit-base`: 350MB
- `sam-vit-large`: 1GB
- `sam-vit-huge`: 2GB

which makes more sense. The repos on the Hub have been updated accordingly.

cc @amyeroberts @ArthurZucker @sgugger
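For reference, a quick load of the renamed checkpoint (a sketch; the `facebook/` organization prefix is assumed from the Hub):

```python
from transformers import SamModel, SamProcessor

# sam-vit-base is the smallest of the three checkpoints listed above (~350MB).
model = SamModel.from_pretrained("facebook/sam-vit-base")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
```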
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22891/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22891", "html_url": "https://github.com/huggingface/transformers/pull/22891", "diff_url": "https://github.com/huggingface/transformers/pull/22891.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22891.patch", "merged_at": 1681992678000 }
https://api.github.com/repos/huggingface/transformers/issues/22890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22890/comments
https://api.github.com/repos/huggingface/transformers/issues/22890/events
https://github.com/huggingface/transformers/issues/22890
1,676,543,856
I_kwDOCUB6oc5j7gdw
22,890
`prefix_allowed_tokens_fn` does not constrain when all allowed tokens have scores of `-inf`
{ "login": "ksh108405", "id": 50015864, "node_id": "MDQ6VXNlcjUwMDE1ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/50015864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksh108405", "html_url": "https://github.com/ksh108405", "followers_url": "https://api.github.com/users/ksh108405/followers", "following_url": "https://api.github.com/users/ksh108405/following{/other_user}", "gists_url": "https://api.github.com/users/ksh108405/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksh108405/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksh108405/subscriptions", "organizations_url": "https://api.github.com/users/ksh108405/orgs", "repos_url": "https://api.github.com/users/ksh108405/repos", "events_url": "https://api.github.com/users/ksh108405/events{/privacy}", "received_events_url": "https://api.github.com/users/ksh108405/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "Hey @ksh108405 πŸ‘‹ \r\n\r\nConstrained generation has several issues at the moment, and I'm out of bandwidth. I'm adding this to the list of things keep an eye on when revisiting constrained generation :)", "Hello @gante , I'm happy to work on this issue. Do you approve?\r\n\r\n(I have some experience with constrained decoding previously)\r\n\r\n", "Hey @Saibo-creator πŸ‘‹ \r\n\r\nOf course, we always welcome contributions to fix issues πŸ’› Thank you for offering help!" ]
1,681
1,697
null
NONE
null
### System Info

transformers 4.25.1
python 3.8.16

### Who can help?

@gante

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

When using `generate()` with `prefix_allowed_tokens_fn` (more precisely, when using `PrefixConstrainedLogitsProcessor`), if all tokens returned by `prefix_allowed_tokens_fn` have scores of `-inf`, the model does not comply with the constraints and picks a token that is not on the allowed token list.

### Expected behavior

Even if all allowed tokens have a score of `-inf`, the model should pick tokens from the list allowed by `prefix_allowed_tokens_fn`. I think it can be solved by using some clamp function or adding an epsilon value to this code:

https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/generation/logits_process.py#L692-L698

This is my own code to solve it. However, it might cause other bugs.

```python
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
    masked_score = torch.full_like(scores, -math.inf)
    for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])):
        for beam_id, sent in enumerate(beam_sent):
            allowed_idx = batch_id * self._num_beams + beam_id, self._prefix_allowed_tokens_fn(batch_id, sent)
            filtered_scores = torch.clamp(scores[allowed_idx], min=-10 ** 6)
            masked_score[allowed_idx] = filtered_scores
    return masked_score
```

Edit: the model works well with `torch.clamp()` at `min=-10 ** 6`, not `min=-10 ** 8`, when all allowed tokens' scores are `-inf`. Tokens with too low a score in the sequence may have affected the decoding step. I updated the code above.
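For context, a minimal sketch of how `prefix_allowed_tokens_fn` is wired into `generate()`; gpt2 and the prompt here are just stand-ins:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Only ever allow these token ids to be generated.
allowed_ids = tokenizer(" yes no", add_special_tokens=False).input_ids

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # Called at every decoding step with the tokens generated so far;
    # must return the list of token ids permitted for the next step.
    return allowed_ids

inputs = tokenizer("The answer is", return_tensors="pt")
output = model.generate(
    **inputs,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
    num_beams=2,
    max_new_tokens=5,
)
print(tokenizer.decode(output[0]))
```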
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22890/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22889/comments
https://api.github.com/repos/huggingface/transformers/issues/22889/events
https://github.com/huggingface/transformers/pull/22889
1,676,456,717
PR_kwDOCUB6oc5OwQFf
22,889
fix warning function call creating logger error (max_length and max_new_tokens)
{ "login": "QuentinAmbard", "id": 96781, "node_id": "MDQ6VXNlcjk2Nzgx", "avatar_url": "https://avatars.githubusercontent.com/u/96781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/QuentinAmbard", "html_url": "https://github.com/QuentinAmbard", "followers_url": "https://api.github.com/users/QuentinAmbard/followers", "following_url": "https://api.github.com/users/QuentinAmbard/following{/other_user}", "gists_url": "https://api.github.com/users/QuentinAmbard/gists{/gist_id}", "starred_url": "https://api.github.com/users/QuentinAmbard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QuentinAmbard/subscriptions", "organizations_url": "https://api.github.com/users/QuentinAmbard/orgs", "repos_url": "https://api.github.com/users/QuentinAmbard/repos", "events_url": "https://api.github.com/users/QuentinAmbard/events{/privacy}", "received_events_url": "https://api.github.com/users/QuentinAmbard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@QuentinAmbard @gante , could you please tell how to fix this bug? I still see \"logging error message\".", "You need to wait for the next release I guess, or apply the fix directly?", "> You need to wait for the next release I guess, or apply the fix directly?\r\n\r\nYep, or you can [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) ", "@QuentinAmbard okay, thanks for quicky response. I will better remove the logging from source code.", "@amyeroberts how to install from source, if new fix isn't merged with main branch? ", "@IamExperimenting The fix is merged into the main branch. Installing from source means installing the version of the library currently on main. The instructions were in the link I shared above: https://huggingface.co/docs/transformers/installation#install-from-source", "@amyeroberts thanks for the information, I installed from source but its throwing error\r\n```\r\nfrom transformers import pipeline\r\n```\r\n\r\nerror: \r\n```\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\ncannot import name `PartialState` from `accelerate`\r\n```", "You need to upgrade your `accelerate` library: `pip install --upgrade accelerate`.", "@amyeroberts @sgugger it worked after upgrading. However, it removed only `logging error` not the `warning message`." ]
1,681
1,683
1,681
CONTRIBUTOR
null
# What does this PR do?

PR #21347 introduced a bug in the warning we display, calling the wrong warn function. There is a bug open about this error: #22636.

It's either `warnings.warn("msg", UserWarning,)` or `logger.warning("msg")`. In this case we have `logger.warn`, which is deprecated, and `logger.warn("msg", category)` doesn't exist and throws an error:

```
--- Logging error ---
Traceback (most recent call last):
  File "/python-path/python3.9/logging/__init__.py", line 1083, in emit
    msg = self.format(record)
  File "/python-path/python3.9/logging/__init__.py", line 927, in format
    return fmt.format(record)
  File "/python-path/python3.9/logging/__init__.py", line 663, in format
    record.message = record.getMessage()
  File "/python-path/python3.9/logging/__init__.py", line 367, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
```

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

@gante ?
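The failure mode is reproducible with the standard library alone; a sketch (the message text is illustrative):

```python
import logging
import warnings

logging.basicConfig()
logger = logging.getLogger("demo")

# Buggy call: the extra positional argument is treated as a %-format argument
# for the message string, so the handler fails at emit time with
# "--- Logging error ---" and
# "TypeError: not all arguments converted during string formatting".
logger.warning("Both max_new_tokens and max_length seem to have been set.", UserWarning)

# Either of these is correct:
logger.warning("Both max_new_tokens and max_length seem to have been set.")
warnings.warn("Both max_new_tokens and max_length seem to have been set.", UserWarning)
```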
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22889", "html_url": "https://github.com/huggingface/transformers/pull/22889", "diff_url": "https://github.com/huggingface/transformers/pull/22889.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22889.patch", "merged_at": 1681992484000 }
https://api.github.com/repos/huggingface/transformers/issues/22888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22888/comments
https://api.github.com/repos/huggingface/transformers/issues/22888/events
https://github.com/huggingface/transformers/pull/22888
1,676,411,818
PR_kwDOCUB6oc5OwGRv
22,888
fix: GPTNeoX half inference error
{ "login": "SeongBeomLEE", "id": 65529313, "node_id": "MDQ6VXNlcjY1NTI5MzEz", "avatar_url": "https://avatars.githubusercontent.com/u/65529313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeongBeomLEE", "html_url": "https://github.com/SeongBeomLEE", "followers_url": "https://api.github.com/users/SeongBeomLEE/followers", "following_url": "https://api.github.com/users/SeongBeomLEE/following{/other_user}", "gists_url": "https://api.github.com/users/SeongBeomLEE/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeongBeomLEE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeongBeomLEE/subscriptions", "organizations_url": "https://api.github.com/users/SeongBeomLEE/orgs", "repos_url": "https://api.github.com/users/SeongBeomLEE/repos", "events_url": "https://api.github.com/users/SeongBeomLEE/events{/privacy}", "received_events_url": "https://api.github.com/users/SeongBeomLEE/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi! @younesbelkada \r\n\r\nIt works fine when executed in the way you described.\r\n\r\nHowever, what do you think about considering the case of using model.half?\r\n\r\nThanks!\r\n\r\nbefore:\r\n```\r\nmodel = GPTNeoXForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16)\r\nmodel.gpt_neox.layers[0].attention.norm_factor.dtype\r\n\r\noutput: torch.float16\r\n```\r\nafter:\r\n```\r\nmodel = GPTNeoXForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16)\r\nmodel.gpt_neox.layers[0].attention.norm_factor.dtype\r\n\r\noutput: torch.float16\r\n````", "The canonical way is indeed to use `torch_dtype=torch.float16` as it saves a lot of memory (otherwise you instantiate your model in float32 so take a lot of space, then convert it to half). But `model.half()` should work nethertheless. Using a buffer seems like a good solution, but make it non-persistent so it doesn't get inside the `state_dict` (with `persistent=False`).", "Added a parameter \"persistent=False\"\r\n\r\nThanks!", "Of course. Thanks!", "Failure of the FLax test is unrelated and already fixed on main, so merging!" ]
1,681
1,682
1,682
CONTRIBUTOR
null
`norm_factor` is still `torch.float32` after using `model.half()`, so I changed it to `register_buffer` so that it becomes `torch.float16` after `model.half()`. This error does not occur in all cases, but it does happen occasionally. Thanks!

Error message:

```
File "/data2/sblee/anaconda3/lib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 206, in _attn
    attn_scores = torch.baddbmm(
RuntimeError: expected scalar type Half but found Float
```

Error code:

```
attn_scores = torch.baddbmm(
    attn_scores,
    query,
    key.transpose(1, 2),
    beta=1.0,
    alpha=(torch.tensor(1.0, dtype=self.norm_factor.dtype, device=self.norm_factor.device) / self.norm_factor),
)
```

Before:

```
class GPTNeoXAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.num_attention_heads = config.num_attention_heads
        self.hidden_size = config.hidden_size
        self.head_size = self.hidden_size // self.num_attention_heads
        self.rotary_ndims = int(self.head_size * config.rotary_pct)
        max_positions = config.max_position_embeddings
        self.register_buffer(
            "bias",
            torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view(
                1, 1, max_positions, max_positions
            ),
        )
        self.register_buffer("masked_bias", torch.tensor(-1e9))
        self.rotary_emb = RotaryEmbedding(
            self.rotary_ndims, config.max_position_embeddings, base=config.rotary_emb_base
        )
        self.norm_factor = torch.sqrt(torch.tensor(self.head_size, dtype=torch.float32)).to(torch.get_default_dtype())
        self.query_key_value = nn.Linear(config.hidden_size, 3 * config.hidden_size)
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
```

After:

```
class GPTNeoXAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.num_attention_heads = config.num_attention_heads
        self.hidden_size = config.hidden_size
        self.head_size = self.hidden_size // self.num_attention_heads
        self.rotary_ndims = int(self.head_size * config.rotary_pct)
        max_positions = config.max_position_embeddings
        self.register_buffer(
            "bias",
            torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view(
                1, 1, max_positions, max_positions
            ),
        )
        self.register_buffer("masked_bias", torch.tensor(-1e9))
        self.rotary_emb = RotaryEmbedding(
            self.rotary_ndims, config.max_position_embeddings, base=config.rotary_emb_base
        )
        self.register_buffer("norm_factor", torch.sqrt(torch.tensor(self.head_size, dtype=torch.float32)).to(torch.get_default_dtype()))
        self.query_key_value = nn.Linear(config.hidden_size, 3 * config.hidden_size)
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
```

Before:

```
model = GPTNeoXForCausalLM.from_pretrained(F_MODEL_PATH, config=model_config)
model.half()
model.to("cuda")
model.gpt_neox.layers[0].attention.norm_factor.dtype

output: torch.float32
```

After:

```
model = GPTNeoXForCausalLM.from_pretrained(F_MODEL_PATH, config=model_config)
model.half()
model.to("cuda")
model.gpt_neox.layers[0].attention.norm_factor.dtype

output: torch.float16
```
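The review comments above converge on registering the buffer with `persistent=False`. A self-contained sketch of that behavior (the class name is a hypothetical stand-in, not the transformers code):

```python
import torch
from torch import nn


class AttentionWithBufferedNormFactor(nn.Module):
    # A stripped-down stand-in for GPTNeoXAttention showing only the buffer fix.
    def __init__(self, head_size: int):
        super().__init__()
        # persistent=False keeps norm_factor out of the state_dict while still
        # letting model.half() / model.to(...) cast and move it.
        self.register_buffer(
            "norm_factor",
            torch.sqrt(torch.tensor(head_size, dtype=torch.float32)),
            persistent=False,
        )


attn = AttentionWithBufferedNormFactor(head_size=64).half()
print(attn.norm_factor.dtype)    # torch.float16
print(attn.state_dict().keys())  # norm_factor is absent
```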
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22888", "html_url": "https://github.com/huggingface/transformers/pull/22888", "diff_url": "https://github.com/huggingface/transformers/pull/22888.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22888.patch", "merged_at": 1682087034000 }
https://api.github.com/repos/huggingface/transformers/issues/22887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22887/comments
https://api.github.com/repos/huggingface/transformers/issues/22887/events
https://github.com/huggingface/transformers/pull/22887
1,676,333,850
PR_kwDOCUB6oc5Ov1NV
22,887
Fix SAM example in documentation
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
As per title
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22887/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22887", "html_url": "https://github.com/huggingface/transformers/pull/22887", "diff_url": "https://github.com/huggingface/transformers/pull/22887.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22887.patch", "merged_at": 1681986162000 }
https://api.github.com/repos/huggingface/transformers/issues/22886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22886/comments
https://api.github.com/repos/huggingface/transformers/issues/22886/events
https://github.com/huggingface/transformers/pull/22886
1,676,301,688
PR_kwDOCUB6oc5OvuRZ
22,886
[`SAM`] Correct arxiv link
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do?

This PR fixes the link to the SAM paper with the correct arXiv link.

cc @ArthurZucker @amyeroberts @osanseviero
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22886", "html_url": "https://github.com/huggingface/transformers/pull/22886", "diff_url": "https://github.com/huggingface/transformers/pull/22886.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22886.patch", "merged_at": 1681982593000 }
https://api.github.com/repos/huggingface/transformers/issues/27654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27654/comments
https://api.github.com/repos/huggingface/transformers/issues/27654/events
https://github.com/huggingface/transformers/issues/27654
2,006,317,767
I_kwDOCUB6oc53lfrH
27,654
LongT5 - Errors
{ "login": "gozdeydd", "id": 86059342, "node_id": "MDQ6VXNlcjg2MDU5MzQy", "avatar_url": "https://avatars.githubusercontent.com/u/86059342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gozdeydd", "html_url": "https://github.com/gozdeydd", "followers_url": "https://api.github.com/users/gozdeydd/followers", "following_url": "https://api.github.com/users/gozdeydd/following{/other_user}", "gists_url": "https://api.github.com/users/gozdeydd/gists{/gist_id}", "starred_url": "https://api.github.com/users/gozdeydd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gozdeydd/subscriptions", "organizations_url": "https://api.github.com/users/gozdeydd/orgs", "repos_url": "https://api.github.com/users/gozdeydd/repos", "events_url": "https://api.github.com/users/gozdeydd/events{/privacy}", "received_events_url": "https://api.github.com/users/gozdeydd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! If you want some help you should provide a reproducer of what the error was with the output of `transformers-cli env` πŸ€— ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,703
1,703
NONE
null
I am not sure whether the models/documentation on the website are up to date. For example, I have experienced this with a LongT5 model: it gives an error that the transformers library doesn't have such a model. I assume this happens due to the fast changes going on with NLP models in general these days. Perhaps there is a new version?
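For reference, LongT5 loads in recent transformers versions; a sketch (the checkpoint name is one of the public Google releases and is an assumption here):

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

# Requires a transformers version that includes the LongT5 classes;
# older installs raise an import error because they do not exist there.
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")
```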
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27654/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22885/comments
https://api.github.com/repos/huggingface/transformers/issues/22885/events
https://github.com/huggingface/transformers/issues/22885
1,676,278,556
I_kwDOCUB6oc5j6fsc
22,885
KeyError: eval_loss when using Trainer (SpeechT5 fine-tuning)
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false } ]
[ "Added a Colab that demonstrates the issue with a minimal amount of code: https://colab.research.google.com/drive/12AFpcCE96C22-IxRRJjDIo1s4wDsP0v_?usp=sharing\r\n\r\nI still had a copy of the SpeechT5 TTS fine-tuning changes on 4.28.dev and that works fine, so something that changed between 4.28 and 4.29 has broken this. Still investigating.", "OK the issue seems to be that `stop_labels` is not present in the input (since we're not actually using them) and as a result, the evaluation loop thinks the model doesn't have labels (even though it does) and doesn't report the loss. \r\n\r\nI had initially removed `stop_labels` from `model.forward` when implementing the TTS fine-tuning logic, but had put it back at the last minute to keep backwards compatibility with the publicly released version of SpeechT5. That's why the fine-tuning Colab used to work but is now broken.\r\n\r\nThe question now is: why does the Trainer believe `stop_labels` are labels? And how can I tell it to ignore them?\r\n", "The workaround is to add the following when creating the training arguments:\r\n\r\n```python\r\ntraining_args = Seq2SeqTrainingArguments(\r\n ...\r\n label_names=[\"labels\"],\r\n)\r\n```\r\n\r\nThe Trainer looks at the signature of `model.forward()` and anything with `labels` in it is assumed to be labels, which in this case includes `stop_labels`. We'll remove this argument in a future version of Transformers. But until then you can override this by supplying your own `label_names` that does not include `stop_labels`.", "Ran into a similar error \r\n\r\n> The workaround is to add the following when creating the training arguments:\r\n> \r\n> ```python\r\n> training_args = Seq2SeqTrainingArguments(\r\n> ...\r\n> label_names=[\"labels\"],\r\n> )\r\n> ```\r\n> \r\n> The Trainer looks at the signature of `model.forward()` and anything with `labels` in it is assumed to be labels, which in this case includes `stop_labels`. We'll remove this argument in a future version of Transformers. But until then you can override this by supplying your own `label_names` that does not include `stop_labels`.\r\n\r\nRan into a very similar issue when fine-tuning BLIP2 and turns out this was the only line i was missing" ]
1,681
1,698
1,682
CONTRIBUTOR
null
### System Info

current main branch of Transformers (4.29.0.dev0, 20 Apr 2023)

### Who can help?

@hollance

### Information

- [x] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

We recently published a Colab notebook for fine-tuning SpeechT5 for TTS: https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ

This notebook worked fine previously but now it gives an error in `trainer.py` because the `eval_loss` is not part of the metrics. This happens when saving the checkpoint. The progress bar in the notebook shows "No log" for the validation loss.

I will look into this issue myself first and try to get a smaller reproducible case. My hunch is that something changed in Trainer between the time I wrote the notebook and now (for example, it now requires Accelerate).

### Expected behavior

The notebook should work as before.
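The workaround that emerges in the comments is a one-line change to the training arguments; a sketch (the `output_dir` value is a placeholder, other arguments omitted):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_finetuned",  # hypothetical output directory
    # Restrict label_names so the Trainer does not treat the unused
    # stop_labels forward argument as a label and skip eval_loss.
    label_names=["labels"],
)
```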
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22885/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22884/comments
https://api.github.com/repos/huggingface/transformers/issues/22884/events
https://github.com/huggingface/transformers/pull/22884
1,676,247,367
PR_kwDOCUB6oc5OvitP
22,884
Change schedule CI time
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22884). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do?

**This PR changes the test CI to be scheduled 2 hours after the image build CI.**

Although #22859 changed the CI to use `accelerate@main`, the last DeepSpeed CI docker image still has `accelerate==0.18.0`. This is because that image build takes ~1h30m to finish, but the test CI is scheduled (only) 1 hour after the image build CI.

Although from the next run the scheduled test CI will start to use `accelerate@main`, there will be a gap - i.e. it will use the `accelerate@main` **one day before** the current `main`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22884/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22884", "html_url": "https://github.com/huggingface/transformers/pull/22884", "diff_url": "https://github.com/huggingface/transformers/pull/22884.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22884.patch", "merged_at": 1681992068000 }
https://api.github.com/repos/huggingface/transformers/issues/22883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22883/comments
https://api.github.com/repos/huggingface/transformers/issues/22883/events
https://github.com/huggingface/transformers/pull/22883
1,676,161,979
PR_kwDOCUB6oc5OvQgg
22,883
Add FlaxWhisperForAudioClassification model
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sanchit-gandhi I have opened new PR. \r\n", "This actually broke a lot of tests on Flax Whisper, so reverting. Can you re-open the PR and rebase on main so we can see what went wrong?", "@sanchit-gandhi Request you to open this PR. \r\n", "Hey @raghavanone - unfortunately a PR can't be re-opened after it's been merged. The best thing to do is add commits to the branch and create a new pull request, copying all the details over and providing a link to the original pull request. See https://stackoverflow.com/questions/12674304/github-reopening-a-merged-pull-request for details." ]
1,681
1,683
1,683
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22883", "html_url": "https://github.com/huggingface/transformers/pull/22883", "diff_url": "https://github.com/huggingface/transformers/pull/22883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22883.patch", "merged_at": 1683219616000 }
https://api.github.com/repos/huggingface/transformers/issues/22882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22882/comments
https://api.github.com/repos/huggingface/transformers/issues/22882/events
https://github.com/huggingface/transformers/issues/22882
1,676,104,002
I_kwDOCUB6oc5j51FC
22,882
`Device to device copy is unsupported` RuntimeError
{ "login": "lms-mt", "id": 130641421, "node_id": "U_kgDOB8luDQ", "avatar_url": "https://avatars.githubusercontent.com/u/130641421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lms-mt", "html_url": "https://github.com/lms-mt", "followers_url": "https://api.github.com/users/lms-mt/followers", "following_url": "https://api.github.com/users/lms-mt/following{/other_user}", "gists_url": "https://api.github.com/users/lms-mt/gists{/gist_id}", "starred_url": "https://api.github.com/users/lms-mt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lms-mt/subscriptions", "organizations_url": "https://api.github.com/users/lms-mt/orgs", "repos_url": "https://api.github.com/users/lms-mt/repos", "events_url": "https://api.github.com/users/lms-mt/events{/privacy}", "received_events_url": "https://api.github.com/users/lms-mt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lms-mt, thanks for reporting this issue! \r\n\r\nSo that we can best help you, could you share the following: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet to reproduce the error", "cc @ArthurZucker ", "I am using main and I cannot reproduce this : \r\n```python \r\n>>> from transformers import GPT2Model\r\n>>> import torch\r\n>>> model = GPT2Model.from_pretrained(\"gpt2\")\r\n>>> device = torch.device(\"cuda\")\r\n>>> model.to(\"cuda\")\r\n```\r\nworks as expected", "@amyeroberts Sorry to reply later. \r\n```\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.26.1\r\n- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- Huggingface_hub version: 0.13.4\r\n- PyTorch version (GPU?): 2.0.0a0+gitc263bd4 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n- ```\r\n\r\n\r\n2. Sorry I can't. because we are working on a torch backend some kind like cuda. And it should be kept private for legal.\r\n\r\n@ArthurZucker \r\n\r\nThanks for your reply. Actually, I want to know is this message ommited from HuggingFace or torch? If I have this clue, I may resolve it myself.", "@lms-mt Since `.to` method works for our supported devices e.g. `\"cuda\"` or `\"cpu\"`, then I suspect the issue is arising from the custom backend. \r\n\r\nAs for where the error comes from, searching in both the [Hugging Face](https://github.com/search?q=org%3Ahuggingface++%22copy+is+unsupported%22&type=issues) and [PyTorch](https://github.com/search?q=org%3Apytorch+%22copy+is+unsupported%22&type=code) orgs returns no results. It's peculiar that it's raised on a line in the Hugging Face module. Is this possible that the custom backend is causing an early termination and raising the error? ", "> @lms-mt Since `.to` method works for our supported devices e.g. `\"cuda\"` or `\"cpu\"`, then I suspect the issue is arising from the custom backend.\r\n> \r\n> As for where the error comes from, searching in both the [Hugging Face](https://github.com/search?q=org%3Ahuggingface++%22copy+is+unsupported%22&type=issues) and [PyTorch](https://github.com/search?q=org%3Apytorch+%22copy+is+unsupported%22&type=code) orgs returns no results. It's peculiar that it's raised on a line in the Hugging Face module. Is this possible that the custom backend is causing an early termination and raising the error?\r\n\r\n@amyeroberts \r\nReally thanks for your reply. From what I search and your advices, I think this error may raised by the custom backend module. I will check it out. I will close this issue because this issue is not in well fromat. Have a good day." ]
1,681
1,682
1,682
NONE
null
### System Info

transformers: 4.20.1
platform: docker Ubuntu
python: 3.8.13

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I am using a pretrained GPT2 model for inference, and set the device to GPU at the start:

```
device = torch.device("cuda")
model.to(device)
```

Then I got:

```
File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1048, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 891, in forward
    outputs = block(
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 391, in forward
    attn_outputs = self.attn(
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 332, in forward
    attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
  File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 201, in _attn
    causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].to(torch.bool)
RuntimeError: Device to device copy is unsupported
```

Why was this error emitted? Thank you very much.

### Expected behavior

I couldn't find the same problem when searching, so I posted it here. I expect this error to go away.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22882/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22881/comments
https://api.github.com/repos/huggingface/transformers/issues/22881/events
https://github.com/huggingface/transformers/issues/22881
1,676,085,946
I_kwDOCUB6oc5j5wq6
22,881
Question about Bloom pretraining
{ "login": "ZeyuTeng96", "id": 96521059, "node_id": "U_kgDOBcDLYw", "avatar_url": "https://avatars.githubusercontent.com/u/96521059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeyuTeng96", "html_url": "https://github.com/ZeyuTeng96", "followers_url": "https://api.github.com/users/ZeyuTeng96/followers", "following_url": "https://api.github.com/users/ZeyuTeng96/following{/other_user}", "gists_url": "https://api.github.com/users/ZeyuTeng96/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeyuTeng96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeyuTeng96/subscriptions", "organizations_url": "https://api.github.com/users/ZeyuTeng96/orgs", "repos_url": "https://api.github.com/users/ZeyuTeng96/repos", "events_url": "https://api.github.com/users/ZeyuTeng96/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeyuTeng96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @ZeyuTeng96 thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
### Feature request Hi, a question about Bloom pretraining. In the pretraining phase, I prepared a set of unlabeled texts in a .txt file. Each line is a paper or a paragraph of a paper. Each line should be independent, so the next line is not related to the previous one. The run_clm.py script reads those texts line by line, concatenates all texts from the dataset, and generates blocks using the user-defined block_size param or its default value (1024). I have a question about the concatenation. If each line in my .txt file describes a different thing (i.e. the texts or paragraphs are independent), the concatenation merges them all without an explicit 'end of text/end of paper' mark. How does the Bloom model then predict the next token based on the previous context? How can the model predict the first token of a new paragraph from a previous context that describes something unrelated? I tried to make each block contain only one paragraph, but the paragraphs do not have the same length and I get an error. If I use the concatenation mechanism, it feels wrong. Can anyone help me figure this out? ### Motivation Trying to make each line of text in the .txt file an individual block. ### Your contribution I am not sure which approach is right
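One possible workaround, sketched below (not taken from the issue itself, and the checkpoint name is only an example): append the tokenizer's EOS token to every independent line before the run_clm.py-style concatenation, so each block boundary carries an explicit end-of-text marker.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

def tokenize_with_eos(examples):
    # Tokenize each independent line, then close it with the EOS id so the
    # grouped blocks contain explicit document boundaries.
    output = tokenizer(examples["text"])
    for ids in output["input_ids"]:
        ids.append(tokenizer.eos_token_id)
    return output

# Intended as a drop-in for the tokenize function in run_clm.py, e.g.:
# tokenized = raw_datasets.map(tokenize_with_eos, batched=True, remove_columns=["text"])
```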
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22881/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22880/comments
https://api.github.com/repos/huggingface/transformers/issues/22880/events
https://github.com/huggingface/transformers/pull/22880
1,675,931,611
PR_kwDOCUB6oc5OufgW
22,880
tests: Fix flaky test for NLLB-MoE
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @ydshieh ", "Happy to!" ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? Fixes #22464 (and added some light docs edits I happened to notice). From my comment in the issue: I looked into this and I think the flakiness is caused by the natural variability in the sparse MoE layers. Specifically, when they calculate which experts to use in the gating logic, they compute probabilities imperfectly for two different sets of inputs: one with prior inputs concatenated with the past key values and one with just the past key values. The test usually passes because the magnitude of the difference is usually small. Notably, when the vocab size is increased this pass rate goes up (and vice versa), since the increased representational capacity can help the model make more accurate decisions about which experts to use for each input. For example, increasing the vocab size in the config from its current 99 to 999 increases the pass rate from ~80% to ~95%. I think this flakiness is inherent in the sparse layers, but if I understand right the point of the test is to check that the decoder uses the past properly, so I edited the test to use dense layers and moved the rtol down to 1e-3 to be in line with the other models' version of this check. Wrote a loop that ran this check 1000 times and every run passed (see the sketch below). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker, @amyeroberts
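A rough, self-contained illustration of the kind of repetition loop described above (the `check_fn` here is a hypothetical stand-in for the decode-with-past equivalence check, not the actual NLLB-MoE test):

```python
import torch

def estimate_pass_rate(check_fn, n=1000):
    # Run a seeded check many times and report the fraction of passes.
    return sum(check_fn(seed) for seed in range(n)) / n

def check_fn(seed):
    torch.manual_seed(seed)
    a = torch.randn(4, 8)
    b = a + 1e-5 * torch.randn(4, 8)  # small numerical noise, as in MoE routing
    return bool(torch.allclose(a, b, rtol=1e-3, atol=1e-4))

print(estimate_pass_rate(check_fn))  # close to 1.0 with these tolerances
```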
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22880", "html_url": "https://github.com/huggingface/transformers/pull/22880", "diff_url": "https://github.com/huggingface/transformers/pull/22880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22880.patch", "merged_at": 1682093381000 }
https://api.github.com/repos/huggingface/transformers/issues/22879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22879/comments
https://api.github.com/repos/huggingface/transformers/issues/22879/events
https://github.com/huggingface/transformers/pull/22879
1,675,892,819
PR_kwDOCUB6oc5OuXiO
22,879
[Examples/TensorFlow] minor refactoring to allow compatible datasets to work
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
MEMBER
null
This PR removes the hard-coded "wikitext" values from the scripts so that they can be used in conjunction with any compatible dataset. @Rocketknight1 FYI.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22879", "html_url": "https://github.com/huggingface/transformers/pull/22879", "diff_url": "https://github.com/huggingface/transformers/pull/22879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22879.patch", "merged_at": 1681995062000 }
https://api.github.com/repos/huggingface/transformers/issues/22878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22878/comments
https://api.github.com/repos/huggingface/transformers/issues/22878/events
https://github.com/huggingface/transformers/pull/22878
1,675,886,949
PR_kwDOCUB6oc5OuWV1
22,878
[tensorflow] Add support for the `is_symbolic_tensor` predicate
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @Rocketknight1 ", "This looks clean! Do you have any links to documentation/discussion about `is_symbolic_tensor` in TF, though? This is the first I've heard of it - I can see the relevant code in the TF codebase, but are we guaranteed that the behaviour won't change before the 2.14 release? Also, given that it's in the codebase already, won't it be in the 2.13 release rather than 2.14?", "I wrote the `is_symbolic_tensor` code in question. The 2.14 was a guess from me, I wasn't sure if the original PR was going to make the TF branch cut, but it looks like it will make 2.13. The intention of `is_symbolic_tensor` is actually to provide more stability:\r\n\r\nWe're looking into breaking the current inheritance setup in TF. EagerTensor inherits from symbolic Tensor, and this adds a lot of weird complication in the TF codebase, along with awkward checks like type(t) == Tensor. This method was introduced to avoid churning the few users who need to distinguish between eager and symbolic tensors.\r\n\r\nWe don't have a proposal yet for the split, but we're front-running some of this prep/cleanup.", "LGTM in that case, and thanks for the clarification!" ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? This PR adds support for the `is_symbolic_tensor` predicate in TensorFlow. This predicate will become available starting with version 2.14. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
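Roughly, the guarded pattern this PR enables looks like the sketch below (a simplified illustration, not the exact library code; the `type(t) == tf.Tensor` fallback relies on EagerTensor currently being a subclass of the symbolic `tf.Tensor`, as discussed in the comments):

```python
import tensorflow as tf

def is_symbolic(t):
    # Prefer the official predicate when the installed TF exposes it.
    if hasattr(tf, "is_symbolic_tensor"):
        return tf.is_symbolic_tensor(t)
    # Fallback for older TF: eager tensors subclass tf.Tensor, so an exact
    # type check distinguishes symbolic tensors from eager ones.
    return type(t) == tf.Tensor
```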
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22878", "html_url": "https://github.com/huggingface/transformers/pull/22878", "diff_url": "https://github.com/huggingface/transformers/pull/22878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22878.patch", "merged_at": 1682016402000 }
https://api.github.com/repos/huggingface/transformers/issues/22877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22877/comments
https://api.github.com/repos/huggingface/transformers/issues/22877/events
https://github.com/huggingface/transformers/issues/22877
1,675,874,805
I_kwDOCUB6oc5j49H1
22,877
Llama fast tokenizer `train_new_from_iterator` returns `TypeError: 'NoneType' object is not subscriptable`
{ "login": "larrylawl", "id": 40198156, "node_id": "MDQ6VXNlcjQwMTk4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/40198156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/larrylawl", "html_url": "https://github.com/larrylawl", "followers_url": "https://api.github.com/users/larrylawl/followers", "following_url": "https://api.github.com/users/larrylawl/following{/other_user}", "gists_url": "https://api.github.com/users/larrylawl/gists{/gist_id}", "starred_url": "https://api.github.com/users/larrylawl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/larrylawl/subscriptions", "organizations_url": "https://api.github.com/users/larrylawl/orgs", "repos_url": "https://api.github.com/users/larrylawl/repos", "events_url": "https://api.github.com/users/larrylawl/events{/privacy}", "received_events_url": "https://api.github.com/users/larrylawl/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Same problem here. The code appears to be looking for a ByteLevel pretokenizer, but the json.load(_tokenizer) at line 644 of tokenization_utils_fast.py is initializing one with pretokenizer equal to None", "Hey! Thanks for reporting! I can reproduce this, indeed it's bug will investigate", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Should have been fixed by #22959 " ]
1,681
1,684
1,684
NONE
null
### System Info accelerate==0.18.0 aiohttp==3.8.4 aiosignal==1.3.1 anyio==3.6.2 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-timeout==4.0.2 attrs==23.1.0 backcall==0.2.0 beautifulsoup4==4.12.2 bitsandbytes==0.38.1 bleach==6.0.0 certifi==2022.12.7 cffi==1.15.1 charset-normalizer==3.1.0 cmake==3.26.3 comm==0.1.3 datasets==2.11.0 debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 evaluate==0.4.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.12.0 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.4.0 huggingface-hub==0.13.4 idna==3.4 importlib-metadata==6.5.0 importlib-resources==5.12.0 ipykernel==6.22.0 ipython==8.12.0 ipython-genutils==0.2.0 isoduration==20.11.0 jedi==0.18.2 Jinja2==3.1.2 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter_client==8.2.0 jupyter_core==5.3.0 jupyter_server==2.5.0 jupyter_server_terminals==0.4.4 jupyterlab-pygments==0.2.2 lit==16.0.1 MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 mpmath==1.3.0 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.5 nbclient==0.7.3 nbconvert==7.3.1 nbformat==5.8.0 nest-asyncio==1.5.6 networkx==3.1 notebook==6.5.4 notebook_shim==0.2.2 numpy==1.24.2 nvidia-cublas-cu11==11.10.3.66 nvidia-cuda-cupti-cu11==11.7.101 nvidia-cuda-nvrtc-cu11==11.7.99 nvidia-cuda-runtime-cu11==11.7.99 nvidia-cudnn-cu11==8.5.0.96 nvidia-cufft-cu11==10.9.0.58 nvidia-curand-cu11==10.2.10.91 nvidia-cusolver-cu11==11.4.0.1 nvidia-cusparse-cu11==11.7.4.91 nvidia-nccl-cu11==2.14.3 nvidia-nvtx-cu11==11.7.91 packaging==23.1 pandas==2.0.0 pandocfilters==1.5.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 pkgutil_resolve_name==1.3.10 platformdirs==3.2.0 prometheus-client==0.16.0 prompt-toolkit==3.0.38 protobuf==3.20.0 psutil==5.9.5 ptyprocess==0.7.0 pure-eval==0.2.2 pyarrow==11.0.0 pycparser==2.21 Pygments==2.15.1 pyrsistent==0.19.3 python-dateutil==2.8.2 python-dotenv==1.0.0 python-json-logger==2.0.7 pytz==2023.3 PyYAML==6.0 pyzmq==25.0.2 regex==2023.3.23 requests==2.28.2 responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 Send2Trash==1.8.0 sentencepiece==0.1.98 six==1.16.0 sniffio==1.3.0 soupsieve==2.4.1 stack-data==0.6.2 sympy==1.11.1 terminado==0.17.1 tinycss2==1.2.1 tokenizers==0.13.3 torch==2.0.0 tornado==6.3 tqdm==4.65.0 traitlets==5.9.0 -e git+https://github.com/huggingface/transformers.git@474bf508dfe0d46fc38585a1bb793e5ba74fddfd#egg=transformers triton==2.0.0 typing_extensions==4.5.0 tzdata==2023.3 uri-template==1.2.0 urllib3==1.26.15 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 websocket-client==1.5.1 xxhash==3.2.0 yarl==1.8.2 zipp==3.15.0 ### Who can help? @ArthurZucker , @Narsil ### Information - [] The official example scripts - [X ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Convert llama weights to hf format ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size tokenizer_only --output_dir /output/path ``` 2. Train new tokenizer from old. 
``` from transformers import AutoTokenizer old_tokenizer = AutoTokenizer.from_pretrained("/output/path") old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50) ``` ### Expected behavior ## Behavior I ran into the error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[3], line 5 3 old_tokenizer = AutoTokenizer.from_pretrained(PATH_TO_LLAMA_DIR,) ----> 5 old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50) File ~/transformers/src/transformers/tokenization_utils_fast.py:709, in PreTrainedTokenizerFast.train_new_from_iterator(self, text_iterator, vocab_size, length, new_special_tokens, special_tokens_map, **kwargs) [707](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=706) if tokenizer_json["model"]["type"] == "Unigram" and unk_token is not None: [708](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=707) kwargs["unk_token"] = unk_token --> [709](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=708) if tokenizer_json["pre_tokenizer"]["type"] == "ByteLevel": [710](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=709) kwargs["initial_alphabet"] = pre_tokenizers_fast.ByteLevel.alphabet() [712](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=711) trainer_class = MODEL_TO_TRAINER_MAPPING[tokenizer_json["model"]["type"]] TypeError: 'NoneType' object is not subscriptable ``` ## Analysis Inspecting my `tokenizer.json` file ([tokenizer.zip](https://github.com/huggingface/transformers/files/11279412/tokenizer.zip)), I realised my `"pre_tokenizer"` was `null`, which led to the error. I'm not sure if it helps, but I had an issue converting the llama weights to hf format (step 1) due to the protobuf version bug described [here](https://github.com/huggingface/transformers/issues/21128). I fixed it by downgrading my protobuf to version 3.20.
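One hedged workaround sketch for the `"pre_tokenizer": null` case (the Metaspace settings below are an assumption for a SentencePiece-style tokenizer like Llama's, not a verified fix; the proper fix landed in the library later):

```python
import json

with open("tokenizer.json") as f:  # the file produced in /output/path
    tok = json.load(f)

if tok.get("pre_tokenizer") is None:
    # Hypothetical patch: give the tokenizer an explicit pre-tokenizer so
    # train_new_from_iterator no longer hits the NoneType subscript error.
    tok["pre_tokenizer"] = {
        "type": "Metaspace",
        "replacement": "\u2581",
        "add_prefix_space": True,
    }

with open("tokenizer.json", "w") as f:
    json.dump(tok, f, ensure_ascii=False)
```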
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22877/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22876/comments
https://api.github.com/repos/huggingface/transformers/issues/22876/events
https://github.com/huggingface/transformers/pull/22876
1,675,832,380
PR_kwDOCUB6oc5OuLXc
22,876
Remove broken test_data symlink in legacy s2s examples
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? This PR removes the broken `test_data` symlink in `examples/legacy/seq2seq/test_data` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22876/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22876", "html_url": "https://github.com/huggingface/transformers/pull/22876", "diff_url": "https://github.com/huggingface/transformers/pull/22876.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22876.patch", "merged_at": 1682087743000 }
https://api.github.com/repos/huggingface/transformers/issues/22875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22875/comments
https://api.github.com/repos/huggingface/transformers/issues/22875/events
https://github.com/huggingface/transformers/pull/22875
1,675,803,947
PR_kwDOCUB6oc5OuFgj
22,875
Generation: only search for eos_token if set
{ "login": "xloem", "id": 279585, "node_id": "MDQ6VXNlcjI3OTU4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xloem", "html_url": "https://github.com/xloem", "followers_url": "https://api.github.com/users/xloem/followers", "following_url": "https://api.github.com/users/xloem/following{/other_user}", "gists_url": "https://api.github.com/users/xloem/gists{/gist_id}", "starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xloem/subscriptions", "organizations_url": "https://api.github.com/users/xloem/orgs", "repos_url": "https://api.github.com/users/xloem/repos", "events_url": "https://api.github.com/users/xloem/events{/privacy}", "received_events_url": "https://api.github.com/users/xloem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> 1. If you rebase with `main`, you'll see that there is another decoding method with the same pattern. Would you be able to rebase and push the change there as well?\r\n\r\nDone. I have not reviewed the rest of the function for synchronization points but observe the impact of this change will be much smaller with the nested assistant loop. Nice new feature.\r\n\r\n> 2. How's the system where you noticed the big speed change? How are you running `.generate()`? I'd be interested in knowing more about it :)\r\n\r\nThis is an old Asus KGPE-D16 motherboard. They've been popular with some independent developers as higher-end hackable hardware. It has two old K80s in it and is running the Dasharo third-party bios firmware, which adds support for newer PCI cards.\r\n\r\nThe firmware is not fully polished and I was observing corruption when transferring data between cards; the solution from nvidia's forums was to pass `iommu=soft` to the kernel. This fixes the issue in a pinch but makes data transfer very slow, and points where data is transferred became the biggest bottlenecks.\r\n\r\nThe generation call I'm presently using is roughly ````model.generate(input_ids,\r\n do_sample=False,\r\n min_length=10,\r\n max_length=50,\r\n top_p=0.95,\r\n temperature=0.0)```` from the cuda branch of the gptq llama repository.\r\n\r\nThe model approaches 40GB in size and is spread across all 4 logical cards using huggingface accelerate `device_map=\"auto\"`. (I've made another small patch to transformers, not submitted yet (it would affect every model, hard to test), to additionally reduce the need to transfer data between the cards inside the model layer loop, when running with accelerate. The attention mask and position ids are not properly moved off the first card to prepare for layers on other cards, with the vanilla code.)", "@xloem thank you for the explanation! πŸ’› " ]
1,681
1,681
1,681
CONTRIBUTOR
null
In generation, the current check for `unfinished_sequences.max()`, which finds sequences that have ended early via `eos_token_id`, creates a synchronization point even when there is no `eos_token`, which slows inference down. This pull request moves that calculation inside the condition that checks for an `eos_token`, so the slowdown can be removed by disabling this token. On my old system with `iommu=soft`, this change is saving me 6 seconds per token on a large model by setting `model.config.eos_token_id = None`. @gante
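For illustration, a minimal sketch of the setup this change targets (the model name is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# With no eos token configured, the early-stopping check (and its device
# synchronization) is skipped and generation runs to max_length. Depending
# on the version, model.generation_config.eos_token_id may also need clearing.
model.config.eos_token_id = None

inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(**inputs, max_length=20, do_sample=False)
print(tokenizer.decode(out[0]))
```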
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22875", "html_url": "https://github.com/huggingface/transformers/pull/22875", "diff_url": "https://github.com/huggingface/transformers/pull/22875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22875.patch", "merged_at": 1681989509000 }
https://api.github.com/repos/huggingface/transformers/issues/22874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22874/comments
https://api.github.com/repos/huggingface/transformers/issues/22874/events
https://github.com/huggingface/transformers/pull/22874
1,675,708,671
PR_kwDOCUB6oc5OtxT5
22,874
DDP fixes for training
{ "login": "winglian", "id": 381258, "node_id": "MDQ6VXNlcjM4MTI1OA==", "avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/winglian", "html_url": "https://github.com/winglian", "followers_url": "https://api.github.com/users/winglian/followers", "following_url": "https://api.github.com/users/winglian/following{/other_user}", "gists_url": "https://api.github.com/users/winglian/gists{/gist_id}", "starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/winglian/subscriptions", "organizations_url": "https://api.github.com/users/winglian/orgs", "repos_url": "https://api.github.com/users/winglian/repos", "events_url": "https://api.github.com/users/winglian/events{/privacy}", "received_events_url": "https://api.github.com/users/winglian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sgugger, solution is exactly what we have in Accelerate, and would be a good way to keep it working until the Accelerate integration is fully finished :) " ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? While trying to train Stable LM or even Llama, I ran into a couple of issues with multi-GPU training and DDP. I've added a check to skip DDP wrapping in this case, since torch doesn't support it: see https://github.com/pytorch/pytorch/blob/main/torch/nn/parallel/distributed.py#L686-L694 ``` File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1633, in train return inner_training_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1720, in _inner_training_loop model = self._wrap_model(self.model_wrapped) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1545, in _wrap_model model = nn.parallel.DistributedDataParallel( File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 571, in __init__ self._log_and_throw( File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 769, in _log_and_throw raise err_type(err_msg) RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient. ``` Added another check for the `no_sync` method: ``` File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1634, in train return inner_training_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1900, in _inner_training_loop with model.no_sync(): File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'no_sync' ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
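In isolation, the two guards amount to something like this sketch (illustrative, not the exact Trainer diff; the comments note this matches the pattern Accelerate uses):

```python
import contextlib
import torch.nn as nn

def should_wrap_ddp(model: nn.Module) -> bool:
    # DistributedDataParallel refuses modules with no trainable parameters,
    # so skip wrapping in that case.
    return any(p.requires_grad for p in model.parameters())

def no_sync_ctx(model):
    # Plain (unwrapped) models have no no_sync(); fall back to a no-op
    # context manager so gradient-accumulation code works either way.
    return model.no_sync() if hasattr(model, "no_sync") else contextlib.nullcontext()
```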
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22874", "html_url": "https://github.com/huggingface/transformers/pull/22874", "diff_url": "https://github.com/huggingface/transformers/pull/22874.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22874.patch", "merged_at": 1682091723000 }
https://api.github.com/repos/huggingface/transformers/issues/22873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22873/comments
https://api.github.com/repos/huggingface/transformers/issues/22873/events
https://github.com/huggingface/transformers/issues/22873
1,675,480,037
I_kwDOCUB6oc5j3cvl
22,873
While converting llama-13b weights, getting this error: RuntimeError: Internal: unk is not defined.
{ "login": "Ahtesham00", "id": 88507331, "node_id": "MDQ6VXNlcjg4NTA3MzMx", "avatar_url": "https://avatars.githubusercontent.com/u/88507331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ahtesham00", "html_url": "https://github.com/Ahtesham00", "followers_url": "https://api.github.com/users/Ahtesham00/followers", "following_url": "https://api.github.com/users/Ahtesham00/following{/other_user}", "gists_url": "https://api.github.com/users/Ahtesham00/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ahtesham00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ahtesham00/subscriptions", "organizations_url": "https://api.github.com/users/Ahtesham00/orgs", "repos_url": "https://api.github.com/users/Ahtesham00/repos", "events_url": "https://api.github.com/users/Ahtesham00/events{/privacy}", "received_events_url": "https://api.github.com/users/Ahtesham00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "facing the same issue.", "Hey! Thanks for reporting I'll investigate this! ", "I have the same issue when I use the latest version of torch.", "I did not find the solution. but if someone wants to download the weights.\r\nfollowing link has all the versions.\r\n\r\nhttps://huggingface.co/elinas", "Okay, We update the conversion script, which should have fixed most issues. I downloaded the tokenizer model, and re-tried the conversion, and I did not have any issue. Make sure you are using the latest transformers version.", "I tried with the latest code from the main branch, but still getting the same issue\r\n\r\n<img width=\"1400\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/12937285/bea4eb23-ee9b-4acf-b2a5-60df5411cd24\">\r\n", "I am getting the same error message when running the conversion for the 7B model. Tried installing the latest version (4.29.2) but the error persists. Same traceback as @dittops but mine has a nicer formatting.", "Again, the issue is most probably with the tokenizer file that you are using, which is outdated. Yes you need to upgrade to the latest transformers version, but you also need to use the original sentencepiece model in order for the conversion to properly work! ", "Thanks for following up. I have the llama weights/tokenizer that were updated on 3/26/23. Isn't that the latest version of the tokenizer? \r\n\r\nAlso I'm not sure what you mean by the original sentencepiece model (unless you mean the model from prior to the 3/26 update).", "When you say:\r\n> I have the llama weights/tokenizer that were updated on 3/26/23\r\n\r\ndo you mean the META weights and tokenizer? \r\nOtherwise can you share a notebook with a reproducer? The issue with llama is that a PR was made too early and thus lots of checkpoints and previous tokenizers (meaning hf tokenizers json) are incorrect.", "@ArthurZucker I have the META weights and tokenizer. The issue share is with that. For sentencepiece, is there a specific version to be used?", "> \r\n> > I have the llama weights/tokenizer that were updated on 3/26/23\r\n> \r\n> do you mean the META weights and tokenizer? Otherwise can you share a notebook with a reproducer? The issue with llama is that a PR was made too early and thus lots of checkpoints and previous tokenizers (meaning hf tokenizers json) are incorrect.\r\n\r\nAh I see. The llama weights I have come from [Meta's torrent PR](https://github.com/facebookresearch/llama/pull/73). I did not get them from HuggingFace, if you are referring to [this](https://github.com/facebookresearch/llama/pull/109) PR.", "Ok πŸ‘πŸ» I'll give it another go, but I remember trying with those exact weights and getting a correct conversion. \r\nWill get back to you soon! ", "Would you mind sending me the file via google drive? The torrent link seems down", "The torrent is showing as up for me right now, but if it isn't working for you I am happy to send you a copy of the 7B folder I am using. The entire folder for the 7B model is ~13-14GB. I'm trying to compress it right now but it will take a little bit to finish.", "Just the tokenizer files are enough! ", "Email sent!", "@egoetz where you able to solve this issue?", "@egoetz told me that installing GIT LFS + using the tokenizer at `huggyllama/llama-7b` worked. 
\r\nI received the email but could not access the files, as they were shared via a private mail provider rather than Drive πŸ˜… \r\nIf you are trying to convert the original model (by that I mean going from the spm model to transformers), make sure you have the latest version of `transformers`. ", "I was able to resolve it by replacing `tokenizer.model` with one from Hugging Face. Thank you!", "I'm not sure I understand. If you are trying to **convert** a checkpoint/tokenizer, then you don't need to use an already converted one. The script is to go from the original tokenizer to the HF format. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,690
1,690
NONE
null
### System Info OS : Ubuntu Virtual Env : **accelerate==0.18.0 certifi==2022.12.7 charset-normalizer==3.1.0 cmake==3.26.3 filelock==3.12.0 huggingface-hub==0.13.4 idna==3.4 Jinja2==3.1.2 lit==16.0.1 MarkupSafe==2.1.2 mpmath==1.3.0 networkx==3.1 numpy==1.24.2 nvidia-cublas-cu11==11.10.3.66 nvidia-cuda-cupti-cu11==11.7.101 nvidia-cuda-nvrtc-cu11==11.7.99 nvidia-cuda-runtime-cu11==11.7.99 nvidia-cudnn-cu11==8.5.0.96 nvidia-cufft-cu11==10.9.0.58 nvidia-curand-cu11==10.2.10.91 nvidia-cusolver-cu11==11.4.0.1 nvidia-cusparse-cu11==11.7.4.91 nvidia-nccl-cu11==2.14.3 nvidia-nvtx-cu11==11.7.91 packaging==23.1 psutil==5.9.5 PyYAML==6.0 regex==2023.3.23 requests==2.28.2 sentencepiece==0.1.98 sympy==1.11.1 tokenizers==0.13.3 torch==2.0.0 tqdm==4.65.0 transformers==4.28.1 triton==2.0.0 typing_extensions==4.5.0 urllib3==1.26.15** ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Used the following command to convert llama-13B weights into hf format: `python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /home/unconveretd-weights --model_size 13B --output_dir /home/test-converted` ### Expected behavior **It should generate the converted weights, but instead it generates this error:** Loading the checkpoint in a Llama model. Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41/41 [00:17<00:00, 2.35it/s] Saving in the Transformers format. Saving a LlamaTokenizerFast to /home/test-converted. Traceback (most recent call last): File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 278, in <module> main() File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 274, in main write_tokenizer(args.output_dir, spm_path) File "/home/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 248, in write_tokenizer tokenizer = tokenizer_class(input_tokenizer_path) File "/home/myenv/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 89, in __init__ super().__init__( File "/home/myenv/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 117, in __init__ slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs) File "/home/myenv/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py", line 96, in __init__ self.sp_model.Load(vocab_file) File "/home/myenv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load return self.LoadFromFile(model_file) File "/home/myenv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) RuntimeError: Internal: unk is not defined.
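A quick sanity check suggested here (not part of the conversion script): verify the original SentencePiece file loads on its own before running the conversion; a truncated or placeholder `tokenizer.model` raises the same "unk is not defined" error.

```python
import sentencepiece as spm

# Load the tokenizer.model from the input directory used in the command above.
sp = spm.SentencePieceProcessor()
sp.Load("/home/unconveretd-weights/tokenizer.model")
print(sp.GetPieceSize())  # should print the vocab size, e.g. 32000 for llama
```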
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22872/comments
https://api.github.com/repos/huggingface/transformers/issues/22872/events
https://github.com/huggingface/transformers/pull/22872
1,675,473,796
PR_kwDOCUB6oc5Os9jz
22,872
Moved labels to the same device as logits for OPT, CodeGen, GPT-J and Pix2Struct models
{ "login": "sushmanthreddy", "id": 73489688, "node_id": "MDQ6VXNlcjczNDg5Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sushmanthreddy", "html_url": "https://github.com/sushmanthreddy", "followers_url": "https://api.github.com/users/sushmanthreddy/followers", "following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}", "gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}", "starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions", "organizations_url": "https://api.github.com/users/sushmanthreddy/orgs", "repos_url": "https://api.github.com/users/sushmanthreddy/repos", "events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}", "received_events_url": "https://api.github.com/users/sushmanthreddy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Glad to help! " ]
1,681
1,682
1,681
CONTRIBUTOR
null
# What does this PR do? As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), moved labels to the same device as logits for the OPT and CodeGen models. @sgugger, could you please review and merge this PR? I am sorry for the mess in this PR and will not repeat it in the future; I should have created a separate branch for each PR.
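The pattern these PRs apply, shown in isolation (a standalone illustration, not the model code itself):

```python
import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()
logits = torch.randn(2, 5, 10)           # may be produced on the last device
labels = torch.randint(0, 10, (2, 5))    # may arrive on a different device

# Move labels to the logits' device so model-parallel setups don't crash
# when computing the loss across devices.
labels = labels.to(logits.device)
loss = loss_fct(logits.view(-1, 10), labels.view(-1))
```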
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22872/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22872", "html_url": "https://github.com/huggingface/transformers/pull/22872", "diff_url": "https://github.com/huggingface/transformers/pull/22872.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22872.patch", "merged_at": 1681995174000 }
https://api.github.com/repos/huggingface/transformers/issues/22871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22871/comments
https://api.github.com/repos/huggingface/transformers/issues/22871/events
https://github.com/huggingface/transformers/pull/22871
1,675,444,949
PR_kwDOCUB6oc5Os3Uk
22,871
Moved labels to the same device as logits for OPT, GPT-J, CodeGen and Pix2Struct models
{ "login": "sushmanthreddy", "id": 73489688, "node_id": "MDQ6VXNlcjczNDg5Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sushmanthreddy", "html_url": "https://github.com/sushmanthreddy", "followers_url": "https://api.github.com/users/sushmanthreddy/followers", "following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}", "gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}", "starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions", "organizations_url": "https://api.github.com/users/sushmanthreddy/orgs", "repos_url": "https://api.github.com/users/sushmanthreddy/repos", "events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}", "received_events_url": "https://api.github.com/users/sushmanthreddy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "### there are some things to be changed i will keep pr after that changes", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22871). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), moved labels to the same device as logits for the OPT, GPT-J, CodeGen and Pix2Struct models. I am new to open source, so please give me suggestions for making better PRs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22871", "html_url": "https://github.com/huggingface/transformers/pull/22871", "diff_url": "https://github.com/huggingface/transformers/pull/22871.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22871.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22870/comments
https://api.github.com/repos/huggingface/transformers/issues/22870/events
https://github.com/huggingface/transformers/pull/22870
1,675,400,630
PR_kwDOCUB6oc5OstyB
22,870
Fix to removing ESM special tokens
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "that took like 45 seconds how do you see your notifications that quickly", "Ah ah, I was on my GitHub already that's all.", "I'm intimidated nonetheless!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
MEMBER
null
This is a followup to PR #22770 - I forgot that, because of the way the ESM tokenizer is structured, the EOS token would come back after it was saved and reloaded. By making the special tokens arguments to the tokenizer, we can set them using `init_kwargs` and ensure that they stay changed permanently. Sorry for overlooking this in the last PR!
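A rough sketch of the round trip this fix targets; the checkpoint name is only an example, and treating `eos_token=None` as a load-time override is an assumption about the tokenizer's init kwargs:

```python
from transformers import AutoTokenizer

# Example ESM checkpoint; override the EOS token at load time.
tok = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D", eos_token=None)
tok.save_pretrained("./esm-no-eos")

# Before this fix, reloading could silently restore the EOS token because
# the special tokens were not stored in the tokenizer's init_kwargs.
reloaded = AutoTokenizer.from_pretrained("./esm-no-eos")
print(reloaded.eos_token)  # expected to stay None after the fix
```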
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22870", "html_url": "https://github.com/huggingface/transformers/pull/22870", "diff_url": "https://github.com/huggingface/transformers/pull/22870.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22870.patch", "merged_at": 1681929749000 }
https://api.github.com/repos/huggingface/transformers/issues/22869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22869/comments
https://api.github.com/repos/huggingface/transformers/issues/22869/events
https://github.com/huggingface/transformers/pull/22869
1,675,390,619
PR_kwDOCUB6oc5Osrrb
22,869
Fixup multigpu local_rank
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? The `local_rank` wasn't being properly set when using the `PartialState`, causing failures on the nightlies. This PR fixes it. Fixes # (issue) Failing nightly tests ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
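For reference, a minimal sketch of reading ranks from Accelerate's `PartialState`, which is the value the trainer needs to pick up correctly:

```python
from accelerate import PartialState

state = PartialState()
# local_process_index is this process's rank on its own node;
# process_index is the global rank across all nodes.
print(f"global rank: {state.process_index}, local rank: {state.local_process_index}")
```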
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22869", "html_url": "https://github.com/huggingface/transformers/pull/22869", "diff_url": "https://github.com/huggingface/transformers/pull/22869.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22869.patch", "merged_at": 1681929437000 }
https://api.github.com/repos/huggingface/transformers/issues/22868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22868/comments
https://api.github.com/repos/huggingface/transformers/issues/22868/events
https://github.com/huggingface/transformers/pull/22868
1,675,371,352
PR_kwDOCUB6oc5OsnfS
22,868
Update modeling_opt.py
{ "login": "sushmanthreddy", "id": 73489688, "node_id": "MDQ6VXNlcjczNDg5Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sushmanthreddy", "html_url": "https://github.com/sushmanthreddy", "followers_url": "https://api.github.com/users/sushmanthreddy/followers", "following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}", "gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}", "starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions", "organizations_url": "https://api.github.com/users/sushmanthreddy/orgs", "repos_url": "https://api.github.com/users/sushmanthreddy/repos", "events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}", "received_events_url": "https://api.github.com/users/sushmanthreddy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22868). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? As suggested in the [#22561 ](https://github.com/huggingface/transformers/issues/22561) ,moved labels to the same device as logits for OTP , codegen and gptj @sgugger can u pls review this once??
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22868", "html_url": "https://github.com/huggingface/transformers/pull/22868", "diff_url": "https://github.com/huggingface/transformers/pull/22868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22868.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22867/comments
https://api.github.com/repos/huggingface/transformers/issues/22867/events
https://github.com/huggingface/transformers/issues/22867
1,675,360,048
I_kwDOCUB6oc5j2_cw
22,867
`push_to_hub` with `branch` or `revision` keyword argument
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "cc @sgugger ", "If you want to contribute a PR to add this, it would be welcome!", "Any updates on this @zanussbaum ?" ]
1,681
1,692
1,692
CONTRIBUTOR
null
### Feature request In `datasets`, you can upload a dataset to a `branch`. In the `transformers` package, it doesn't seem like `branch` or `revision` [are supported](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) ### Motivation Pushing a model to the hub with a specific revision seems a little harder. It seems like I would need to find the cache directory of the model and use `upload_folder` from `huggingface_hub` to upload to the correct revision. I could very well be missing the right documentation, but I can't seem to figure out how/where to do this ### Your contribution Maybe a PR?
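Until `push_to_hub` grows a `revision`/`branch` argument, one workaround is to save the model locally and upload the folder to a branch with `huggingface_hub` directly. A minimal sketch, assuming the hub repo id `user/my-model` and branch name `experiment-1` (both are placeholders):

```python
from huggingface_hub import HfApi
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # stand-in model
model.save_pretrained("./my-model")  # write the weights and config locally

api = HfApi()
# Create the target branch first if it does not exist yet.
api.create_branch(repo_id="user/my-model", branch="experiment-1", exist_ok=True)

# Upload the saved folder to that branch instead of main.
api.upload_folder(
    repo_id="user/my-model",
    folder_path="./my-model",
    revision="experiment-1",
)
```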
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22867/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22866/comments
https://api.github.com/repos/huggingface/transformers/issues/22866/events
https://github.com/huggingface/transformers/pull/22866
1,675,271,795
PR_kwDOCUB6oc5OsSCa
22,866
Flax Refactor v2
{ "login": "cgarciae", "id": 5862228, "node_id": "MDQ6VXNlcjU4NjIyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cgarciae", "html_url": "https://github.com/cgarciae", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "repos_url": "https://api.github.com/users/cgarciae/repos", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22866). All of your documentation changes will be reflected on that endpoint.", "Hey @cgarciae - we're nearly finished with this PR far as I can tell. Do you have the bandwidth to see this through to completion? Happy to help with the last stages of integration here!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,690
1,690
NONE
null
# What does this PR do? Alternative to #22627. Instead of making `FlaxPretrainedModel` a Flax `Module`, this PR aims to make all inner `.module`s usable in a standalone way by moving any pre/post-processing done by its `*Model` container into the Module itself.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22866", "html_url": "https://github.com/huggingface/transformers/pull/22866", "diff_url": "https://github.com/huggingface/transformers/pull/22866.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22866.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22865/comments
https://api.github.com/repos/huggingface/transformers/issues/22865/events
https://github.com/huggingface/transformers/pull/22865
1,675,158,474
PR_kwDOCUB6oc5Or5-8
22,865
Remove some pipeline skip cases
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? Remove some pipeline skip cases after #22428: the real tokenizers avoid a lot of failing cases - except for QA with slow tokenizers. P.S. As discussed once on Slack: the QA pipeline with slow tokenizer uses some methods in `src/transformers/data/processors/squad.py`, and we plan not to make any change to this file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22865/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22865", "html_url": "https://github.com/huggingface/transformers/pull/22865", "diff_url": "https://github.com/huggingface/transformers/pull/22865.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22865.patch", "merged_at": 1681928840000 }
https://api.github.com/repos/huggingface/transformers/issues/22864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22864/comments
https://api.github.com/repos/huggingface/transformers/issues/22864/events
https://github.com/huggingface/transformers/pull/22864
1,675,147,772
PR_kwDOCUB6oc5Or3q1
22,864
Add perf_train_gpu_one.mdx italian translation
{ "login": "Baelish03", "id": 97971495, "node_id": "U_kgDOBdbtJw", "avatar_url": "https://avatars.githubusercontent.com/u/97971495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Baelish03", "html_url": "https://github.com/Baelish03", "followers_url": "https://api.github.com/users/Baelish03/followers", "following_url": "https://api.github.com/users/Baelish03/following{/other_user}", "gists_url": "https://api.github.com/users/Baelish03/gists{/gist_id}", "starred_url": "https://api.github.com/users/Baelish03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Baelish03/subscriptions", "organizations_url": "https://api.github.com/users/Baelish03/orgs", "repos_url": "https://api.github.com/users/Baelish03/repos", "events_url": "https://api.github.com/users/Baelish03/events{/privacy}", "received_events_url": "https://api.github.com/users/Baelish03/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Baelish03 Thanks for adding this! To resolve the failing quality checks, you'll need to run `make fixup` locally and push any changes. " ]
1,681
1,682
1,682
CONTRIBUTOR
null
See issue #17459. Good evening. I didn't translate technical terms; I preferred to keep them in English. So I hope it's all OK. Goodbye.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22864/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22864", "html_url": "https://github.com/huggingface/transformers/pull/22864", "diff_url": "https://github.com/huggingface/transformers/pull/22864.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22864.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22863/comments
https://api.github.com/repos/huggingface/transformers/issues/22863/events
https://github.com/huggingface/transformers/issues/22863
1,675,103,075
I_kwDOCUB6oc5j2Atj
22,863
GPT-2 trained on 24 yrs of US patent grants
{ "login": "goji-patai", "id": 55723262, "node_id": "MDQ6VXNlcjU1NzIzMjYy", "avatar_url": "https://avatars.githubusercontent.com/u/55723262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goji-patai", "html_url": "https://github.com/goji-patai", "followers_url": "https://api.github.com/users/goji-patai/followers", "following_url": "https://api.github.com/users/goji-patai/following{/other_user}", "gists_url": "https://api.github.com/users/goji-patai/gists{/gist_id}", "starred_url": "https://api.github.com/users/goji-patai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goji-patai/subscriptions", "organizations_url": "https://api.github.com/users/goji-patai/orgs", "repos_url": "https://api.github.com/users/goji-patai/repos", "events_url": "https://api.github.com/users/goji-patai/events{/privacy}", "received_events_url": "https://api.github.com/users/goji-patai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @goji-patai - cool model! \r\n\r\nThis kind of post is best shared in places like our forum's [Show and Tell](https://discuss.huggingface.co/c/show-and-tell/65) section, or the `i-made-this` channel in our [discord](https://t.co/1n75wi976V?amp=1). We try to reserve the issues for feature requests and bug reports. \r\n\r\nAnyone can add their [model to the hub](https://huggingface.co/docs/hub/models-uploading). There you should be able to build a demo which you can share with others too. \r\n\r\n", "<!--\n/* Font Definitions */\n@font-face\n\t{font-family:\"Cambria Math\";\n\tpanose-1:2 4 5 3 5 4 6 3 2 4;}\n@font-face\n\t{font-family:Calibri;\n\tpanose-1:2 15 5 2 2 2 4 3 2 4;}\n/* Style Definitions */\np.MsoNormal, li.MsoNormal, div.MsoNormal\n\t{margin:0in;\n\tfont-size:11.0pt;\n\tfont-family:\"Calibri\",sans-serif;}\na:link, span.MsoHyperlink\n\t{mso-style-priority:99;\n\tcolor:blue;\n\ttext-decoration:underline;}\ncode\n\t{mso-style-priority:99;\n\tfont-family:\"Courier New\";}\n.MsoChpDefault\n\t{mso-style-type:export-only;}\n@page WordSection1\n\t{size:8.5in 11.0in;\n\tmargin:1.0in 1.0in 1.0in 1.0in;}\ndiv.WordSection1\n\t{page:WordSection1;}\n-->Thank you.Β  Will do.Β Sent from Mail for WindowsΒ From: amyerobertsSent: Wednesday, April 19, 2023 1:13 PMTo: huggingface/transformersCc: GEMIC; MentionSubject: Re: [huggingface/transformers] GPT-2 trained on 24 yrs of US patent grants (Issue #22863)Β Hi @goji-patai - cool model!This kind of post is best shared in places like our forum's Show and Tell section, or the i-made-this channel in our discord. We try to reserve the issues for feature requests and bug reports.Anyone can add their model to the hub. There you should be able to build a demo which you can share with others too.β€”Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>Β ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
### Model description Hi, I have a GPT2 model available that was made from scratch and has been trained on 1976-2000 US patent grants (~>1M docs). I think it is a cool/useful example of Huggingface's gpt2 implementation. I continue to update this model as I get new data. I have a local streamlit implementation with greedy and beam searches. Top-k, top_p and temperature variables are randomized within optimized ranges. Should you choose to implement it, I will give you the model and a streamlit implementation to be open-source. What this model can and cannot do: Can: -Given an invention idea, e.g. (totally made up) "an ultrasonic toothbrush with sensors to detect dental caries..." it gives intriguing results - it is not limited to one tech. The training data was randomly sampled so as to cover a large tech space. -It gives even more intriguing results when prompted by a preamble (first ~ 7-10 words) of an existing granted patent. In non-scientific, non-exhaustive testing of this model, I have gotten generated text similar to inventions that were applied for and granted several years later than 2000. This observation may or may not be generalizable to other fields of tech. Cannot: -Prompts not grounded in the laws of physics as we currently understand them will generate unsatisfying results. So, no perpetual machines, flying carpets, or unicorns. Additionally, there are certain subject matters that cannot be patented by law in the US. Prompts about these subject matters will not give meaningful responses. -It is limited by its training to those ideas that have been patented until 2000. Best regards, Gojeb Frehywot [email protected] P.S. Should you implement this model as you have done for other gpt2 models, I have no interest in any possible patentable invention a user might generate/come up with ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation My name is Gojeb Frehywot. I have a streamlit implementation of this text generating model on my local machine. I do not use a repository on GitHub. I can upload it to GitHub if you are interested in looking into this request further. Thank you. Gojeb Frehywot [email protected]
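A hedged sketch of the randomized-sampling setup the description alludes to, using the public `gpt2` checkpoint as a stand-in; the author's actual parameter ranges are unknown, so the ones below are purely illustrative:

```python
import random
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in checkpoint

prompt = "An ultrasonic toothbrush with sensors to detect dental caries"
result = generator(
    prompt,
    do_sample=True,
    top_k=random.randint(30, 60),          # illustrative range
    top_p=random.uniform(0.85, 0.95),      # illustrative range
    temperature=random.uniform(0.7, 1.0),  # illustrative range
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```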
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22863/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22862/comments
https://api.github.com/repos/huggingface/transformers/issues/22862/events
https://github.com/huggingface/transformers/pull/22862
1,675,056,787
PR_kwDOCUB6oc5OrjlP
22,862
Generate: assisted decoding with sample
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm closing this PR because I found a much much better way to handle the sample case 🧠 \r\n\r\nStay tuned πŸš€ " ]
1,681
1,684
1,682
MEMBER
null
# What does this PR do? This PR expands the previous [assisted generation PR](https://github.com/huggingface/transformers/pull/22211) so as to work with sampling. Two important notes to review the PR: 1. I'd suggest starting the review with the docs, so you understand what's going on at a high level. Sampling adds an additional (controllable) heuristic, so the user can control between speed and pure sampling behavior. 2. In terms of implementation, I've decided to overload the assisted generation function with a few extra lines to handle the sample case. This is to avoid adding a close copy of a 500-line function. _____________________________________________________________________________ Below are some results, so you can understand the balancing act. Execution time obtained on a 3090. <details> <summary>Script</summary> ```py from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer import torch import time model_id = "EleutherAI/pythia-6.9b-deduped" assistant_id = "EleutherAI/pythia-160m-deduped" tokenizer = AutoTokenizer.from_pretrained(model_id) assistant_model = AutoModelForCausalLM.from_pretrained(assistant_id) assistant_model = assistant_model.to("cuda") model_kwargs = { "pretrained_model_name_or_path": model_id, "device_map": "auto", "max_memory": {0: "20GiB", "cpu": "50GiB"}, "torch_dtype": torch.float16, } model = AutoModelForCausalLM.from_pretrained(**model_kwargs) inputs = tokenizer("Here's how to cook a good ramen:", return_tensors="pt").to("cuda") streamer = TextStreamer(tokenizer=tokenizer) print("Greedy with assistance:") start = time.time() model.generate(**inputs, assistant_model=assistant_model, streamer=streamer, max_new_tokens=64) print(f"Elapsed time: {time.time() - start:.2f} seconds") for p in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0): print(f"Sample with assistance (assisted_keep_proba = {p})") torch.manual_seed(0) start = time.time() model.generate( **inputs, do_sample=True, assistant_model=assistant_model, assisted_keep_proba=p, streamer=streamer, max_new_tokens=64 ) print(f"Elapsed time: {time.time() - start:.2f} seconds") print("Original sample") torch.manual_seed(0) start = time.time() model.generate(**inputs, do_sample=True, streamer=streamer, max_new_tokens=64) print(f"Elapsed time: {time.time() - start:.2f} seconds") ``` </details> <details> <summary>Sample results</summary> Decoding strategy | Result | Execution time :-------------------:|:-------:|:------:| Greedy (w/assistance) | Here's how to cook a good ramen:<br><br>1. Make sure you have a good stock.<br><br>2. Make sure you have a good broth.<br><br>3. Make sure you have a good ramen.<br><br>4. Make sure you have a good ramen.<br><br>5. Make sure you have a good ramen. | 1.44 seconds Sample (w/assistance<br>`assisted_keep_proba=0.0`) | Here's how to cook a good ramen:<br><br>1. Get a noodle.<br><br>2. Get a stock.<br><br>3. Get a packet of dried ingredients.<br><br>4. Cook the noodles.<br><br>5. Cook the stock.<br><br>6. Cook the packet of dried ingredients.<br><br>7. Enjoy!<br><br>And | 1.44 seconds Sample (w/assistance<br>`assisted_keep_proba=0.2`) | Here's how to cook a good ramen:<br><br>1. Get a noodle vendor.<br><br>The noodle vendor makes the noodles. Japanese restaurants often have the noodle vendor on-site.<br><br>2. Get a pot.<br><br>The pot is used to cook ramen.<br><br>3. Get a pot of boiling water. | 1.59 seconds Sample (w/assistance<br>`assisted_keep_proba=0.4`) | Here's how to cook a good ramen:<br><br>Step 1: Collect your ingredients.<br><br>For this recipe you need a big stock pot. That's good.<br><br>And some water.<br><br>Step 2: Peel the eggs.<br><br>Yes, that's it. Four eggs.<br><br>Step 3: Separate the yolks. | 1.71 seconds Sample (w/assistance<br>`assisted_keep_proba=0.6`) | Here's how to cook a good ramen:<br><br>Nothing much to take out of the packet. Just a big block of pork fat, some Chinese chilli paste and seasonings.<br><br>Preheat the oven to 210ΒΊC (410ΒΊF/Gas 6).<br><br>Place the pork fat, chilli paste and seasoning into a mixing bowl and | 2.08 seconds Sample (w/assistance<br>`assisted_keep_proba=0.8`) | Here's how to cook a good ramen:<br><br>**You'll need:** A large pot for boiling noodles<br>A small saucepan for cooking the noodles<br>BBQ chicken or roasted fish, or any grilled healthy protein<br>A box of ramen noodles, noodles that come in<br>shapes and sizes<br>Soups or broth, | 2.32 seconds Sample (w/assistance<br>`assisted_keep_proba=1.0`) | Here's how to cook a good ramen:<br><br>You take your pre-scalloped noodles, pour boiling water (or your preferred water-to-noodle ratio) over them, and leave them alone for four to five minutes. Once that's done, drain them, season with salt, and heat them up on the stove (microwave won | 2.56 seconds Original sample | Here's how to cook a good ramen:<br><br>You take your pre-scalloped noodles, pour boiling water (or your preferred cooking liquid) over it, and after that you go get your ramen broth, add-ins, and other condiments. You make your seasoning sauce, and heat that up. Mix it all together, and put | 2.05 seconds As can be seen above, there is a trade-off between time and quality. This will certainly be application specific: factual applications will be able to make the most of assisted decoding. In my brief experiments, `assisted_keep_proba=0.3` seems like a sensible default. </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22862", "html_url": "https://github.com/huggingface/transformers/pull/22862", "diff_url": "https://github.com/huggingface/transformers/pull/22862.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22862.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22861/comments
https://api.github.com/repos/huggingface/transformers/issues/22861/events
https://github.com/huggingface/transformers/issues/22861
1,674,979,655
I_kwDOCUB6oc5j1ilH
22,861
LLaMA `generate` output changes depending on batch size
{ "login": "ryan-caesar-ramos", "id": 65334734, "node_id": "MDQ6VXNlcjY1MzM0NzM0", "avatar_url": "https://avatars.githubusercontent.com/u/65334734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryan-caesar-ramos", "html_url": "https://github.com/ryan-caesar-ramos", "followers_url": "https://api.github.com/users/ryan-caesar-ramos/followers", "following_url": "https://api.github.com/users/ryan-caesar-ramos/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-caesar-ramos/gists{/gist_id}", "starred_url": "https://api.github.com/users/ryan-caesar-ramos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-caesar-ramos/subscriptions", "organizations_url": "https://api.github.com/users/ryan-caesar-ramos/orgs", "repos_url": "https://api.github.com/users/ryan-caesar-ramos/repos", "events_url": "https://api.github.com/users/ryan-caesar-ramos/events{/privacy}", "received_events_url": "https://api.github.com/users/ryan-caesar-ramos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @ryan-caesar-ramos πŸ‘‹ \r\n\r\nThe particular base checkpoint you're using (`decapoda-research/llama-7b-hf`) is not compatible with transformers, so we do not provide support for related problems :) \r\n\r\nIf you have access to the original Meta weights, you can use other checkpoints as a starting point (e.g. [these](https://huggingface.co/huggyllama)). If the issue you're seeing still persists after updating the checkpoint, I'd be happy to take a look!", "Hi @gante ! Thanks, I'll try to look into this, but I'm unable to use repos like `huggyllama/llama-7b` that have shard sizes around 10GB since my CPU ram can't handle it. Any way I can tell if a checkpoint is supported or not? Maybe there's a sharded one out there that is compatible", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Same thing happens when I use the original meta llama2 models. When using batch, the answers are completely broken.", "> Same thing happens when I use the original meta llama2 models. When using batch, the answers are completely broken.\r\n\r\nDid you initialize your tokenizer with left padding?", "> \r\n\r\n@zhilif No idea what you mean. this is how I load it:\r\n```\r\ngenerator = Llama.build(\r\n \r\n ckpt_dir=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct\",\r\n tokenizer_path=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct/tokenizer.model\",\r\n max_seq_len=max_seq_len,\r\n max_batch_size=max_batch_size,\r\n model_parallel_size = 1 # num of worlds/gpus\r\n)\r\n```\r\n", "> > \r\n> \r\n> @zhilif No idea what you mean. this is how I load it:\r\n> \r\n> ```\r\n> generator = Llama.build(\r\n> \r\n> ckpt_dir=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct\",\r\n> tokenizer_path=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct/tokenizer.model\",\r\n> max_seq_len=max_seq_len,\r\n> max_batch_size=max_batch_size,\r\n> model_parallel_size = 1 # num of worlds/gpus\r\n> )\r\n> ```\r\n\r\nIs this a huggingface interface? If so, can you point to me its doc page?", "> > > \r\n> > \r\n> > \r\n> > @zhilif No idea what you mean. this is how I load it:\r\n> > ```\r\n> > generator = Llama.build(\r\n> > \r\n> > ckpt_dir=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct\",\r\n> > tokenizer_path=\"C:/AI/LLaMA2_Docker_FileSystem/codellama/CodeLlama-7b-Instruct/tokenizer.model\",\r\n> > max_seq_len=max_seq_len,\r\n> > max_batch_size=max_batch_size,\r\n> > model_parallel_size = 1 # num of worlds/gpus\r\n> > )\r\n> > ```\r\n> \r\n> Is this a huggingface interface? If so, can you point to me its doc page?\r\n\r\n@zhilif https://github.com/facebookresearch/codellama/blob/main/example_instructions.py", "@realhaik πŸ‘‹ As @zhilif pointed out, that is not a hugging face interface :) Have a look at this [blog post](https://huggingface.co/blog/codellama) for examples", "> out, that is not a hugging face interface :) Have a look at this [blog post](https://huggingface.co/blog/codellama) for examples\r\n\r\n@gante unfortunately all the hugging face llama2 models are trash, the answers are completely broken.\r\nIf you work with hugging face llama models, I feel really sorry for you.", "> Same thing happens when I use the original meta llama2 models. 
When using batch, the answers are completely broken. \r\n\r\nI thought you said you had broken results with the original codebase as well? ", "> > Same thing happens when I use the original meta llama2 models. When using batch, the answers are completely broken.\r\n> \r\n> I thought you said you had broken results with the original codebase as well?\r\n\r\n@ArthurZucker I am talking only about original, I don't use hugging face. \r\nWhen using batch in original llama2 models, the answers are broken." ]
1,681
1,694
1,685
NONE
null
### System Info ``` - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ``` ### Who can help? @younesbelkada @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction These [cells](https://colab.research.google.com/drive/1nAz2MUphzg5ifWW3CfRCJdYUgSYN5ynH?usp=sharing) should reproduce the error. ### Expected behavior **Concise version:** I was expecting the results to not change whether or not the inference was batched or not. Long version: Basically when I run `generate` with just one tokenized sequence, I get a certain result, and when I process the same sequence but inside a batch instead, the result changes. To make sure it wasn't any tokenization shenanigans, I tokenized the whole batch and took just the parts that related to the weird sequence (basically just `{k: v[1:] for k, v in tokenized_inputs.items()}` where the batch is just two items and the weird one is the second item). For clarity, when I pass just ```python3 """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Combine the question and answer into an image caption as succinctly as possible. Be sure to include the phrase "a photo of". Do not draw false conclusions. ### Input: Is this a baseball game? no ### Response: """ ``` to the generate function of a `LlamaForCausalLM` wrapped in a `PeftModel`, I get ``` This is not a baseball game. ``` However, if I put it in a two-item batch, the output for some reason is instead ``` A photo of people playing a game. ``` For more details, the base model was 8-bit quantized and the LoRA weights should be at half-precision. The weights were taken from `decapoda-research/llama-7b-hf` and `tloen/alpaca-lora-7b` respectively. Edit: forgot to make the notebook public, should be fixed now.
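As the maintainer comments above suggest, batched generation with decoder-only models usually requires left padding and an explicit attention mask; otherwise pad tokens can change the output between batch sizes. A minimal sketch with a small stand-in checkpoint:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = ["Is this a baseball game? no", "Describe the photo briefly."]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# Right padding would place pad tokens between the prompt and the new tokens,
# silently changing the output; left padding keeps each prompt contiguous.
outputs = model.generate(
    **inputs, max_new_tokens=30, pad_token_id=tokenizer.pad_token_id
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```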
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22861/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22861/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22860/comments
https://api.github.com/repos/huggingface/transformers/issues/22860/events
https://github.com/huggingface/transformers/pull/22860
1,674,939,110
PR_kwDOCUB6oc5OrJi8
22,860
Remove 'main' from doc links
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? There's a bunch of models added to the README which have `main` in their doc link. This shows up in a lot of contributors' PR diffs, which is a bit annoying. This PR resolves that. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22860/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22860", "html_url": "https://github.com/huggingface/transformers/pull/22860", "diff_url": "https://github.com/huggingface/transformers/pull/22860.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22860.patch", "merged_at": 1681913037000 }
https://api.github.com/repos/huggingface/transformers/issues/22859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22859/comments
https://api.github.com/repos/huggingface/transformers/issues/22859/events
https://github.com/huggingface/transformers/pull/22859
1,674,746,624
PR_kwDOCUB6oc5OqfwY
22,859
use `Accelerate@main`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? We love `Accelerate@main` for CI.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22859/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22859", "html_url": "https://github.com/huggingface/transformers/pull/22859", "diff_url": "https://github.com/huggingface/transformers/pull/22859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22859.patch", "merged_at": 1681909133000 }
https://api.github.com/repos/huggingface/transformers/issues/22858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22858/comments
https://api.github.com/repos/huggingface/transformers/issues/22858/events
https://github.com/huggingface/transformers/issues/22858
1,674,631,761
I_kwDOCUB6oc5j0NpR
22,858
Finetuned Donut model taking too much time on local machine for inference, around 5 minutes.
{ "login": "shubh1608", "id": 8343393, "node_id": "MDQ6VXNlcjgzNDMzOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8343393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shubh1608", "html_url": "https://github.com/shubh1608", "followers_url": "https://api.github.com/users/shubh1608/followers", "following_url": "https://api.github.com/users/shubh1608/following{/other_user}", "gists_url": "https://api.github.com/users/shubh1608/gists{/gist_id}", "starred_url": "https://api.github.com/users/shubh1608/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shubh1608/subscriptions", "organizations_url": "https://api.github.com/users/shubh1608/orgs", "repos_url": "https://api.github.com/users/shubh1608/repos", "events_url": "https://api.github.com/users/shubh1608/events{/privacy}", "received_events_url": "https://api.github.com/users/shubh1608/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @shubh1608, thanks for raising this issue. \r\n\r\nCould you share the following information, so that we can best help you: \r\n* Checkpoint of the Donut model being used\r\n* Environment being run (locally and on Colab). Copy-paste the output of `transformers-cli env` run in the terminal\r\n* Expanded snippet to allow for full reproduction. In particular showing how the model and processor are loaded and how to code is being timed. \r\n ", "Hi @amyeroberts, please find below the requested details:\r\n\r\n* model checkpoint - shubh1608/donut_pdf_ocr\r\n* Colab CPU environment \r\n\r\n - `transformers` version: 4.28.1\r\n - Platform: Linux-5.10.147+-x86_64-with-glibc2.31\r\n - Python version: 3.9.16\r\n - Huggingface_hub version: 0.13.4\r\n - Safetensors version: not installed\r\n - PyTorch version (GPU?): 2.0.0+cu118 (False)\r\n - Tensorflow version (GPU?): 2.12.0 (False)\r\n - Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)\r\n - Jax version: 0.4.8\r\n - JaxLib version: 0.4.7\r\n - Using GPU in script?: no\r\n - Using distributed or parallel set-up in script?: no\r\n\r\n* Local Windows environment\r\n\r\n - `transformers` version: 4.28.1\r\n - Platform: Windows-10-10.0.19045-SP0\r\n - Python version: 3.9.16\r\n - Huggingface_hub version: 0.13.4\r\n - Safetensors version: not installed\r\n - PyTorch version (GPU?): 1.12.1 (False)\r\n - Tensorflow version (GPU?): not installed (NA)\r\n - Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n - Jax version: not installed\r\n - JaxLib version: not installed\r\n - Using GPU in script?: no\r\n - Using distributed or parallel set-up in script?: no\r\n\r\n* Expanded code snippet. NOTE: I have cloned the model repo locally and loaded weights from there.\r\n\r\n model_processor_path = '../model-weights/donut/donut_pdf_ocr'\r\n processor = DonutProcessor.from_pretrained(model_processor_path)\r\n model = VisionEncoderDecoderModel.from_pretrained(model_processor_path)\r\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n model.to(device)\r\n # prepare decoder inputs\r\n task_prompt = \"<s>\"\r\n decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n \r\n def run_prediction(file):\r\n image = Image.open(file).convert('RGB')\r\n pixel_values = processor(image, return_tensors=\"pt\").pixel_values\r\n outputs = model.generate(\r\n pixel_values.to(device),\r\n decoder_input_ids=decoder_input_ids.to(device),\r\n max_length=model.decoder.config.max_position_embeddings,\r\n early_stopping=True,\r\n pad_token_id=processor.tokenizer.pad_token_id,\r\n eos_token_id=processor.tokenizer.eos_token_id,\r\n use_cache=True,\r\n num_beams=1,\r\n bad_words_ids=[[processor.tokenizer.unk_token_id]],\r\n return_dict_in_generate=True)\r\n \r\n sequence = processor.batch_decode(outputs.sequences)[0]\r\n sequence = sequence.replace(processor.tokenizer.eos_token, \"\").replace(processor.tokenizer.pad_token, \"\")\r\n sequence = re.sub(r\"<.*?>\", \"\", sequence, count=1).strip() # remove first task start token\r\n return processor.token2json(sequence)\r\n\r\nLet me know if you need any more information for debugging. \r\n\r\nThanks.", "Guys, any update on this?", "Hey @shubh1608 I guess there has not been any updates for quite a while, sorry about that! \r\nIt's a bit hard for us to debug this as we need to find why your computer is basically slow. Would be great if you can tell us more about the specs of your compute. 
You mentioned that: \r\n> windows laptop which has 16GB RAM and 4 cores\r\n\r\nYou might have a lot of RAM but super slow CPU. What comes to me is also your torch version, which is `1.12` compared to `2.0`, a LOT of improvements are thus missing on your local setup. Could you try updating torch? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,697
1,697
NONE
null
Finetuned Donut model is taking **4 minutes 37 seconds** for inference on my local Windows laptop which has **16GB RAM and 4 cores**. However, inference time is under **5 seconds on a Google Colab CPU machine**, which has 32GB RAM. On Colab GPU, the inference time is under a second. **_Why is it taking so much time on my local Windows machine?_** This does not seem like normal behavior. Could someone help and guide me on what could be wrong here? I am using **Transformers Version: 4.28.1**; it's the same on my Windows machine as well. Also, below is the prediction function which I am using, and it's the model.generate method that is taking the time. ``` def run_prediction(image): pixel_values = processor(image, return_tensors="pt").pixel_values outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, early_stopping=True, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, num_beams=1, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True) sequence = processor.batch_decode(outputs.sequences)[0] sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token return processor.token2json(sequence) ```
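A hedged sketch of the first checks for slow CPU inference raised in the comments above: the PyTorch version and the thread count. The numbers are illustrative for the 4-core machine described here, not tuned values:

```python
import time
import torch

print(torch.__version__)        # the slow machine ran 1.12; 2.x is much faster on CPU
print(torch.get_num_threads())  # confirm PyTorch actually uses the available cores

torch.set_num_threads(4)  # illustrative: match the machine's physical core count

# Time only the generate call to separate model speed from preprocessing.
start = time.perf_counter()
# outputs = model.generate(...)  # same arguments as in run_prediction above
print(f"generate took {time.perf_counter() - start:.1f}s")
```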
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22857/comments
https://api.github.com/repos/huggingface/transformers/issues/22857/events
https://github.com/huggingface/transformers/pull/22857
1,674,621,300
PR_kwDOCUB6oc5OqEie
22,857
fix: Correct small typo in docstring
{ "login": "oscar-defelice", "id": 49638680, "node_id": "MDQ6VXNlcjQ5NjM4Njgw", "avatar_url": "https://avatars.githubusercontent.com/u/49638680?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oscar-defelice", "html_url": "https://github.com/oscar-defelice", "followers_url": "https://api.github.com/users/oscar-defelice/followers", "following_url": "https://api.github.com/users/oscar-defelice/following{/other_user}", "gists_url": "https://api.github.com/users/oscar-defelice/gists{/gist_id}", "starred_url": "https://api.github.com/users/oscar-defelice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oscar-defelice/subscriptions", "organizations_url": "https://api.github.com/users/oscar-defelice/orgs", "repos_url": "https://api.github.com/users/oscar-defelice/repos", "events_url": "https://api.github.com/users/oscar-defelice/events{/privacy}", "received_events_url": "https://api.github.com/users/oscar-defelice/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for the fix and quick PR! πŸš€\r\n> \r\n> For the quality checks, you'll need to run `make fixup` locally and push and changes.\r\n\r\n@amyeroberts Thank you very much!\r\nDone!" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Fixes #22855 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22857", "html_url": "https://github.com/huggingface/transformers/pull/22857", "diff_url": "https://github.com/huggingface/transformers/pull/22857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22857.patch", "merged_at": 1681988333000 }
https://api.github.com/repos/huggingface/transformers/issues/22856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22856/comments
https://api.github.com/repos/huggingface/transformers/issues/22856/events
https://github.com/huggingface/transformers/issues/22856
1,674,618,179
I_kwDOCUB6oc5j0KVD
22,856
Type hinting Inconsistency in beam_search.py
{ "login": "mert-kurttutan", "id": 88637659, "node_id": "MDQ6VXNlcjg4NjM3NjU5", "avatar_url": "https://avatars.githubusercontent.com/u/88637659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mert-kurttutan", "html_url": "https://github.com/mert-kurttutan", "followers_url": "https://api.github.com/users/mert-kurttutan/followers", "following_url": "https://api.github.com/users/mert-kurttutan/following{/other_user}", "gists_url": "https://api.github.com/users/mert-kurttutan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mert-kurttutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mert-kurttutan/subscriptions", "organizations_url": "https://api.github.com/users/mert-kurttutan/orgs", "repos_url": "https://api.github.com/users/mert-kurttutan/repos", "events_url": "https://api.github.com/users/mert-kurttutan/events{/privacy}", "received_events_url": "https://api.github.com/users/mert-kurttutan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @mert-kurttutan πŸ‘‹ \r\n\r\nThat is absolutely correct. Would you like to open a PR to fix it? :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello, is anyone working on this issue? If not I can take it on @gante.", "Hey @jprivera44 -- AFAIK no one is working on it, feel free to take it πŸ™Œ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,688
1,688
NONE
null
### System Info Main branch ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, in the `beam_search.py` file, the `process` function (in the concrete classes) has the following signature ``` def process( self, input_ids: torch.LongTensor, next_scores: torch.FloatTensor, next_tokens: torch.LongTensor, next_indices: torch.LongTensor, pad_token_id: Optional[int] = None, eos_token_id: Optional[Union[int, List[int]]] = None, beam_indices: Optional[torch.LongTensor] = None, ) -> Tuple[torch.Tensor]: ``` even though it returns a `UserDict`. Is there any reason not to annotate it with Dict or Mapping rather than Tuple? This type of mismatch might exist in other places as well, I'm not sure. ### Expected behavior IMO, it should be annotated as returning Dict or Mapping
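For reference, a sketch of what the corrected annotation could look like; the return-type change is the only substantive edit, the body is elided, and the class name is taken from the concrete scorer in `beam_search.py`:

```python
from typing import Dict, List, Optional, Union

import torch


class BeamSearchScorer:
    # The method builds and returns a UserDict of tensors, so
    # Dict[str, torch.Tensor] (or typing.Mapping) describes the return
    # value more accurately than the current Tuple[torch.Tensor].
    def process(
        self,
        input_ids: torch.LongTensor,
        next_scores: torch.FloatTensor,
        next_tokens: torch.LongTensor,
        next_indices: torch.LongTensor,
        pad_token_id: Optional[int] = None,
        eos_token_id: Optional[Union[int, List[int]]] = None,
        beam_indices: Optional[torch.LongTensor] = None,
    ) -> Dict[str, torch.Tensor]:
        ...
```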
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22856/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22855/comments
https://api.github.com/repos/huggingface/transformers/issues/22855/events
https://github.com/huggingface/transformers/issues/22855
1,674,565,102
I_kwDOCUB6oc5jz9Xu
22,855
Small typo in `conversation.py` docstring.
{ "login": "oscar-defelice", "id": 49638680, "node_id": "MDQ6VXNlcjQ5NjM4Njgw", "avatar_url": "https://avatars.githubusercontent.com/u/49638680?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oscar-defelice", "html_url": "https://github.com/oscar-defelice", "followers_url": "https://api.github.com/users/oscar-defelice/followers", "following_url": "https://api.github.com/users/oscar-defelice/following{/other_user}", "gists_url": "https://api.github.com/users/oscar-defelice/gists{/gist_id}", "starred_url": "https://api.github.com/users/oscar-defelice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oscar-defelice/subscriptions", "organizations_url": "https://api.github.com/users/oscar-defelice/orgs", "repos_url": "https://api.github.com/users/oscar-defelice/repos", "events_url": "https://api.github.com/users/oscar-defelice/events{/privacy}", "received_events_url": "https://api.github.com/users/oscar-defelice/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@oscar-defelice Good spot! Would you like to open a PR with the amendment? ", "Yes, I can open a PR and add a fixing commit.\r\n\r\nthank you very much!" ]
1,681
1,681
1,681
CONTRIBUTOR
null
## Reproduction There is a small typographical error in the docstring of the `Conversation` class in `conversational.py`. ```python class Conversation: """ Utility class containing a conversation and its history. This class is meant to be used as an input to the [`ConversationalPipeline`]. The conversation contains a number of utility function to manage the addition of new user input and generated model responses. A conversation needs to contain an unprocessed user input before being passed to the [`ConversationalPipeline`]. This user input is either created when the class is instantiated, or by calling `conversational_pipeline.append_response("input")` after a conversation turn. ... ``` ## Proposed Solution This could be fixed by rewriting it as: ```python class Conversation: """ Utility class containing a conversation and its history. This class is meant to be used as an input to the [`ConversationalPipeline`]. The conversation contains several utility functions to manage the addition of new user inputs and generated model responses. A conversation needs to contain an unprocessed user input before being passed to the [`ConversationalPipeline`]. This user input is either created when the class is instantiated, or by calling `conversational_pipeline.append_response("input")` after a conversation turn. ... ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22855/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22854/comments
https://api.github.com/repos/huggingface/transformers/issues/22854/events
https://github.com/huggingface/transformers/pull/22854
1,674,512,976
PR_kwDOCUB6oc5Optoj
22,854
fix SpeechT5 doc comments
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Forgot to run the documentation tests on the SpeechT5 TTS changes. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22854", "html_url": "https://github.com/huggingface/transformers/pull/22854", "diff_url": "https://github.com/huggingface/transformers/pull/22854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22854.patch", "merged_at": 1681906241000 }
https://api.github.com/repos/huggingface/transformers/issues/22853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22853/comments
https://api.github.com/repos/huggingface/transformers/issues/22853/events
https://github.com/huggingface/transformers/issues/22853
1,674,447,605
I_kwDOCUB6oc5jzgr1
22,853
Add an efficient vision transformer backbone in ICLR 2022: CrossFormer
{ "login": "cheerss", "id": 15375071, "node_id": "MDQ6VXNlcjE1Mzc1MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/15375071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cheerss", "html_url": "https://github.com/cheerss", "followers_url": "https://api.github.com/users/cheerss/followers", "following_url": "https://api.github.com/users/cheerss/following{/other_user}", "gists_url": "https://api.github.com/users/cheerss/gists{/gist_id}", "starred_url": "https://api.github.com/users/cheerss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cheerss/subscriptions", "organizations_url": "https://api.github.com/users/cheerss/orgs", "repos_url": "https://api.github.com/users/cheerss/repos", "events_url": "https://api.github.com/users/cheerss/events{/privacy}", "received_events_url": "https://api.github.com/users/cheerss/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "I'm going to close this as it's a repeat of #22852" ]
1,681
1,681
1,681
NONE
null
### Model description CrossFormer has three new components that do not exist in other ViTs (such as Swin): 1. The cross-scale embedding layer (CEL), which generates cross-scale embeddings as the ViT's input. 2. The long-short distance attention (LSDA) mechanism, an efficient replacement for vanilla self-attention that shows better performance than Swin's. 3. A dynamic relative position bias, a kind of relative position bias that supports dynamic group sizes. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The open source website: https://github.com/cheerss/CrossFormer The paper was accepted at ICLR 2022: https://openreview.net/forum?id=_PHymLIxuI
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22853/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22852/comments
https://api.github.com/repos/huggingface/transformers/issues/22852/events
https://github.com/huggingface/transformers/issues/22852
1,674,446,723
I_kwDOCUB6oc5jzgeD
22,852
Add an efficient vision transformer backbone in ICLR 2022: CrossFormer
{ "login": "cheerss", "id": 15375071, "node_id": "MDQ6VXNlcjE1Mzc1MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/15375071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cheerss", "html_url": "https://github.com/cheerss", "followers_url": "https://api.github.com/users/cheerss/followers", "following_url": "https://api.github.com/users/cheerss/following{/other_user}", "gists_url": "https://api.github.com/users/cheerss/gists{/gist_id}", "starred_url": "https://api.github.com/users/cheerss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cheerss/subscriptions", "organizations_url": "https://api.github.com/users/cheerss/orgs", "repos_url": "https://api.github.com/users/cheerss/repos", "events_url": "https://api.github.com/users/cheerss/events{/privacy}", "received_events_url": "https://api.github.com/users/cheerss/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "I can pick this up. ", "@raghavanone Are you still working on this? If so, would you have an estimation of when it will be ready for review? This would be a great addition to the library, if you don't have enough bandwidth then we can open it up for someone else in the community to pick up :) \r\n\r\ncc @rafaelpadilla ", "I had paused this for a while , I have bandwidth now , will continue to work on it . ", "@raghavanone Thanks for your great work. I checked the merge workflow and found the error is due to the model not being added into `_import_structure` like [this](https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py#L535). Hope that may help you." ]
1,681
1,694
null
NONE
null
### Model description CrossFormer has three new components that do not exist in other ViTs (such as Swin): 1. The cross-scale embedding layer (CEL), which generates cross-scale embeddings as the ViT's input. 2. The long-short distance attention (LSDA) mechanism, an efficient replacement for vanilla self-attention that shows better performance than Swin's. 3. A dynamic relative position bias, a kind of relative position bias that supports dynamic group sizes. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The open source website: https://github.com/cheerss/CrossFormer The paper was accepted at ICLR 2022: https://openreview.net/forum?id=_PHymLIxuI
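To make the first component concrete, here is a rough sketch of the cross-scale embedding idea, not the reference implementation: several convolutions with different kernel sizes but a shared stride sample the same locations at multiple scales, and their outputs are concatenated. The kernel sizes follow the paper's 4/8/16/32 with stride 4, while the per-kernel channel split below is illustrative:

```python
import torch
import torch.nn as nn


class CrossScaleEmbedding(nn.Module):
    # Each conv sees the same grid of patch centres (same stride) at a
    # different receptive-field size; padding (k - stride) // 2 keeps the
    # spatial outputs aligned so they can be concatenated channel-wise.
    def __init__(self, in_ch=3, embed_dim=96, kernels=(4, 8, 16, 32), stride=4):
        super().__init__()
        dims = [embed_dim // 2, embed_dim // 4, embed_dim // 8, embed_dim // 8]
        self.projs = nn.ModuleList(
            nn.Conv2d(in_ch, d, kernel_size=k, stride=stride, padding=(k - stride) // 2)
            for k, d in zip(kernels, dims)
        )

    def forward(self, x):  # x: (B, C, H, W) with H, W divisible by stride
        return torch.cat([proj(x) for proj in self.projs], dim=1)


emb = CrossScaleEmbedding()
print(emb(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 96, 56, 56])
```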
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22852/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22851/comments
https://api.github.com/repos/huggingface/transformers/issues/22851/events
https://github.com/huggingface/transformers/issues/22851
1,674,309,506
I_kwDOCUB6oc5jy--C
22,851
Deadlock condition in layoutlmv2 using OMP library
{ "login": "Agarwal-Saurabh", "id": 23380740, "node_id": "MDQ6VXNlcjIzMzgwNzQw", "avatar_url": "https://avatars.githubusercontent.com/u/23380740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Agarwal-Saurabh", "html_url": "https://github.com/Agarwal-Saurabh", "followers_url": "https://api.github.com/users/Agarwal-Saurabh/followers", "following_url": "https://api.github.com/users/Agarwal-Saurabh/following{/other_user}", "gists_url": "https://api.github.com/users/Agarwal-Saurabh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Agarwal-Saurabh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Agarwal-Saurabh/subscriptions", "organizations_url": "https://api.github.com/users/Agarwal-Saurabh/orgs", "repos_url": "https://api.github.com/users/Agarwal-Saurabh/repos", "events_url": "https://api.github.com/users/Agarwal-Saurabh/events{/privacy}", "received_events_url": "https://api.github.com/users/Agarwal-Saurabh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Agarwal-Saurabh, thanks for reporting this issue. \n\nSo that we can best help, could you follow the issue template and give information about the running environment (from running `transformers-cli env`) and a reproducible code snippet? ", "@amyeroberts here is the details\r\n- `transformers` version: 4.28.1\r\n- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.2.5\r\n- Python version: 3.8.5\r\n- Huggingface_hub version: 0.13.4\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.8.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: yes", "@Agarwal-Saurabh Thank you. Could you also share a minimal code snippet to reproduce the issue? ", "Its not in the model rather in the preprocessor that we are loading to\ntokenize for layoulmv2 model\n\nOn Wed, 19 Apr 2023, 8:03 pm amyeroberts, ***@***.***> wrote:\n\n> @Agarwal-Saurabh <https://github.com/Agarwal-Saurabh> Thank you. Could\n> you also share a minimal code snippet to reproduce the issue?\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/22851#issuecomment-1514847613>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFSMGBGG3MASQ3OFELFA4I3XB7ZTBANCNFSM6AAAAAAXDUJ2KM>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "@Agarwal-Saurabh Without knowing what code you're running and more information about the deadlock behaviour, I'm unable to understand or help with this issue.", "@amyeroberts similar issue caught in other processors as well. Here is the code snippet requested https://gist.github.com/harshyadav17/149f1c990c17111d8340fcf2e89a5b88\r\n\r\nreference issue : https://github.com/huggingface/transformers/issues/22978", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This is not solved yet\n\nOn Fri, 19 May 2023, 8:32 pm github-actions[bot], ***@***.***>\nwrote:\n\n> This issue has been automatically marked as stale because it has not had\n> recent activity. If you think this still needs to be addressed please\n> comment on this thread.\n>\n> Please note that issues that do not follow the contributing guidelines\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\n> are likely to be ignored.\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/22851#issuecomment-1554722971>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFSMGBG3TCVG62RD6HZX64TXG6DQBANCNFSM6AAAAAAXDUJ2KM>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,687
1,687
NONE
null
I tried to fork the layoutlmv2 model using kserve workers after adding the OMP library variables, but it leads to a deadlock. The interesting fact is that it works well without the OMP library variables, but then gives a really high inference time. Is there a way to use layoutlmv2 with multithreading and forking? These are the values used for OpenMP: os.environ['OMP_NUM_THREADS'] = '4' os.environ['OMP_PROC_BIND'] = 'false' os.environ['OMP_SCHEDULE'] = 'STATIC' os.environ['KMP_AFFINITY']='granularity=fine,compact,1,0'
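One pattern worth trying, sketched under the assumption that the deadlock comes from the OpenMP runtime being initialised before the workers fork (a common failure mode with fork + OMP, not a confirmed diagnosis of this report):

```python
import os

# These must be exported before torch (or anything else pulling in OpenMP)
# is imported; setting them after import has no effect on the already
# initialised runtime, and forking a process that holds OMP threads is a
# classic source of deadlocks.
os.environ.setdefault("OMP_NUM_THREADS", "1")
os.environ.setdefault("OMP_PROC_BIND", "false")

import torch

# Give each forked worker a small intra-op thread pool of its own instead,
# keeping workers * threads <= physical cores (4 on the machine above).
torch.set_num_threads(1)
```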
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22851/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22850/comments
https://api.github.com/repos/huggingface/transformers/issues/22850/events
https://github.com/huggingface/transformers/pull/22850
1,674,233,590
PR_kwDOCUB6oc5Oox1N
22,850
feat(model parallelism): move labels to the same device as logits for M2M100
{ "login": "elabongaatuo", "id": 32382363, "node_id": "MDQ6VXNlcjMyMzgyMzYz", "avatar_url": "https://avatars.githubusercontent.com/u/32382363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elabongaatuo", "html_url": "https://github.com/elabongaatuo", "followers_url": "https://api.github.com/users/elabongaatuo/followers", "following_url": "https://api.github.com/users/elabongaatuo/following{/other_user}", "gists_url": "https://api.github.com/users/elabongaatuo/gists{/gist_id}", "starred_url": "https://api.github.com/users/elabongaatuo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elabongaatuo/subscriptions", "organizations_url": "https://api.github.com/users/elabongaatuo/orgs", "repos_url": "https://api.github.com/users/elabongaatuo/repos", "events_url": "https://api.github.com/users/elabongaatuo/events{/privacy}", "received_events_url": "https://api.github.com/users/elabongaatuo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks a lot!\r\n\r\nThank you 😊 " ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Moves labels to the same device as logits for M2M100. Related to #22561. @sgugger Hello, please review.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22850", "html_url": "https://github.com/huggingface/transformers/pull/22850", "diff_url": "https://github.com/huggingface/transformers/pull/22850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22850.patch", "merged_at": 1681908868000 }
https://api.github.com/repos/huggingface/transformers/issues/22849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22849/comments
https://api.github.com/repos/huggingface/transformers/issues/22849/events
https://github.com/huggingface/transformers/issues/22849
1,674,201,310
I_kwDOCUB6oc5jykje
22,849
Fine-tuning wav2vec 2.0 with `torch.compile`
{ "login": "w11wo", "id": 23167175, "node_id": "MDQ6VXNlcjIzMTY3MTc1", "avatar_url": "https://avatars.githubusercontent.com/u/23167175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/w11wo", "html_url": "https://github.com/w11wo", "followers_url": "https://api.github.com/users/w11wo/followers", "following_url": "https://api.github.com/users/w11wo/following{/other_user}", "gists_url": "https://api.github.com/users/w11wo/gists{/gist_id}", "starred_url": "https://api.github.com/users/w11wo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/w11wo/subscriptions", "organizations_url": "https://api.github.com/users/w11wo/orgs", "repos_url": "https://api.github.com/users/w11wo/repos", "events_url": "https://api.github.com/users/w11wo/events{/privacy}", "received_events_url": "https://api.github.com/users/w11wo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hi @w11wo, thanks for raising this issue! \r\n\r\nPlease note that whilst we aim to support a wide variety of use cases with our examples, `torch_compile` is an experimental flag and not one we guarantee will work for for all of our models as the support is progressively rolled in in PyTorch. ", "Hi @amyeroberts, no worries and thanks for the heads up. Looking forward to seeing wav2vec 2.0 supported. Cheers.", "Hey @w11wo! Sorry for the late reply here and thanks for the detailed issue description! I had a quick look, and the issue seems to reside with the `_compute_mask_indices` function:\r\nhttps://github.com/huggingface/transformers/blob/4baa34c18f18274fe028ad5a5511ea3fba9eeece/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L132\r\nThe function is both dynamic and in NumPy - we'd need to make the function static (fixed shapes) for it to be compatible with torch compile. I sadly won't have time to look into this myself, but feel free to open a PR if you want to take a stab at updating this!\r\n\r\nIn the meantime, you can set SpecAug to 0 to avoid calling this dynamic function - you'll loose regularisation in the feature encoder outputs, but you should be able to torch compile the model. To do this, you simply need to set `apply_spec_augment` to False in the config: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/54074b1c16f4de6a5ad59affb4caa8f2ea03a119/config.json#L4", "cc @hollance ", "Hey @w11wo - any luck here? Did it work with specaug set to 0?", "Hi @sanchit-gandhi, unfortunately I haven't been able to test it out without SpecAugment, since my use case requires it to be used. I will try and test it out when I can.", "Hey @w11wo - sure, sounds good! The offer still stands for opening a PR to fix this if you feel like having a go at re-working the SpecAug logic in the modelling file, think this could make for a nice PR :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Extending the offer of opening a PR to fix the SpecAug logic in the modelling file to the community! Would be a nice PR addition to re-work the SpecAug function so that it's compatible with torch compile (note that `torch.compile` is not guaranteed for the transformers library, but is a nice feature if it can be done without backwards breaking changes)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.0 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```diff python run_audio_classification.py \ --model_name_or_path facebook/wav2vec2-base \ --dataset_name superb \ --dataset_config_name ks \ --output_dir wav2vec2-base-ft-keyword-spotting \ --overwrite_output_dir \ --remove_unused_columns False \ --do_train \ --do_eval \ --fp16 \ --learning_rate 3e-5 \ --max_length_seconds 1 \ --attention_mask False \ --warmup_ratio 0.1 \ --num_train_epochs 5 \ --per_device_train_batch_size 32 \ --gradient_accumulation_steps 4 \ --per_device_eval_batch_size 32 \ --dataloader_num_workers 4 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --metric_for_best_model accuracy \ --save_total_limit 3 \ --seed 0 \ + --torch_compile True ``` ### Expected behavior I followed the example to fine-tune wav2vec 2.0 for [audio classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu), with the exception of using `torch.compile`, aiming to get faster training. However, I ran to an issue as follows <details> <summary> Error Log </summary> ``` [INFO|trainer.py:1769] 2023-04-19 05:28:50,832 >> ***** Running training ***** [INFO|trainer.py:1770] 2023-04-19 05:28:50,832 >> Num examples = 51,094 [INFO|trainer.py:1771] 2023-04-19 05:28:50,832 >> Num Epochs = 5 [INFO|trainer.py:1772] 2023-04-19 05:28:50,832 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1773] 2023-04-19 05:28:50,832 >> Total train batch size (w. parallel, distributed & accumulation) = 128 [INFO|trainer.py:1774] 2023-04-19 05:28:50,833 >> Gradient Accumulation steps = 4 [INFO|trainer.py:1775] 2023-04-19 05:28:50,833 >> Total optimization steps = 1,995 [INFO|trainer.py:1776] 2023-04-19 05:28:50,834 >> Number of trainable parameters = 90,371,212 0%| | 0/1995 [00:00<?, ?it/s]/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. tensor = as_tensor(value) /opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. 
tensor = as_tensor(value) /opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. tensor = as_tensor(value) /opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. tensor = as_tensor(value) [2023-04-19 05:28:54,741] torch._inductor.utils: [WARNING] using triton random, expect difference from eager Traceback (most recent call last): File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler compiled_fn = compiler_fn(gm, self.fake_example_inputs()) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper compiled_gm = compiler_fn(gm, example_inputs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/__init__.py", line 1390, in __call__ return compile_fx(model_, inputs_, config_patches=self.config) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 455, in compile_fx return aot_autograd( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 48, in compiler_fn cg = aot_module_simplified(gm, example_inputs, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2805, in aot_module_simplified compiled_fn = create_aot_dispatcher_function( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2498, in create_aot_dispatcher_function compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1713, in aot_wrapper_dedupe return compiler_fn(flat_fn, leaf_flat_args, aot_config) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2087, in aot_dispatch_autograd fx_g = make_fx(joint_forward_backward, aot_config.decompositions)( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace graph = tracer.trace(root, concrete_args) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace (self.create_arg(fn(*args)),), File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn tree_out = root_fn(*tree_args) File 
"/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped out = f(*tensors) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1156, in traced_joint return functionalized_f_helper(primals, tangents) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1108, in functionalized_f_helper f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_input_mutations) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1076, in flat_fn_no_input_mutations outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, maybe_tangents, meta, keep_input_mutations) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1048, in flat_fn_with_synthetic_bases_expanded outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, keep_input_mutations) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1017, in forward_or_joint backward_out = torch.autograd.grad( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 269, in grad return handle_torch_function( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function result = mode.__torch_function__(public_api, types, args, kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__ return func(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 303, in grad return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 487, in __torch_dispatch__ return self.inner_torch_dispatch(func, types, args, kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 512, in inner_torch_dispatch out = proxy_call(self, func, args, kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 345, in proxy_call out = func(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_ops.py", line 287, in __call__ return self._op(*args, **kwargs or {}) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1162, in dispatch op_impl_out = op_impl(self, func, *args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 453, in index_tensor check_no_bool_index_tensors(func, *args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 432, in check_no_bool_index_tensors raise DynamicOutputShapeException(func) torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.index.Tensor The above exception was the direct cause of the following 
exception: Traceback (most recent call last): File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 418, in <module> main() File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 392, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1662, in train return inner_training_loop( File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2699, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2731, in compute_loss outputs = model(**inputs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1817, in forward outputs = self.wav2vec2( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1316, in forward hidden_states = self._mask_hidden_states( File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1249, in _mask_hidden_states if not getattr(self.config, "apply_spec_augment", True): File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1259, in <graph break in _mask_hidden_states> mask_time_indices = _compute_mask_indices( File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1266, in <graph break in _mask_hidden_states> mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors return callback(frame, cache_size, hooks) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame result = inner_convert(frame, cache_size, hooks) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert return _compile( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile out_code = transform_code_object(code, transform) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object transformations(instructions, code_options) File 
"/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform tracer.run() File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run super().run() File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run and self.step() File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step getattr(self, inst.opname)(inst) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1792, in RETURN_VALUE self.output.compile_subgraph( File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 517, in compile_subgraph self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler raise BackendCompilerFailed(self.compiler_fn, e) from e torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised DynamicOutputShapeException: aten.index.Tensor Set torch._dynamo.config.verbose=True for more information You can suppress this exception and fall back to eager by setting: torch._dynamo.config.suppress_errors = True ``` </details> I suspect that wav2vec 2.0 is not yet supported in PyTorch 2.0 and needs some modification to ensure compatibility when running `torch.compile`. The same error occurred when fine-tuning for automatic speech recognition.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22849/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22849/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22848/comments
https://api.github.com/repos/huggingface/transformers/issues/22848/events
https://github.com/huggingface/transformers/issues/22848
1,674,146,182
I_kwDOCUB6oc5jyXGG
22,848
Add LLaVA model
{ "login": "youssefadr", "id": 104783077, "node_id": "U_kgDOBj7c5Q", "avatar_url": "https://avatars.githubusercontent.com/u/104783077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/youssefadr", "html_url": "https://github.com/youssefadr", "followers_url": "https://api.github.com/users/youssefadr/followers", "following_url": "https://api.github.com/users/youssefadr/following{/other_user}", "gists_url": "https://api.github.com/users/youssefadr/gists{/gist_id}", "starred_url": "https://api.github.com/users/youssefadr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/youssefadr/subscriptions", "organizations_url": "https://api.github.com/users/youssefadr/orgs", "repos_url": "https://api.github.com/users/youssefadr/repos", "events_url": "https://api.github.com/users/youssefadr/events{/privacy}", "received_events_url": "https://api.github.com/users/youssefadr/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@sgugger and @youssefadr, I want to work on this issue. I am new to open source and hugging face, can u pls provide me some guidance to work on this issue. Any reference issue that helps on getting an idea on this..pls help me out.", "@sushmanthreddy It's great you want to contribute a model! \r\n\r\nThere's a [detailed guide in the docs](https://huggingface.co/docs/transformers/add_new_model) outlining important information about the model class, how it fits in the library and the steps to take to add a model. Let us know if there's anything which is unclear or you hit a blocker. Looking forward to seeing the PR! πŸ€— ", "@sushmanthreddy I don't know if you are still planning to work on this model, but if not, I would be glad to contribute on my side πŸ€—! \r\n\r\nPlease let me know if working on this issue is still in your plans πŸ™‚!", "@youssefadr Sorry, I am busy with my google summer of code work...couldn't contribute much you can go ahead and contribute to it", "Hello @youssefadr are you going to take on the work of adding this model in? I'd be happy to collaborate or take this task on ", "@sushmanthreddy Okay, thank you and good luck with your Google program!\r\n\r\n@jprivera44 Hello! Yes, I am going to open a draft PR this week, do not hesitate to collaborate!", "That's fantastic @youssefadr, do you mind adding me as a collaborator on your branch so we can plan there on which sections of LLava we are going to tackle? I've got time today to create a branch and add you there if you prefer. Excited for this :) \r\n\r\n@amyeroberts any other suggestions on the best way to collaborate with peers on a new model such as this? I read through the suggestions and I appreciate the philosophy of transformers.", "@jprivera44 @youssefadr - great to hear that you're both keen to work on this model! \r\n\r\nThe main piece of advice I have if you're both collaborating on a PR is to make sure that it's clear who is working on what and when - you don't want to find out that one piece has been implemented twice! If working on the same branch, make sure not to force push as well :) \r\n\r\n", "Thanks @amyeroberts, I'm waiting for the approval for the LLaMA weights from Meta at the moment do you know if there is any way to speed up that process?\r\n\r\n@youssefadr hey nice job with the pr! I noticed you added a lot of changes, are you working with the 7B, 13B, or 65B parameter count?", "@jprivera44 I am planning to work with 7B parameter checkpoint. I think it would be better if we could communicate directly to better collaborate on this model together. What do you think of discussing through Discord ? Here is my username 'Youssef Adarrab#3595'", "Fantastic, I'll reach out to you on discord. " ]
1,681
1,686
null
CONTRIBUTOR
null
### Model description [LLaVA](https://llava-vl.github.io/) is a multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, "achieving impressive chat capabilities mimicking spirits of the multimodal GPT-4". ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/haotian-liu/LLaVA
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22848/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22848/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22847/comments
https://api.github.com/repos/huggingface/transformers/issues/22847/events
https://github.com/huggingface/transformers/issues/22847
1,674,136,259
I_kwDOCUB6oc5jyUrD
22,847
Creating XLNetTokenizer from Custom ByteLevelBPETokenizer Throws OSError
{ "login": "sam-hieken", "id": 99104112, "node_id": "U_kgDOBeg1cA", "avatar_url": "https://avatars.githubusercontent.com/u/99104112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-hieken", "html_url": "https://github.com/sam-hieken", "followers_url": "https://api.github.com/users/sam-hieken/followers", "following_url": "https://api.github.com/users/sam-hieken/following{/other_user}", "gists_url": "https://api.github.com/users/sam-hieken/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-hieken/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-hieken/subscriptions", "organizations_url": "https://api.github.com/users/sam-hieken/orgs", "repos_url": "https://api.github.com/users/sam-hieken/repos", "events_url": "https://api.github.com/users/sam-hieken/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-hieken/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "UPDATE: According to [this summary](https://huggingface.co/docs/transformers/tokenizer_summary#sentencepiece), XLNet uses SentencePiece tokenization; so, I tried swapping in a `SentencePieceBPETokenizer` instead of a `ByteLevelBPETokenizer` (you should really include `SentencePieceBPETokenizer` in the docs by the way... the only mention I could find of it was [here](https://discuss.huggingface.co/t/training-sentencepiece-from-scratch/3477/2)). I'm receiving the exact same issue though.\r\n\r\nAlso, looks like I didn't include the code for training the tokenizer above, so I'll drop it down here:\r\n\r\n```\r\ndef batch_iterator(dataset):\r\n for i in dataset:\r\n yield i[\"text\"]\r\n\r\ndef getTokenizer(train=True, train_dataset=None):\r\n tokenizer = None\r\n\r\n if train:\r\n tokenizer = SentencePieceBPETokenizer()\r\n\r\n print(\"Training tokenizer...\")\r\n tokenizer.train_from_iterator(batch_iterator(train_dataset), show_progress=True, vocab_size=VOCAB_SIZE, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n NEWLINE\r\n ])\r\n print(\"Training complete. Saving tokenizer...\")\r\n\r\n tokenizer.save_model(\"tokenizer\")\r\n```\r\n\r\n... and my dataset:\r\n\r\n```\r\nDataset({\r\n features: ['text'],\r\n num_rows: 2080000\r\n})\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello, I'm running into some issues using a custom tokenizer with XLNet. I have a ByteLevelBPETokenizer (located in `./tokenizer`) that I already trained, but when trying to load it with XLNetTokenizer, I get an OSError. ``` >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("tokenizer", local_files_only=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1795, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'tokenizer'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'tokenizer' is the correct path to a directory containing all relevant files for a XLNetTokenizer tokenizer. ``` I've read in a few places that it's likely due to a missing `tokenizer_config.json`, so I tried dropping in the default one from `xlnet-base-cased`. ``` { "additional_special_tokens": [ "<eop>", "<eod>" ], "bos_token": "<s>", "clean_up_tokenization_spaces": true, "cls_token": "<cls>", "do_lower_case": false, "eos_token": "</s>", "keep_accents": false, "mask_token": { "__type": "AddedToken", "content": "<mask>", "lstrip": true, "normalized": true, "rstrip": false, "single_word": false }, "model_max_length": 1000000000000000019884624838656, "pad_token": "<pad>", "remove_space": true, "sep_token": "<sep>", "sp_model_kwargs": {}, "tokenizer_class": "XLNetTokenizer", "unk_token": "<unk>" } ``` ... which led to an even stranger error: ``` >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("tokenizer", local_files_only=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained return cls._from_pretrained( File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/hiekense/.local/lib/python3.9/site-packages/transformers/models/xlnet/tokenization_xlnet.py", line 179, in __init__ self.sp_model.Load(vocab_file) File "/home/hiekense/.local/lib/python3.9/site-packages/sentencepiece/__init__.py", line 905, in Load return self.LoadFromFile(model_file) File "/home/hiekense/.local/lib/python3.9/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) TypeError: not a string ``` Also, there's nothing wrong with the tokenizer itself - testing it with GPT2's tokenizer (`GPT2Tokenizer.from_pretrained("tokenizer", local_files_only=True)`) yielded no errors. Thank you. ### Expected behavior N/A
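The `TypeError: not a string` above stems from `XLNetTokenizer` expecting a SentencePiece `.model` file rather than the `vocab.json`/`merges.txt` pair that a byte-level BPE tokenizer saves (with no `spiece.model` found, `vocab_file` is `None` when it reaches `sp_model.Load`). A minimal sketch of producing the file the slow XLNet tokenizer actually loads; the corpus path, vocab size, and model type below are assumptions for illustration, not taken from the issue:

```python
import sentencepiece as spm
from transformers import XLNetTokenizer

# Train a SentencePiece model; "corpus.txt" is a placeholder plain-text corpus,
# and the vocab size / model type are illustrative choices.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="tokenizer/spiece",  # XLNetTokenizer looks for spiece.model
    vocab_size=32000,
    model_type="unigram",
)

# Point the slow XLNet tokenizer at the resulting .model file.
tokenizer = XLNetTokenizer(vocab_file="tokenizer/spiece.model")
print(tokenizer.tokenize("Hello world"))
```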
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22847/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22846/comments
https://api.github.com/repos/huggingface/transformers/issues/22846/events
https://github.com/huggingface/transformers/issues/22846
1,674,033,888
I_kwDOCUB6oc5jx7rg
22,846
NameError: name 'PartialState' is not defined.
{ "login": "gli-mrunal", "id": 77198742, "node_id": "MDQ6VXNlcjc3MTk4NzQy", "avatar_url": "https://avatars.githubusercontent.com/u/77198742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gli-mrunal", "html_url": "https://github.com/gli-mrunal", "followers_url": "https://api.github.com/users/gli-mrunal/followers", "following_url": "https://api.github.com/users/gli-mrunal/following{/other_user}", "gists_url": "https://api.github.com/users/gli-mrunal/gists{/gist_id}", "starred_url": "https://api.github.com/users/gli-mrunal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gli-mrunal/subscriptions", "organizations_url": "https://api.github.com/users/gli-mrunal/orgs", "repos_url": "https://api.github.com/users/gli-mrunal/repos", "events_url": "https://api.github.com/users/gli-mrunal/events{/privacy}", "received_events_url": "https://api.github.com/users/gli-mrunal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing, as this issue is a duplicate of a comment on #22816, where it it being followed up on. " ]
1,681
1,681
1,681
NONE
null
I am using the following versions of transformers, datasets and huggingface_hub. ![image](https://user-images.githubusercontent.com/77198742/232941383-cc398bb4-88c0-4a12-9ff1-c59f8c5aa1a6.png) I am running into the following error: ```sh NameError: name 'PartialState' is not defined. ``` How can I resolve this issue while keeping my versions of transformers, datasets and huggingface_hub? My code was running just fine until yesterday.
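As the duplicate thread (#22816) discusses, `PartialState` is imported from `accelerate` inside the Trainer machinery, so this `NameError` usually means `accelerate` is missing or too old for the installed `transformers`. A hedged sanity-check sketch; the version floor in the comment is an assumption, not an official pin:

```python
import importlib.metadata

print(importlib.metadata.version("transformers"))
try:
    # This import is where the name ultimately comes from.
    from accelerate import PartialState  # noqa: F401
    print(importlib.metadata.version("accelerate"))
except ImportError:
    # e.g. run: pip install -U "accelerate>=0.19.0"  (floor assumed for illustration)
    print("accelerate is missing or too old for this transformers version")
```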
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22846/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22846/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22845/comments
https://api.github.com/repos/huggingface/transformers/issues/22845/events
https://github.com/huggingface/transformers/issues/22845
1,673,939,780
I_kwDOCUB6oc5jxktE
22,845
CodeGenAttention does not work with defaults in forward pass
{ "login": "sgunasekar", "id": 8418631, "node_id": "MDQ6VXNlcjg0MTg2MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8418631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgunasekar", "html_url": "https://github.com/sgunasekar", "followers_url": "https://api.github.com/users/sgunasekar/followers", "following_url": "https://api.github.com/users/sgunasekar/following{/other_user}", "gists_url": "https://api.github.com/users/sgunasekar/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgunasekar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgunasekar/subscriptions", "organizations_url": "https://api.github.com/users/sgunasekar/orgs", "repos_url": "https://api.github.com/users/sgunasekar/repos", "events_url": "https://api.github.com/users/sgunasekar/events{/privacy}", "received_events_url": "https://api.github.com/users/sgunasekar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sgunasekar \r\nThanks for the issue\r\nAs per my understanding, since the class `CodeGenAttention` is not a public class, it should be only used by `CodeGenModel`. In the modeling script if position ids is `None` we indeed manually create `position_ids` based on past length and sequence length. I personally don't think we should do this inside `CodeGenAttention`, but if you want to use that class as a standalone class you should manually create position ids and pass it in the forward pass.\r\nI also want to hear from @ArthurZucker, @sgugger @amyeroberts to make sure we are aligned on this", "πŸ‘πŸ» on @younesbelkada 's answer, on almost all of our attention modules, the attention should be passed and there are not reason to give them a default value because this is handled in the modelling file. ", "Completely in line with the comments above.", "+1 - As @younesbelkada says, `CodeGenAttention` isn't a public class and this is easily resolved by passing in `position_ids` directly to the layer.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.15.0-1034-azure-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0a0+1767026 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? The current version of models.codegen.modeling_codegen.CodeGenAttention forward throws an error on line 193 when position_ids are not specified and default to None. This can be fixed by defining default position_ids as self.position_ids in the init. The issue was introduced in commit 4e94c6c. @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers.models.codegen.modeling_codegen import CodeGenAttention from transformers import AutoConfig, AutoModelForCausalLM config = AutoConfig.from_pretrained("Salesforce/codegen-350M-nl") model = CodeGenAttention(config) x = torch.randn(4, config.n_ctx, config.n_embd) model(x) ``` ### Expected behavior The block should instantiate CodeGenAttention with default position_ids of torch.arange(seq_offset, seq_len)
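Per the maintainers' comments in this record, the supported standalone usage is to build `position_ids` by hand and pass them in. A minimal sketch against the 4.28-era signature (keyword names and return shape assumed from that version):

```python
import torch
from transformers import AutoConfig
from transformers.models.codegen.modeling_codegen import CodeGenAttention

config = AutoConfig.from_pretrained("Salesforce/codegen-350M-nl")
attn = CodeGenAttention(config)

batch_size, seq_len = 4, config.n_ctx
hidden = torch.randn(batch_size, seq_len, config.n_embd)

# Recreate what CodeGenModel would normally supply: one position id per token,
# broadcast across the batch. With a past cache, offset the range by past length.
position_ids = torch.arange(seq_len).unsqueeze(0).expand(batch_size, -1)

attn_output, _ = attn(hidden, position_ids=position_ids)
print(attn_output.shape)  # (4, seq_len, n_embd)
```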
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22845/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22844/comments
https://api.github.com/repos/huggingface/transformers/issues/22844/events
https://github.com/huggingface/transformers/pull/22844
1,673,931,200
PR_kwDOCUB6oc5OnzPs
22,844
Make ClipSeg compatible with model parallelism
{ "login": "youssefadr", "id": 104783077, "node_id": "U_kgDOBj7c5Q", "avatar_url": "https://avatars.githubusercontent.com/u/104783077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/youssefadr", "html_url": "https://github.com/youssefadr", "followers_url": "https://api.github.com/users/youssefadr/followers", "following_url": "https://api.github.com/users/youssefadr/following{/other_user}", "gists_url": "https://api.github.com/users/youssefadr/gists{/gist_id}", "starred_url": "https://api.github.com/users/youssefadr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/youssefadr/subscriptions", "organizations_url": "https://api.github.com/users/youssefadr/orgs", "repos_url": "https://api.github.com/users/youssefadr/repos", "events_url": "https://api.github.com/users/youssefadr/events{/privacy}", "received_events_url": "https://api.github.com/users/youssefadr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "My pleasure, I'm glad I could help!" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Add model parallelism for `ClipSeg`. Related to #22561 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22844/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22844", "html_url": "https://github.com/huggingface/transformers/pull/22844", "diff_url": "https://github.com/huggingface/transformers/pull/22844.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22844.patch", "merged_at": 1681860719000 }
https://api.github.com/repos/huggingface/transformers/issues/22843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22843/comments
https://api.github.com/repos/huggingface/transformers/issues/22843/events
https://github.com/huggingface/transformers/pull/22843
1,673,900,739
PR_kwDOCUB6oc5Onsok
22,843
Fix default position_ids in CodeGenAttention module
{ "login": "sgunasekar", "id": 8418631, "node_id": "MDQ6VXNlcjg0MTg2MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8418631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgunasekar", "html_url": "https://github.com/sgunasekar", "followers_url": "https://api.github.com/users/sgunasekar/followers", "following_url": "https://api.github.com/users/sgunasekar/following{/other_user}", "gists_url": "https://api.github.com/users/sgunasekar/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgunasekar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgunasekar/subscriptions", "organizations_url": "https://api.github.com/users/sgunasekar/orgs", "repos_url": "https://api.github.com/users/sgunasekar/repos", "events_url": "https://api.github.com/users/sgunasekar/events{/privacy}", "received_events_url": "https://api.github.com/users/sgunasekar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22843). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
CodeGenAttention forward throws an error when position_ids are not specified. # What does this PR do? The current version of models.codegen.modeling_codegen.CodeGenAttention forward throws an error on line 193 when position_ids are not specified and default to None. This PR adds a default behavior by defining default position_ids as self.position_ids in the init. <!-- Remove if not applicable --> Fixes # (models.codegen.modeling_codegen.CodeGenAttention forward throws an error on line 193 when position_ids are not specified and default to None) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22843/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22843", "html_url": "https://github.com/huggingface/transformers/pull/22843", "diff_url": "https://github.com/huggingface/transformers/pull/22843.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22843.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22842/comments
https://api.github.com/repos/huggingface/transformers/issues/22842/events
https://github.com/huggingface/transformers/pull/22842
1,673,711,757
PR_kwDOCUB6oc5OnDzs
22,842
None check for encoder
{ "login": "sfilios", "id": 20328487, "node_id": "MDQ6VXNlcjIwMzI4NDg3", "avatar_url": "https://avatars.githubusercontent.com/u/20328487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sfilios", "html_url": "https://github.com/sfilios", "followers_url": "https://api.github.com/users/sfilios/followers", "following_url": "https://api.github.com/users/sfilios/following{/other_user}", "gists_url": "https://api.github.com/users/sfilios/gists{/gist_id}", "starred_url": "https://api.github.com/users/sfilios/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfilios/subscriptions", "organizations_url": "https://api.github.com/users/sfilios/orgs", "repos_url": "https://api.github.com/users/sfilios/repos", "events_url": "https://api.github.com/users/sfilios/events{/privacy}", "received_events_url": "https://api.github.com/users/sfilios/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey! As you can see from the red tests, this cannot be merged as it is breaking a lot of the API πŸ˜… \r\n" ]
1,681
1,682
1,682
NONE
null
In the case that the BartForConditionalGeneration decoder is being used without an encoder, this change maintains the ability to resize embeddings. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [Not Necessary] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [Not Necessary] Did you write any new necessary tests? @ArthurZucker and @younesbelkada
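For context, a toy sketch of the kind of guard this (ultimately unmerged) PR proposed; the class and attribute names are illustrative stand-ins for the Bart layout, not the actual diff:

```python
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Stand-in for the Bart layout: the encoder may be None in decoder-only use."""

    def __init__(self, with_encoder: bool):
        super().__init__()
        self.shared = nn.Embedding(10, 4)
        self.encoder = nn.Module() if with_encoder else None
        self.decoder = nn.Module()

    def set_input_embeddings(self, value: nn.Embedding):
        self.shared = value
        if self.encoder is not None:  # the guard: skip when no encoder exists
            self.encoder.embed_tokens = self.shared
        self.decoder.embed_tokens = self.shared

# Resizing/replacing embeddings no longer crashes without an encoder.
TinySeq2Seq(with_encoder=False).set_input_embeddings(nn.Embedding(12, 4))
```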
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22842/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22842", "html_url": "https://github.com/huggingface/transformers/pull/22842", "diff_url": "https://github.com/huggingface/transformers/pull/22842.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22842.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22841/comments
https://api.github.com/repos/huggingface/transformers/issues/22841/events
https://github.com/huggingface/transformers/pull/22841
1,673,572,451
PR_kwDOCUB6oc5OmlvW
22,841
Raise err if minimum Accelerate version isn't available
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR will raise an explicit `ImportError` during `TrainingArguments` if `Accelerate` isn't installed (or isn't the required minimal version) and Accelerate is going to be utilized. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
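A hedged sketch of the kind of guard described; the function name and the version floor are illustrative, not the exact `transformers` internals:

```python
import importlib.metadata

from packaging import version

ACCELERATE_MIN_VERSION = "0.19.0"  # assumed floor for illustration only

def require_min_accelerate() -> None:
    """Raise an explicit ImportError when accelerate is absent or too old."""
    try:
        installed = importlib.metadata.version("accelerate")
    except importlib.metadata.PackageNotFoundError:
        raise ImportError(
            f"Using `TrainingArguments` with PyTorch requires "
            f"accelerate>={ACCELERATE_MIN_VERSION}: run `pip install -U accelerate`."
        )
    if version.parse(installed) < version.parse(ACCELERATE_MIN_VERSION):
        raise ImportError(
            f"accelerate>={ACCELERATE_MIN_VERSION} is required, found {installed}."
        )
```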
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22841/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22841", "html_url": "https://github.com/huggingface/transformers/pull/22841", "diff_url": "https://github.com/huggingface/transformers/pull/22841.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22841.patch", "merged_at": 1681842303000 }
https://api.github.com/repos/huggingface/transformers/issues/22840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22840/comments
https://api.github.com/repos/huggingface/transformers/issues/22840/events
https://github.com/huggingface/transformers/pull/22840
1,673,526,720
PR_kwDOCUB6oc5Omb-M
22,840
Add `automatic-mask-generation` pipeline for Segment Anything Model (SAM)
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you all for your reviews! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22840). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,682
1,682
COLLABORATOR
null
# What does this PR do? This needs the SAM model + a rebase once it is merged ```python from transformers import pipeline import matplotlib.pyplot as plt from PIL import Image import numpy as np import time generator = pipeline("automatic-mask-generation", device = 0) image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" dog_url = "/home/arthur_huggingface_co/transformers/Arthur/dog.jpg" raw_image = Image.open(dog_url).convert("RGB") start = time.time() outputs = generator(raw_image, points_per_batch = 256, pred_iou_thresh=1) print(f"point_batch_size : {256}, {time.time() - start}") def show_mask(mask, ax, random_color=False): if random_color: color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0) else: color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6]) h, w = mask.shape[-2:] mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) ax.imshow(mask_image) plt.imshow(np.array(raw_image)) ax = plt.gca() for mask in outputs["masks"]: show_mask(mask, ax=ax, random_color=True) plt.axis("off") plt.show() plt.savefig("dog_results_2.png") ``` ![image](https://user-images.githubusercontent.com/48595927/232851728-936eb6bc-0765-48db-bda6-787cb79205f7.png) ![image](https://user-images.githubusercontent.com/48595927/232851676-d025beb2-07cd-4f0d-9e53-d4ff23456c93.png) <img width="621" alt="image" src="https://user-images.githubusercontent.com/48595927/232853562-9858cdc5-dc1c-41b3-b067-1ea013c63e0f.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22840/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22840", "html_url": "https://github.com/huggingface/transformers/pull/22840", "diff_url": "https://github.com/huggingface/transformers/pull/22840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22840.patch", "merged_at": 1682011645000 }
https://api.github.com/repos/huggingface/transformers/issues/22839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22839/comments
https://api.github.com/repos/huggingface/transformers/issues/22839/events
https://github.com/huggingface/transformers/pull/22839
1,673,514,847
PR_kwDOCUB6oc5OmZYs
22,839
Fix weight tying in TF-ESM
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also cc @gante in case he hates how I handled weight tying here, I don't want to break TF convention too much!", "_The documentation is not available anymore as the PR was closed or merged._", "@Rocketknight1 I'm cool with this :D " ]
1,681
1,682
1,682
MEMBER
null
TF ESM cloned weights instead of tying them, which worked when loading from PT but broke when loading from safetensors. This PR resolves the issue by correctly tying weights when this is enabled in the config. Fixes an ongoing CI error raised by @ydshieh
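To illustrate the tie-versus-clone distinction in Keras terms (a generic sketch, not the actual ESM code):

```python
import tensorflow as tf

class TiedLMHead(tf.keras.layers.Layer):
    """Projects hidden states back to vocab logits by reusing the embedding matrix."""

    def __init__(self, embedding: tf.keras.layers.Embedding, **kwargs):
        super().__init__(**kwargs)
        self.embedding = embedding  # shared reference, not a copied weight

    def call(self, hidden_states):
        # A single tensor serves both directions, so a safetensors save/load
        # round-trip cannot leave two "copies" of the weight out of sync.
        return tf.matmul(hidden_states, self.embedding.embeddings, transpose_b=True)

emb = tf.keras.layers.Embedding(100, 16)
emb.build((None,))  # materialize the shared matrix
head = TiedLMHead(emb)
print(head(tf.zeros((2, 5, 16))).shape)  # (2, 5, 100)
```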
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22839/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22839/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22839", "html_url": "https://github.com/huggingface/transformers/pull/22839", "diff_url": "https://github.com/huggingface/transformers/pull/22839.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22839.patch", "merged_at": 1682002232000 }
https://api.github.com/repos/huggingface/transformers/issues/22838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22838/comments
https://api.github.com/repos/huggingface/transformers/issues/22838/events
https://github.com/huggingface/transformers/pull/22838
1,673,498,601
PR_kwDOCUB6oc5OmV8t
22,838
🌐 [i18n-KO] Translated `tasks/masked_language_modeling.mdx` to Korean
{ "login": "HanNayeoniee", "id": 33839093, "node_id": "MDQ6VXNlcjMzODM5MDkz", "avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanNayeoniee", "html_url": "https://github.com/HanNayeoniee", "followers_url": "https://api.github.com/users/HanNayeoniee/followers", "following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}", "gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions", "organizations_url": "https://api.github.com/users/HanNayeoniee/orgs", "repos_url": "https://api.github.com/users/HanNayeoniee/repos", "events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}", "received_events_url": "https://api.github.com/users/HanNayeoniee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,682
1,682
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹Ή --> # What does this PR do? Translated the `tasks/masked_language_modeling.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제좜 μ „ 체크리슀트둜, κ°€μ§œμ—°κ΅¬μ†Œλ§Œμ˜ μ²΄ν¬λ¦¬μŠ€νŠΈλ„ <details>둜 κ°μ‹Έμ„œ λ§Œλ“€μ–΄λ‘λ©΄ 더 쒋을 것 κ°™μ•„μš”. --> ## Who can review? <!-- κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22838/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22838", "html_url": "https://github.com/huggingface/transformers/pull/22838", "diff_url": "https://github.com/huggingface/transformers/pull/22838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22838.patch", "merged_at": 1682341341000 }
https://api.github.com/repos/huggingface/transformers/issues/22837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22837/comments
https://api.github.com/repos/huggingface/transformers/issues/22837/events
https://github.com/huggingface/transformers/pull/22837
1,673,436,018
PR_kwDOCUB6oc5OmIkS
22,837
Fix from_pretrained when model is instantiated on the meta device
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? #22437 broke the `from_pretrained` method whenever the model is instantiated on the meta device and the state dict passed is not complete (see [this issue](https://github.com/huggingface/accelerate/issues/1333) for one example). Basically, the check will remove all keys from `missing_keys` since all parameters on the meta device share the same data pointer. I had advocated using another solution in that PR but the contributor did not listen. Since we rely on those `missing_keys` later on to re-initialize the weights that are not in the state dict, the model ends up with weights on the meta device.
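A small sketch of why name-based bookkeeping is the safer comparison here: on the meta device every parameter reports the same (null) storage pointer, so pointer-based dedup collapses all keys, while matching checkpoint keys by name still works. This is a toy model, not the actual `from_pretrained` code:

```python
import torch

model = torch.nn.Linear(4, 4)
state_dict = {"weight": torch.zeros(4, 4)}  # incomplete checkpoint: no "bias"

# Compare by key name rather than by tensor data pointer.
missing_keys = [k for k in model.state_dict() if k not in state_dict]
print(missing_keys)  # ['bias'] -> gets re-initialized instead of staying on meta
```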
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22837/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22837", "html_url": "https://github.com/huggingface/transformers/pull/22837", "diff_url": "https://github.com/huggingface/transformers/pull/22837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22837.patch", "merged_at": 1681840458000 }
https://api.github.com/repos/huggingface/transformers/issues/22836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22836/comments
https://api.github.com/repos/huggingface/transformers/issues/22836/events
https://github.com/huggingface/transformers/pull/22836
1,673,394,219
PR_kwDOCUB6oc5Ol_uT
22,836
Neptune fix bug init run
{ "login": "AleksanderWWW", "id": 58885668, "node_id": "MDQ6VXNlcjU4ODg1NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/58885668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AleksanderWWW", "html_url": "https://github.com/AleksanderWWW", "followers_url": "https://api.github.com/users/AleksanderWWW/followers", "following_url": "https://api.github.com/users/AleksanderWWW/following{/other_user}", "gists_url": "https://api.github.com/users/AleksanderWWW/gists{/gist_id}", "starred_url": "https://api.github.com/users/AleksanderWWW/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AleksanderWWW/subscriptions", "organizations_url": "https://api.github.com/users/AleksanderWWW/orgs", "repos_url": "https://api.github.com/users/AleksanderWWW/repos", "events_url": "https://api.github.com/users/AleksanderWWW/events{/privacy}", "received_events_url": "https://api.github.com/users/AleksanderWWW/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Do you know more or less when will that be released?", "@AleksanderWWW It was released this week in v4.29.0", "Ah yes, my bad. I didn't realize that I had a bug in my own tests :smile: Thank you @amyeroberts!" ]
1,681
1,683
1,682
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> We realised that the `init_run` function embedded in the integration was accepting a deprecated kwarg `run`, which was replaced with `with_id` some time ago. Without this fix, there might be cases where the NeptuneCallback does not run correctly and throws an error that the function received an unexpected argument. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
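A short sketch of the kwarg change, following Neptune's documented 1.x API; the project name and run id are placeholders:

```python
import neptune  # requires NEPTUNE_API_TOKEN to be set for a real connection

# Resuming an existing run: `with_id` is the current keyword; `run` is deprecated.
run = neptune.init_run(
    project="my-workspace/my-project",  # placeholder project
    with_id="PROJ-123",                 # placeholder run id
)
```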
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22836/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22836", "html_url": "https://github.com/huggingface/transformers/pull/22836", "diff_url": "https://github.com/huggingface/transformers/pull/22836.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22836.patch", "merged_at": 1682427065000 }
https://api.github.com/repos/huggingface/transformers/issues/22835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22835/comments
https://api.github.com/repos/huggingface/transformers/issues/22835/events
https://github.com/huggingface/transformers/pull/22835
1,673,384,031
PR_kwDOCUB6oc5Ol9jq
22,835
Include decoder_attention_mask in T5 model inputs
{ "login": "aashiqmuhamed", "id": 17514579, "node_id": "MDQ6VXNlcjE3NTE0NTc5", "avatar_url": "https://avatars.githubusercontent.com/u/17514579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aashiqmuhamed", "html_url": "https://github.com/aashiqmuhamed", "followers_url": "https://api.github.com/users/aashiqmuhamed/followers", "following_url": "https://api.github.com/users/aashiqmuhamed/following{/other_user}", "gists_url": "https://api.github.com/users/aashiqmuhamed/gists{/gist_id}", "starred_url": "https://api.github.com/users/aashiqmuhamed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aashiqmuhamed/subscriptions", "organizations_url": "https://api.github.com/users/aashiqmuhamed/orgs", "repos_url": "https://api.github.com/users/aashiqmuhamed/repos", "events_url": "https://api.github.com/users/aashiqmuhamed/events{/privacy}", "received_events_url": "https://api.github.com/users/aashiqmuhamed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts Could you merge this PR please?" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR includes decoder_attention_mask as an argument in the prepare_inputs_for_generation function, helping enable the use of custom attention masks in the decoder. Duplicate of https://github.com/huggingface/transformers/pull/22819 @gante @amyeroberts
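A simplified sketch of the shape of this change, stripped of the model class for brevity and not the literal diff:

```python
import torch

def prepare_inputs_for_generation(
    input_ids,
    past_key_values=None,
    attention_mask=None,
    decoder_attention_mask=None,
    encoder_outputs=None,
    **kwargs,
):
    if past_key_values is not None:
        input_ids = input_ids[:, -1:]  # with a cache, only the last token is new
    return {
        "decoder_input_ids": input_ids,
        "past_key_values": past_key_values,
        "encoder_outputs": encoder_outputs,
        "attention_mask": attention_mask,
        "decoder_attention_mask": decoder_attention_mask,  # now forwarded to the model
    }

inputs = prepare_inputs_for_generation(
    torch.tensor([[0, 5, 7]]),
    decoder_attention_mask=torch.tensor([[1, 1, 1]]),
)
print(inputs["decoder_attention_mask"])  # custom mask survives into the forward pass
```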
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22835/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22835", "html_url": "https://github.com/huggingface/transformers/pull/22835", "diff_url": "https://github.com/huggingface/transformers/pull/22835.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22835.patch", "merged_at": 1681999536000 }
https://api.github.com/repos/huggingface/transformers/issues/22834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22834/comments
https://api.github.com/repos/huggingface/transformers/issues/22834/events
https://github.com/huggingface/transformers/pull/22834
1,673,290,366
PR_kwDOCUB6oc5OlpOS
22,834
fix CLAP integration tests
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I don't have merge rights, so if all is good, feel free to merge. :-) " ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? I noticed that the CLAP feature extractor tests were not being run, and that once enabled, they fail. This PR fixes these tests. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22834", "html_url": "https://github.com/huggingface/transformers/pull/22834", "diff_url": "https://github.com/huggingface/transformers/pull/22834.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22834.patch", "merged_at": 1682071455000 }
https://api.github.com/repos/huggingface/transformers/issues/22833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22833/comments
https://api.github.com/repos/huggingface/transformers/issues/22833/events
https://github.com/huggingface/transformers/pull/22833
1,673,278,376
PR_kwDOCUB6oc5OlmpP
22,833
Update accelerate version + warning check fix
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" }, { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR bumps the required accelerate version and flips the logic of the distributed-mode check so that the warning is accurate. Fixes https://github.com/huggingface/transformers/issues/22816 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22833", "html_url": "https://github.com/huggingface/transformers/pull/22833", "diff_url": "https://github.com/huggingface/transformers/pull/22833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22833.patch", "merged_at": 1681836693000 }
https://api.github.com/repos/huggingface/transformers/issues/22832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22832/comments
https://api.github.com/repos/huggingface/transformers/issues/22832/events
https://github.com/huggingface/transformers/issues/22832
1,673,254,590
I_kwDOCUB6oc5ju9a-
22,832
WER = 100% !! (Whisper medium)
{ "login": "Seif-aber", "id": 96656595, "node_id": "U_kgDOBcLc0w", "avatar_url": "https://avatars.githubusercontent.com/u/96656595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Seif-aber", "html_url": "https://github.com/Seif-aber", "followers_url": "https://api.github.com/users/Seif-aber/followers", "following_url": "https://api.github.com/users/Seif-aber/following{/other_user}", "gists_url": "https://api.github.com/users/Seif-aber/gists{/gist_id}", "starred_url": "https://api.github.com/users/Seif-aber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Seif-aber/subscriptions", "organizations_url": "https://api.github.com/users/Seif-aber/orgs", "repos_url": "https://api.github.com/users/Seif-aber/repos", "events_url": "https://api.github.com/users/Seif-aber/events{/privacy}", "received_events_url": "https://api.github.com/users/Seif-aber/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Seif-aber, thanks for raising this issue! \r\n\r\nSo that we can best help, could you share the running environment: run `transformers-cli env` in the terminal and copy-paste the output. \r\n\r\nHave you looked any of the inputs to/ outputs of the model when this occurs? After the model has finished training, if you feed a single sample to the model to predict in eval model, what does the prediction look like? \r\n", "Hey @Seif-aber! I believe I answered a duplicate of your question on the Hugging Face Hub earlier today: https://huggingface.co/spaces/openai/whisper/discussions/84#64466139e113660053727da7\r\n\r\nMy suggestions were similar to those of @amyeroberts - let's take a look at the predictions the model is making to work out what's going on.", "Addressed on the HF Hub: https://huggingface.co/spaces/openai/whisper/discussions/84#644aa699af97dfd24c0e0767" ]
1,681
1,683
1,683
NONE
null
Hello everyone, I am having an issue when fine-tuning OpenAI's Whisper Medium on Mozilla's Common Voice 11 dataset for Arabic. The training and validation loss are both decreasing, but the WER stays at 100% after some steps (especially once the loss drops below 1). The model appears to perform well, so the WER seems to be miscalculated. ![hf_issue](https://user-images.githubusercontent.com/96656595/232808398-936c0e2b-f8b3-4289-a6bd-f21f64236efe.png) Notes: - This error only happens with the medium model; the other models (small, tiny, large-v2, etc.) work fine. - I am following the well-known blog post on Whisper fine-tuning (https://huggingface.co/blog/fine-tune-whisper). @sanchit-gandhi
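A quick way to check whether the metric itself is at fault is to decode a few predictions and score them directly; a minimal sketch, assuming the `evaluate` library (the strings below are hypothetical placeholders, not outputs from this run):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Decode a handful of predictions/labels with the tokenizer and eyeball them next
# to the score; a label-padding or normalization mismatch usually shows up here.
predictions = ["a hypothetical transcription"]
references = ["a hypothetical transcription"]
print(wer_metric.compute(predictions=predictions, references=references))  # -> 0.0 when they match
```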
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22832/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22831/comments
https://api.github.com/repos/huggingface/transformers/issues/22831/events
https://github.com/huggingface/transformers/issues/22831
1,673,180,010
I_kwDOCUB6oc5jurNq
22,831
Seq2SeqTrainingArguments.generation_config not json serializable
{ "login": "Natooz", "id": 56734983, "node_id": "MDQ6VXNlcjU2NzM0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natooz", "html_url": "https://github.com/Natooz", "followers_url": "https://api.github.com/users/Natooz/followers", "following_url": "https://api.github.com/users/Natooz/following{/other_user}", "gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natooz/subscriptions", "organizations_url": "https://api.github.com/users/Natooz/orgs", "repos_url": "https://api.github.com/users/Natooz/repos", "events_url": "https://api.github.com/users/Natooz/events{/privacy}", "received_events_url": "https://api.github.com/users/Natooz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Natooz πŸ‘‹ \r\n\r\nWe're more responsible than you, since we are supposed to catch that sort of issue in advance during the review process ;) But that's okay! It's normal to create bugs while trying to move forward at a good pace πŸš€ \r\n\r\nI believe that option 2 (recursively converting attributes to dictionaries) would be preferable. @sgugger, WDYT?", "Option 2 is fine, but just on `Seq2SeqTrainingArguments`, to replace the generation config by `generation_config.to_dict()`." ]
1,681
1,682
1,682
CONTRIBUTOR
null
### System Info πŸ‘‹ Following #22323, the `Seq2SeqTrainingArguments` `generation_config` attribute can be a `GenerationConfig` object. When saving a `Seq2SeqTrainingArguments` object as JSON (as done during training / with tensorboard), it is first converted to a dictionary. But a `GenerationConfig` is not JSON serializable, so this raises an error. ### Who can help? cc @gante @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```Python from transformers import GenerationConfig, Seq2SeqTrainingArguments generation_config = GenerationConfig( max_new_tokens=64, top_k=20, top_p=0.9, ) gen_training_args = Seq2SeqTrainingArguments(output_dir="out", generation_config=generation_config) as_dict = gen_training_args.to_dict() as_json_str = gen_training_args.to_json_string() # error here ``` ### Expected behavior To fix this, two possible solutions are: 1. Modify the expected types of `generation_config` to `[str, Path, dict]`, possibly converting arguments passed as `GenerationConfig` to dictionaries in `__post_init__`, and modify the behavior of `Seq2SeqTrainer.load_generation_config` to handle dictionaries; 2. Make `Seq2SeqTrainingArguments` override [`to_dict()`](https://github.com/huggingface/transformers/blob/03462875cc2d6506eb66a74de7d19b93ce968596/src/transformers/training_args.py#L1833) (or directly modify it in `TrainingArguments`), to recursively convert non-JSON-serializable attributes to dictionaries. Which do you think sounds better? Or maybe you have a better one. In any case I can handle it, as I feel responsible for this error.
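For option 2, scoped to `Seq2SeqTrainingArguments` as suggested in the comments, a minimal sketch (the subclass name is made up for illustration; the real fix would live in the library class itself):

```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments


class PatchedSeq2SeqTrainingArguments(Seq2SeqTrainingArguments):
    def to_dict(self):
        d = super().to_dict()
        # Swap the non-JSON-serializable GenerationConfig for its own dict form
        if isinstance(d.get("generation_config"), GenerationConfig):
            d["generation_config"] = d["generation_config"].to_dict()
        return d
```

With this, `to_json_string()` (which serializes the output of `to_dict()`) no longer trips over the `GenerationConfig` attribute.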
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22831/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22830/comments
https://api.github.com/repos/huggingface/transformers/issues/22830/events
https://github.com/huggingface/transformers/pull/22830
1,673,174,259
PR_kwDOCUB6oc5OlQUV
22,830
[i18n-KO] Translated `accelerate.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,681
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? Translated the `accelerate.mdx` file of the documentation to Korean. Thank you in advance for your review:) Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- As a pre-submission checklist, it might be even better to also wrap PseudoLab's own checklist in <details>. --> ## Who can review? <!-- Please reveal the comment below, which requests a review from Hugging Face staff, only after the review with the PseudoLab team members is complete! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22830/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22830", "html_url": "https://github.com/huggingface/transformers/pull/22830", "diff_url": "https://github.com/huggingface/transformers/pull/22830.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22830.patch", "merged_at": 1682336945000 }
https://api.github.com/repos/huggingface/transformers/issues/22829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22829/comments
https://api.github.com/repos/huggingface/transformers/issues/22829/events
https://github.com/huggingface/transformers/issues/22829
1,673,144,640
I_kwDOCUB6oc5juilA
22,829
Add CLIP-ViP
{ "login": "HellwayXue", "id": 42398002, "node_id": "MDQ6VXNlcjQyMzk4MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/42398002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HellwayXue", "html_url": "https://github.com/HellwayXue", "followers_url": "https://api.github.com/users/HellwayXue/followers", "following_url": "https://api.github.com/users/HellwayXue/following{/other_user}", "gists_url": "https://api.github.com/users/HellwayXue/gists{/gist_id}", "starred_url": "https://api.github.com/users/HellwayXue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HellwayXue/subscriptions", "organizations_url": "https://api.github.com/users/HellwayXue/orgs", "repos_url": "https://api.github.com/users/HellwayXue/repos", "events_url": "https://api.github.com/users/HellwayXue/events{/privacy}", "received_events_url": "https://api.github.com/users/HellwayXue/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Cool model, I've contributed X-CLIP in the past: https://huggingface.co/docs/transformers/model_doc/xclip which is an extension of CLIP for video-language pretraining. Looks like CLIP-ViP focuses more on retrieval.\r\n\r\nLooks like a great candidate for a first model contribution as the implementation is already in HF format.", "Hi, I'd like to help out getting this model integrated.\r\n\r\n", "I have a general question about unit testing. The implementation guidelines indicate that the HF implementation should align with the reference model to a tolerance of .001, but I don't see that tested in all models.\r\n\r\nIn my PR I'll include an integration test analogous to clip's [test](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/clip/test_modeling_clip.py#L709-L737).\r\n\r\nBut I've noticed some model's don't seem to do this kind of integration test. (For example, I don't see an analogous test in [gptneo](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/gpt_neo/test_modeling_gpt_neo.py#L4)\r\n\r\nOut of curiosity, why do some models not have these kinds of integration tests?", "Yes, ideally GPT-Neo also has integration tests that test exact logit values. However you can see [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/gpt_neo/test_modeling_gpt_neo.py#L519) that expected output IDs and generated texts are tested.\r\n\r\nBut in any case it's always best to have an expected slice of the logits in the integration test.", "Thanks for the clarification!\r\n\r\nI have another question, the reference implementation reuses CLIPConfig, CLIPTextConfig, and CLIPVisionConfig directly.\r\n\r\nCan we reuse them (via importing) in the PR directly as well? Or should we copy-paste these files and comment \"Copied from transformers.clip...\" at the top?", "In that case, you can copy the classes, call them `CLIPVipConfig`, `CLIPVipTextConfig`, etc. and add `Copied from` on top of them. If you then run `make fix-copies` from the root of the repository, all files will automatically be copied to make sure they stay consistent.\r\n\r\nNote that you can copy entire classes, like so:\r\n```\r\n# Copied from transformers.models.clip.configuration_clip.CLIPConfig\r\nclass CLIPVipConfig(...)\r\n```\r\n\r\nbut also place them on top of methods, in case only a method is the same but the class is different:\r\n\r\n```\r\nclass CLIPVipConfig(...)\r\n \r\n # Copied from transformers.models.clip.configuration_clip.CLIPConfig.__init__\r\n def __init__(config):", "Got it, thanks for the quick response!", "Any updates here? Looking for a good video embedding model!", "Any news on this?", "I had a PR I got started on but I'll be too busy to get it merged. \r\nWhen I last worked on it I think I was pretty close, but I had some CLIP docs I still had to update to be specific to CLIPViP.\r\n\r\nBut it has been long enough that the code would require major reworking to work with the changes to transformers" ]
1,681
1,707
null
NONE
null
### Model description [CLIP-ViP](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP) is a video-language model that starts from the pre-trained image-text model [CLIP](https://openai.com/blog/clip/) and is then further pre-trained (post-pretraining) on the large-scale video-text dataset [HD-VILA-100M](https://github.com/microsoft/XPretrain/tree/main/hd-vila-100m). This work was accepted at ICLR 2023. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [The official implementation](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP) This repo contains the model implementation and pre-trained weights. @hellwayxue
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22829/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22828/comments
https://api.github.com/repos/huggingface/transformers/issues/22828/events
https://github.com/huggingface/transformers/pull/22828
1,673,076,724
PR_kwDOCUB6oc5Ok7JJ
22,828
XGLM: Fix left-padding (PT and TF)
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts now it is ready :)", "Adding a new keyword argument is not considered breaking, so that's fine!" ]
1,681
1,681
1,681
MEMBER
null
# What does this PR do? Fixes left-padding for XGLM, on PT and TF. It is the usual problem: `position_ids` was not being used/passed around, and the code assumed that position ids = past length + input length (which is not true when left-padding is present). Fixes #22707 While touching XGLM, other issues were sorted: 1. on PT, docs were duplicated (and one of the copies was wrong) 2. XGLM generate with sampling integration test was pinned on CPU (as always, GPU gives different results, which was making our slow CI report an error) 3. TF XLA was failing because of this (left padding support), so now we have TF XLA XGLM πŸ™Œ
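For reference, the generic recipe such fixes rely on is to derive `position_ids` from the attention mask instead of assuming contiguous positions; a minimal sketch of that standard pattern (not XGLM's exact diff):

```python
import torch

# One left-padded sequence: two pad tokens, then three real tokens
attention_mask = torch.tensor([[0, 0, 1, 1, 1]])

# Positions start at 0 on the first real token, regardless of the padding
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)  # pad positions get a dummy value
print(position_ids)  # tensor([[1, 1, 0, 1, 2]])
```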
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22828/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22828", "html_url": "https://github.com/huggingface/transformers/pull/22828", "diff_url": "https://github.com/huggingface/transformers/pull/22828.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22828.patch", "merged_at": 1681981316000 }
https://api.github.com/repos/huggingface/transformers/issues/22827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22827/comments
https://api.github.com/repos/huggingface/transformers/issues/22827/events
https://github.com/huggingface/transformers/issues/22827
1,672,945,088
I_kwDOCUB6oc5jtx3A
22,827
Generate method Time Series Transformer throws an error
{ "login": "yurkoff-mv", "id": 82467993, "node_id": "MDQ6VXNlcjgyNDY3OTkz", "avatar_url": "https://avatars.githubusercontent.com/u/82467993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurkoff-mv", "html_url": "https://github.com/yurkoff-mv", "followers_url": "https://api.github.com/users/yurkoff-mv/followers", "following_url": "https://api.github.com/users/yurkoff-mv/following{/other_user}", "gists_url": "https://api.github.com/users/yurkoff-mv/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurkoff-mv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurkoff-mv/subscriptions", "organizations_url": "https://api.github.com/users/yurkoff-mv/orgs", "repos_url": "https://api.github.com/users/yurkoff-mv/repos", "events_url": "https://api.github.com/users/yurkoff-mv/events{/privacy}", "received_events_url": "https://api.github.com/users/yurkoff-mv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "@amyeroberts -- `TimeSeriesTransformerForPrediction` has its own `generate` method, so I'm passing the tag to @kashif, who implemented it πŸ€— ", "@yurkoff-mv the issue is that the `lags_sequence=[1]` and then the size of the past values and past time features needs to be larger. Let me paste an example below", "```python\r\nimport torch\r\nfrom transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction\r\n\r\n\r\nbatch_size = 32\r\ncontext_length = 100\r\nprediction_length = 1\r\ninput_size = 25\r\nnum_time_features = 1\r\nlags_sequence = [1]\r\n\r\nconfig = TimeSeriesTransformerConfig(prediction_length=prediction_length,\r\n context_length=context_length,\r\n input_size=input_size,\r\n lags_sequence=lags_sequence,\r\n num_time_features=num_time_features,\r\n num_static_categorical_features=0,\r\n num_static_real_features=0,\r\n num_dynamic_real_features=0,\r\n embedding_dimension=64,\r\n encoder_ffn_dim=32,\r\n decoder_ffn_dim=32,\r\n encoder_attention_heads=2,\r\n decoder_attention_heads=2,\r\n encoder_layers=2,\r\n decoder_layers=2,\r\n is_encoder_decoder=True,\r\n activation_function=\"gelu\",\r\n d_model=64,\r\n dropout=0.1,\r\n encoder_layerdrop=0.1,\r\n decoder_layerdrop=0.1,\r\n attention_dropout=0.1,\r\n activation_dropout=0.1,\r\n num_parallel_samples=100,\r\n init_std=0.02\r\n )\r\n\r\nmodel = TimeSeriesTransformerForPrediction(config)\r\n\r\n\r\n# input past seq length is context_length plus largest lag value:\r\noutputs = model.generate(past_values=torch.randn((batch_size, context_length+max(lags_sequence), input_size)),\r\n past_time_features=torch.randn((batch_size, context_length+max(lags_sequence), num_time_features)),\r\n past_observed_mask=torch.ones((batch_size, context_length+max(lags_sequence), input_size)),\r\n future_time_features=torch.randn((batch_size, prediction_length, num_time_features)),\r\n )\r\n\r\nprint(outputs.keys())\r\n\r\noutputs['sequences'].shape\r\n# torch.Size([32, 100, 1, 25]) [batch_size, num_parallel_samples, prediction_length, input_size]\r\n```", "Thank you! It's Working for me!" ]
1,681
1,698
1,681
NONE
null
### System Info - `transformers` version: 4.28.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker, @younesbelkada, @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction batch_size = 32 context_length = 100 prediction_length = 1 input_size = 25 num_time_features = 1 config = TimeSeriesTransformerConfig(prediction_length=prediction_length, context_length=context_length, input_size=input_size, lags_sequence=[0], num_time_features=num_time_features, num_static_categorical_features=0, num_static_real_features=0, num_dynamic_real_features=0, embedding_dimension=64, encoder_ffn_dim=32, decoder_ffn_dim=32, encoder_attention_heads=2, decoder_attention_heads=2, encoder_layers=2, decoder_layers=2, is_encoder_decoder=True, activation_function="gelu", d_model=64, dropout=0.1, encoder_layerdrop=0.1, decoder_layerdrop=0.1, attention_dropout=0.1, activation_dropout=0.1, num_parallel_samples=100, init_std=0.02 ) model = TimeSeriesTransformerForPrediction(config) outputs = model.generate(past_values=torch.empty((batch_size, context_length, input_size)), past_time_features=torch.empty((batch_size, context_length, num_time_features)), past_observed_mask=torch.ones((batch_size, context_length, input_size)), future_time_features=torch.empty((batch_size, prediction_length, input_size)), ) print(outputs.keys()) ``` ``` File ".\venv\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1807, in generate decoder_input = torch.cat((reshaped_lagged_sequence, repeated_features[:, : k + 1]), dim=-1) RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 100 but got size 1 for tensor number 1 in the list. ``` Error in this row: ``` decoder_input = torch.cat((reshaped_lagged_sequence, repeated_features[:, : k + 1]), dim=-1) ``` An attempt to concatenate tensors with dimensions `[3200, 100, 25]` and `[3200, 1, 75]`. ### Expected behavior I expected to get the correct result of the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22827/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22826/comments
https://api.github.com/repos/huggingface/transformers/issues/22826/events
https://github.com/huggingface/transformers/pull/22826
1,672,784,257
PR_kwDOCUB6oc5Oj8aq
22,826
Fix `test_eos_token_id_int_and_list_top_k_top_sampling`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? In #22204, I updated the expected value in `test_eos_token_id_int_and_list_top_k_top_sampling` to pass CircleCI. However, the daily CI fails with that new value. It turns out that we need a seed that gives the same generation output (at a minimum, the same output length) on both CPU and GPU machines. The difference very likely comes from numerical differences that grow larger after a certain number of generation steps. ### Remark With seed `0`, the output `generated_tokens[0]` is: - `cpu`: `[ 40, 416, 79, 12, 230, 89, 231, 432, 301, 212, 933, 225, 33, 33, 846]` - `gpu`: `[ 40, 416, 79, 12, 230, 89, 231, 432, 301, 212, 933, 225, 476, 682, 319, 832, 873, 853, 873, 832]`
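For context, this is roughly how a seed pins down a sampling run; a sketch with a tiny test checkpoint rather than the actual test (assuming `hf-internal-testing/tiny-random-gpt2` is reachable). Even with the seed fixed, CPU and GPU kernels can diverge after enough steps, which is what the test has to tolerate:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2")
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")

set_seed(0)  # pins the python/numpy/torch RNGs; outputs may still differ across devices
inputs = tokenizer("Hello there", return_tensors="pt")
generated = model.generate(**inputs, do_sample=True, top_k=10, max_new_tokens=10)
print(generated[0].tolist())
```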
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22826", "html_url": "https://github.com/huggingface/transformers/pull/22826", "diff_url": "https://github.com/huggingface/transformers/pull/22826.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22826.patch", "merged_at": 1681826692000 }
https://api.github.com/repos/huggingface/transformers/issues/22825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22825/comments
https://api.github.com/repos/huggingface/transformers/issues/22825/events
https://github.com/huggingface/transformers/issues/22825
1,672,756,881
I_kwDOCUB6oc5jtD6R
22,825
Not work cache_dir of AutoTokenizer.from_pretrained('gpt2')
{ "login": "irene622", "id": 62585026, "node_id": "MDQ6VXNlcjYyNTg1MDI2", "avatar_url": "https://avatars.githubusercontent.com/u/62585026?v=4", "gravatar_id": "", "url": "https://api.github.com/users/irene622", "html_url": "https://github.com/irene622", "followers_url": "https://api.github.com/users/irene622/followers", "following_url": "https://api.github.com/users/irene622/following{/other_user}", "gists_url": "https://api.github.com/users/irene622/gists{/gist_id}", "starred_url": "https://api.github.com/users/irene622/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/irene622/subscriptions", "organizations_url": "https://api.github.com/users/irene622/orgs", "repos_url": "https://api.github.com/users/irene622/repos", "events_url": "https://api.github.com/users/irene622/events{/privacy}", "received_events_url": "https://api.github.com/users/irene622/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @irene622, thanks for raising this issue! \r\n\r\n`cache_dir` isn't an attribute of the class, and so calling `tokenizer.cache_dir` will raise an error. \r\n\r\nYou can find the cache directory, importing from utils: \r\n```python\r\nfrom transformers.utils import TRANSFORMERS_CACHE\r\n```\r\n\r\nWhen a tokenizer is created, should have the `name_or_path` attribute set, which will tell you from which model repo, or path it was loaded from. \r\n```python\r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"xlm-mlm-en-2048\")\r\n>>> tokenizer.name_or_path\r\n'xlm-mlm-en-2048'\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
### System Info My transformers version is 4.11.3, Python version is 3.8.5, on Ubuntu 20.04.1. I want to know the cache directory used when downloading AutoTokenizer.from_pretrained('gpt2'). I run the code below ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('gpt2') tokenizer.cache_dir ``` and the result is an `AttributeError`: ```python Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'GPT2TokenizerFast' object has no attribute 'cache_dir' ``` When I run `tokenizer.cache_dir()`, the result is the same `AttributeError`. The downloaded tokenizer is from CodeParrot. CodeParrot lives in `transformers/examples/research_projects/codeparrot/`, and `codeparrot/scripts/bpe_training.py` downloads `AutoTokenizer.from_pretrained('gpt2')`. How can I get the cache directory path of the tokenizer? What is my problem? I want to get it from a method or attribute of the tokenizer, not by hard-coding the path. (I already found that ~/.cache/huggingface/transformers holds the cache files.) If possible, I would also like to know how the tokenizer uses the three cached files: the .json file, the .lock file, and the last file with no extension. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction <img width="659" alt="Screenshot 2023-04-18 at 6 37 54 PM" src="https://user-images.githubusercontent.com/62585026/232737073-b317ae95-5ce8-4545-99c8-9239ed39d29c.png"> The tokenizer code in CodeParrot. My running code is <img width="470" alt="Screenshot 2023-04-18 at 6 38 32 PM" src="https://user-images.githubusercontent.com/62585026/232737239-a7b51916-4d9b-4e9e-ae99-fba31abf0f1c.png"> and the scripts/bpe_training.py code is <img width="722" alt="Screenshot 2023-04-18 at 6 39 41 PM" src="https://user-images.githubusercontent.com/62585026/232737566-d70052e3-9ed4-40ab-9ef6-1030a80a36a4.png"> ### Expected behavior I want to get the cache directory path of the downloaded tokenizer from a method or attribute of the tokenizer, not by hard-coding the path. (I already found that ~/.cache/huggingface/transformers holds the cache files.) Moreover, if possible, I would like to know how the three files are used: the .json file, the .lock file, and the last file with no extension.
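One programmatic way to answer the question; a sketch assuming a recent `huggingface_hub` with the repo-based cache layout (older transformers releases such as 4.11.3 use a flat hashed layout that this helper does not cover):

```python
from huggingface_hub import scan_cache_dir

# Walk the local hub cache and report where the gpt2 files actually live on disk
for repo in scan_cache_dir().repos:
    if repo.repo_id == "gpt2":
        print(repo.repo_path, f"{repo.size_on_disk} bytes")
```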
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22825/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22824/comments
https://api.github.com/repos/huggingface/transformers/issues/22824/events
https://github.com/huggingface/transformers/issues/22824
1,672,715,434
I_kwDOCUB6oc5js5yq
22,824
Allow initializing HuggingFaceEmbeddings from the cached weight
{ "login": "nicolefinnie", "id": 15970573, "node_id": "MDQ6VXNlcjE1OTcwNTcz", "avatar_url": "https://avatars.githubusercontent.com/u/15970573?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicolefinnie", "html_url": "https://github.com/nicolefinnie", "followers_url": "https://api.github.com/users/nicolefinnie/followers", "following_url": "https://api.github.com/users/nicolefinnie/following{/other_user}", "gists_url": "https://api.github.com/users/nicolefinnie/gists{/gist_id}", "starred_url": "https://api.github.com/users/nicolefinnie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nicolefinnie/subscriptions", "organizations_url": "https://api.github.com/users/nicolefinnie/orgs", "repos_url": "https://api.github.com/users/nicolefinnie/repos", "events_url": "https://api.github.com/users/nicolefinnie/events{/privacy}", "received_events_url": "https://api.github.com/users/nicolefinnie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue should be fixed in `LangChain` sorry for the misreport." ]
1,681
1,681
1,681
NONE
null
### Feature request ### Suggestion The change only touches a few lines in `__init__()`: ```python class HuggingFaceEmbeddings(BaseModel, Embeddings): """Wrapper around sentence_transformers embedding models. To use, you should have the ``sentence_transformers`` python package installed. Example: .. code-block:: python from langchain.embeddings import HuggingFaceEmbeddings model_name = "sentence-transformers/all-mpnet-base-v2" hf = HuggingFaceEmbeddings(model_name=model_name) """ client: Any #: :meta private: model_name: str = DEFAULT_MODEL_NAME """Model name to use.""" def __init__(self, cache_folder=None, **kwargs: Any): """Initialize the sentence_transformer.""" super().__init__(**kwargs) try: import sentence_transformers self.client = sentence_transformers.SentenceTransformer(model_name_or_path=self.model_name, cache_folder=cache_folder) except ImportError: raise ValueError( "Could not import sentence_transformers python package. " "Please install it with `pip install sentence_transformers`." ) class Config: """Configuration for this pydantic object.""" extra = Extra.forbid def embed_documents(self, texts: List[str]) -> List[List[float]]: """Compute doc embeddings using a HuggingFace transformer model. Args: texts: The list of texts to embed. Returns: List of embeddings, one for each text. """ texts = list(map(lambda x: x.replace("\n", " "), texts)) embeddings = self.client.encode(texts) return embeddings.tolist() def embed_query(self, text: str) -> List[float]: """Compute query embeddings using a HuggingFace transformer model. Args: text: The text to embed. Returns: Embeddings for the text. """ text = text.replace("\n", " ") embedding = self.client.encode(text) return embedding.tolist() ``` ### Usage ```python embedding_model = HuggingFaceEmbeddings(model_name=model_name, cache_folder=cache_folder) ``` ### Motivation Right now, `HuggingFaceEmbeddings` doesn't support loading an embedding model's weights from a local cache folder; it downloads the weights every time. Fixing this would be low-hanging fruit. ### Your contribution I can submit a PR if this request makes sense, and I've read `CONTRIBUTING.MD`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22823/comments
https://api.github.com/repos/huggingface/transformers/issues/22823/events
https://github.com/huggingface/transformers/pull/22823
1,672,665,082
PR_kwDOCUB6oc5OjjKL
22,823
Fix Past CI not running against the latest `main`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? Fix Past CI not running against the latest `main`. See comment in the changes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22823", "html_url": "https://github.com/huggingface/transformers/pull/22823", "diff_url": "https://github.com/huggingface/transformers/pull/22823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22823.patch", "merged_at": 1681825302000 }
https://api.github.com/repos/huggingface/transformers/issues/22822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22822/comments
https://api.github.com/repos/huggingface/transformers/issues/22822/events
https://github.com/huggingface/transformers/issues/22822
1,672,645,388
I_kwDOCUB6oc5jsosM
22,822
Size of saved model checkpoints after trainer.train() is much larger when using trainer with deepspeed stage2
{ "login": "ArvinZhuang", "id": 46237844, "node_id": "MDQ6VXNlcjQ2MjM3ODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArvinZhuang", "html_url": "https://github.com/ArvinZhuang", "followers_url": "https://api.github.com/users/ArvinZhuang/followers", "following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}", "gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions", "organizations_url": "https://api.github.com/users/ArvinZhuang/orgs", "repos_url": "https://api.github.com/users/ArvinZhuang/repos", "events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}", "received_events_url": "https://api.github.com/users/ArvinZhuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[ { "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false } ]
[ "cc @stas00 ", "deepspeed saves the optimizer states as well as fp32 master weights, so of course the checkpoint folder is larger. look at the contents of the saved checkpoint folder.\r\n\r\nI'm not quite sure what the problem is.", "@stas00 thanks for the reply. are these states are saved in the pytorch_model.bin file?", "no, they are saved in their own files under `global_step*`. You might want to inspect the contents of the folder. \r\n\r\nPlease feel free report the full listing and their sizes here if you'd like to continue this discussion more specifically.", "Hi, here are the file sizes in each folder:\r\n\r\n```bash\r\ndu -a -h --max-depth=1 test1\r\n496K test1/tokenizer.model\r\n512 test1/config.json\r\n32K test1/pytorch_model.bin.index.json\r\n16K test1/training_args.bin\r\n512 test1/tokenizer_config.json\r\n512 test1/special_tokens_map.json\r\n9.2G test1/pytorch_model-00001-of-00003.bin\r\n9.3G test1/pytorch_model-00002-of-00003.bin\r\n6.7G test1/pytorch_model-00003-of-00003.bin\r\n512 test1/generation_config.json\r\n26G test1\r\n\r\ndu -a -h --max-depth=1 test2\r\n496K test2/tokenizer.model\r\n512 test2/config.json\r\n32K test2/pytorch_model.bin.index.json\r\n16K test2/training_args.bin\r\n512 test2/tokenizer_config.json\r\n512 test2/special_tokens_map.json\r\n26G test2/pytorch_model-00001-of-00003.bin\r\n26G test2/pytorch_model-00002-of-00003.bin\r\n26G test2/pytorch_model-00003-of-00003.bin\r\n512 test2/generation_config.json\r\n76G test2\r\n```\r\n\r\nSo, the pytorch_model.bin files are much larger. Although there is a max file size of 10g that has been set for the second save, it still exceeds the file size. I guess something is wrong there?", "> no, they are saved in their own files under `global_step*`. You might want to inspect the contents of the folder.\r\n> \r\n> Please feel free report the full listing and their sizes here if you'd like to continue this discussion more specifically.\r\n\r\nI call trainer.save_model() manually and Im using stage2, so `global_step*` is not created. but indeed these folders will be created in checkpoints saving during training. Btw, is there any way to skip saving `global_step*` for stage2? this folder is extremely large and I think may not necessarily be needed for fine-tune cases.", "oh, thank you! now that you're showing the actual file sizes, it's much easier to see what you're talking about. Indeed this looks wrong.\r\n\r\nI have seen this happening in one situation where saving not updating the tensor's data structure. I wrote a script to fix that. Can you run this script and see if the shrink to a normal size?\r\nhttps://github.com/stas00/toolbox/blob/master/pytorch/torch-checkpoint-shrink.py\r\n\r\nThen we can look at the cause.", "Hi @stas00 seems your tool can only support `.pt` files? can you give me more instructions on how to use it for transformer checkpoints folder? thanks!", "\r\n\r\n\r\n> Hi @stas00 seems your tool can only support `.pt` files? can you give me more instructions on how to use it for transformer checkpoints folder? thanks!\r\n\r\nNever mind, I modified your script and it works now. Indeed it gets back to the correct size after shrinking:\r\n\r\n```bash\r\npython3 torch-checkpoint-shrink.py --checkpoint_dir test2/ --patterns \"pytorch_model*.bin\"\r\nProcessing zero checkpoint 'test2/'\r\n-> test2/pytorch_model-00001-of-00003.bin\r\n-> test2/pytorch_model-00002-of-00003.bin\r\n-> test2/pytorch_model-00003-of-00003.bin\r\nDone. 
Before 77115.10MB, after 25705.12MB, saved 51409.98MB\r\n\r\ndu -a -h --max-depth=1 test2\r\n496K test2/tokenizer.model\r\n512 test2/config.json\r\n32K test2/pytorch_model.bin.index.json\r\n16K test2/training_args.bin\r\n512 test2/tokenizer_config.json\r\n512 test2/special_tokens_map.json\r\n9.2G test2/pytorch_model-00001-of-00003.bin\r\n9.3G test2/pytorch_model-00002-of-00003.bin\r\n6.7G test2/pytorch_model-00003-of-00003.bin\r\n512 test2/generation_config.json\r\n26G test2\r\n\r\n```\r\nSo I bet the problem is this...", "Wonderful. It was fixed in PP saving code in Deepspeed at https://github.com/microsoft/DeepSpeed/pull/1324 when I first seen this problem in Megatron-Deepspeed a year ago.\r\n\r\nSo probably need to do the same for ZeRO. Would you like to try replicating the above fix for ZeRO? Basically the need is to reclone the tensors, so they are recreated with the final actual size of the storage.\r\n\r\nIt should be pretty simple to do, by applying the same change of the PR above to this line:\r\n\r\nhttps://github.com/microsoft/DeepSpeed/blob/036c5d6d7b6028853a4e15ef3f5df466ba335f33/deepspeed/runtime/checkpoint_engine/torch_checkpoint_engine.py#L20\r\n\r\nand then test that your issue goes away, file a PR with Deepspeed and become a Deepspeed committer ;)\r\n\r\n", "actually, it will require a bit of efficiency changes to it. PP was already having small `state_dict` so it wasn't a problem to clone tensors in small groups. But here it'd be very expensive as it'd end up having 2 copies of the model, which can be huge. So I won't use dict comprehension and instead loop normally over the `state_dict` and clone and immediately overwrite the tensor - one tensor at a time. So the overhead will be one largest tensor and not 2x `state_dict`", "hmm, but deepspeed doesn't do checkpoint sharding, those shards come from `transformers`:\r\n\r\n```\r\n32K test2/pytorch_model.bin.index.json\r\n9.2G test2/pytorch_model-00001-of-00003.bin\r\n9.3G test2/pytorch_model-00002-of-00003.bin\r\n6.7G test2/pytorch_model-00003-of-00003.bin\r\n```\r\n\r\nSo I am actually not sure that the suggestions I gave you is the right one. I looked at the code you shared, but that's not the code that HF Trainer runs. So we need to do that cloning there instead I think.\r\n", "Yeah, the code I shared is my temporary fix for this issue, using `self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME)` gives the correct size `pytorch_model.bin` file, but indeed will save in a single file, not sharded.", "I think `state_dict` should be re-cloned right after this line:\r\nhttps://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2872\r\n\r\nPlease check if I got to the right code branch, I'm doing it by reading the code - so possibly I got it wrong.\r\n", "> I think `state_dict` should be re-cloned here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2873\r\n> \r\n> Please check if I got to the right code branch, I'm doing it by reading the code - so possibly I got it wrong.\r\n\r\nbut I think here cannot solve for the `PreTrainedModel ` classes? 
Im afraid need to change `save_pretrained` here https://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/modeling_utils.py#L1761 in `PreTrainedModel` if we want to fix for `transformers ` models", "so I tried this in `save_pretrained ` and it works\r\n\r\n```python\r\n# Save the model\r\nif state_dict is None:\r\n # state_dict = model_to_save.state_dict()\r\n orig_state_dict = model_to_save.state_dict()\r\n state_dict = type(orig_state_dict)(\r\n {k: v.clone()\r\n for k,\r\n v in orig_state_dict.items()})\r\n```", "Excellent, but we can't do that in `save_pretrained` since we don't want everybody paying a penalty because of a special case.\r\n\r\nSo let's go up the call stack and find where it needs to be called for the deepspeed case only. I think my suggestion should be around the right place. just need to add `if deepspeed`.\r\n\r\nActually, let's ping @tjruwase - Tunji any idea why we get the tensors bloated in the model during zero-2 w/ optim offload when they are saved? Remember we had that issue in PP in Megatron-Deepspeed and we had to re-clone the model's state dict? https://github.com/microsoft/DeepSpeed/pull/1324 So it seems @ArvinZhuang is hitting this same issue with ZeRO-2. Since the model is not sharded and the saving happens outside of Deepspeed, this is just `torch.save(module.model.state_dict())`, I am not sure how this can be fixed on the deepspeed side.\r\n\r\nThe bloating is about 2.5x times of the real size, you can see the good and the bad cases here: https://github.com/huggingface/transformers/issues/22822#issuecomment-1513853704\r\nand my checkpoint shrinking post-processing workaround restores the normal size.\r\n\r\nDoes this perhaps have anything to do with offloading? But only the optimizer is offloaded here - so I don't see a connection. \r\n\r\n@ArvinZhuang, could you try with a smaller model and test whether the bloating goes away if you don't use offload? And perhaps w/o deepspeed at all just to validate if the issue is indeed coming from deepspeed. But most likely it is.", "Good point @stas00, I have tried several things already.\r\nUsing gpt-2 (a small model) with deepspeed does not have this problem.\r\nLLaMa Without using deepspeed does not have this problem (was using fsdp).\r\nUnfortunately, I don't have enough GPU memory to run without offloading, so I cannot test\r\n\r\n\r\nI can confirm that for llama case the issue comes from here \r\nhttps://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/deepspeed.py#L378\r\n\r\n After giving the model to deepspeed initial then the `model.save_pretrained()` will have the wrong size. Model savings before this line are correct.", "@stas00 Probably we can change this line\r\nhttps://github.com/huggingface/transformers/blob/84a6570e7bce91ba7d18c0782186241c5f1fde75/src/transformers/trainer.py#L2804\r\n\r\nto \r\n\r\n```python\r\n if self.args.should_save:\r\n state_dict = self.model.state_dict()\r\n state_dict = type(state_dict)(\r\n {k: v.clone()\r\n for k,\r\n v in state_dict.items()})\r\n self._save(output_dir, state_dict=state_dict)\r\n```\r\n\r\nThis will only affect saving behavior of deepspeed. and I tested it also works. ", "Excellent. That is the right place, @ArvinZhuang \r\n\r\nBut since the issue comes from Deepspeed, let's see if perhaps the cause can be removed there in the first place, since if we fix it directly in HF Trainer it'll still have this problem in any other training loop. 
Like Accelerate and any custom user training loop. Let's first wait for Tunji to respond.\r\n\r\nThe other option is to file your repro with saving before and after directly at https://github.com/microsoft/DeepSpeed/issues since clearly the issue is coming from there.\r\n\r\nThe shortest repro to send there is probably something like this (untested):\r\n\r\n```\r\nds_config = {\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n },\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n }, \r\n \"train_batch_size\": \"1\",\r\n \"train_micro_batch_size_per_gpu\": \"1\"\r\n}\r\nmodel = ...from_pretrained(\"decapoda-research/llama-7b-hf\")\r\nmodel.save_pretrained(\"before\")\r\ndeepspeed_engine, _* = deepspeed.initialize(model=model, config_params=ds_config)\r\ndeepspeed_engine.module.save_pretrained(\"after\")\r\n```\r\n\r\nplease fill in the missing bits, but I think that's all that is needed. I am not sure if optimizer/schedulers are even needed, but it'll assign the defaults.\r\n\r\nI hope the above indeed reproduces the issue.", "> oh, thank you! now that you're showing the actual file sizes, it's much easier to see what you're talking about. Indeed this looks wrong.\r\n> \r\n> I have seen this happening in one situation where saving not updating the tensor's data structure. I wrote a script to fix that. Can you run this script and see if the shrink to a normal size? https://github.com/stas00/toolbox/blob/master/pytorch/torch-checkpoint-shrink.py\r\n> \r\n> Then we can look at the cause.\r\n\r\nI use the script, but the pt file not change \r\n<img width=\"483\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/233274800-07a9b7e4-ab60-4fc0-8bba-aa6a050a9597.png\">\r\n", "Hi @lw3259111 , what is your setting? 
like which model, deepspeed config, etc.", "@ArvinZhuang \r\nI use llama 33B model and the deepspeed config is :\r\n\r\n```\r\n{\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\"\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupDecayLR\",\r\n \"params\": {\r\n \"total_num_steps\": \"auto\",\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\"\r\n }\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"offload_param\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": false\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 5,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n\r\n```", "Please note the discussion continues here: https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516798523\r\n\r\nWe understand well the cause of the problem - explained at https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516801635\r\n\r\nThis impacts only z1/z2 models that are sharded.\r\n\r\nApparently, FSDP has the same issue.\r\n\r\nSo the 2 workarounds for now are:\r\n\r\n1. edit `save_pretrained` call to do `save_pretrained(..., max_shard_size=100GB)` - this will create a single shard which won't have any bloat - just choose any `max_shard_size` bigger than the model size.\r\n2. Use the full clone solution here https://github.com/huggingface/transformers/issues/22822#issuecomment-1514096667 you might want to move the cloned tensors to cpu - i.e. `v.clone().cpu()` as you are likely not to have enough memory of gpu\r\n", "@stas00 I remember I was using FSDP and it saves the correct size model shards. I feel the issue only happens with deepspeed.", "I was just relaying a report from someone else reporting the same problem with FSDP. Perhaps it depends on circumstances.\r\n\r\nBut it doesn't matter who else has this problem. This one will get fixed as soon as the Deepspeed side provides a utility for shrinking the `state_dict` and makes a new release.\r\n\r\n", "> Please note the discussion continues here: [microsoft/DeepSpeed#3303 (comment)](https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516798523)\r\n> \r\n> We understand well the cause of the problem - explained at [microsoft/DeepSpeed#3303 (comment)](https://github.com/microsoft/DeepSpeed/issues/3303#issuecomment-1516801635)\r\n> \r\n> This impacts only z1/z2 models that are sharded.\r\n> \r\n> Apparently, FSDP has the same issue.\r\n> \r\n> So the 2 workarounds for now are:\r\n> \r\n> 1. edit `save_pretrained` call to do `save_pretrained(..., max_shard_size=100GB)` - this will create a single shard which won't have any bloat - just choose any `max_shard_size` bigger than the model size.\r\n> 2. 
Use the full clone solution here [Size of saved model checkpoints after trainer.train() is much larger when using trainer with deepspeed stage2 #22822 (comment)](https://github.com/huggingface/transformers/issues/22822#issuecomment-1514096667) you might want to move the cloned tensors to cpu - i.e. `v.clone().cpu()` as you are likely not to have enough memory of gpu\r\n\r\n@stas00 when I cloned tensors to CPU, The saved model is only 400M, my code:\r\n\r\n```\r\ndef safe_save_model_for_hf_trainer(trainer: transformers.Trainer,\r\n output_dir: str):\r\n \"\"\"Collects the state dict and dump to disk.\"\"\"\r\n state_dict = trainer.model.state_dict()\r\n if trainer.args.should_save:\r\n cpu_state_dict = {\r\n key: value.cpu()\r\n for key, value in state_dict.items()\r\n }\r\n del state_dict\r\n trainer._save(output_dir, state_dict=cpu_state_dict) # noqa\r\n```", "please reread the comment you quoted - it says `clone` and then optionally move to cpu. Your code is missing the key operation.\r\n\r\n\r\n\r\n", "> please reread the comment you quoted - it says `clone` and then optionally move to cpu. Your code is missing the key operation.\r\n\r\nI am using the following code, but I still cannot save the model properly,code:\r\n```\r\ndef safe_save_model_for_hf_trainer_clone(trainer: transformers.Trainer,\r\n output_dir: str):\r\n \"\"\"Collects the state dict and dump to disk.\"\"\"\r\n state_dict = trainer.model.state_dict()\r\n if trainer.args.should_save:\r\n cpu_state_dict = type(state_dict)(\r\n {k: v.cpu().clone()\r\n for k,\r\n v in state_dict.items()})\r\n del state_dict\r\n trainer._save(output_dir, state_dict=cpu_state_dict) # noqa\r\n```\r\nor \r\n```\r\ndef safe_save_model_for_hf_trainer_clone(trainer: transformers.Trainer,\r\n output_dir: str):\r\n \"\"\"Collects the state dict and dump to disk.\"\"\"\r\n state_dict = trainer.model.state_dict()\r\n if trainer.args.should_save:\r\n cpu_state_dict = type(state_dict)(\r\n {k: v.clone().cpu()\r\n for k,\r\n v in state_dict.items()})\r\n del state_dict\r\n trainer._save(output_dir, state_dict=cpu_state_dict) # noqa\r\n```\r\nthe result:\r\n<img width=\"508\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/234154680-d56eef0a-6358-41bd-b1b1-b574c1c458b2.png\">\r\n<img width=\"525\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/234154727-87d45ac5-6df5-44be-a7a6-592b44aa0abc.png\">\r\n", "@lw3259111 this problem seems only occurs with deepspeed Zero1/2, and a large model saved with shared checkpoints. Your setting and model may not have this issue." ]
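For quick reference, the workaround this thread converges on can be collected into one helper. This is a minimal sketch based on the snippets above, not the final upstream fix: `save_slim_checkpoint` is a hypothetical name, and it reuses the private `Trainer._save` exactly as the code in the comments does.

```python
# Minimal sketch, assuming a Trainer wrapped by DeepSpeed ZeRO-1/2 whose
# saved shards come out bloated. Cloning each tensor (one at a time, moved
# to CPU) recreates it with a right-sized storage before saving.
def save_slim_checkpoint(trainer, output_dir: str):
    state_dict = trainer.model.state_dict()
    slim = type(state_dict)((k, v.clone().cpu()) for k, v in state_dict.items())
    trainer._save(output_dir, state_dict=slim)
```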
1,681
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @stas00 @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm using Trainer with deepspeed integration to fine-tune a Llama model. This is the stage2 config im using: ```json { "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto" } ``` So I'm using zero2 with optimizer offload. I found the size of the model checkpoints after `trainer.train()` become much larger than what they should be. Using official [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) script as an example : ```bash deepspeed --num_gpus=1 run_clm.py \ --num_train_epochs 0.01 \ --model_name_or_path decapoda-research/llama-7b-hf \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size 2 \ --do_train \ --output_dir /tmp/test-plm \ --deepspeed ds_config.json ``` I add these two save_model lines around `trainer.train()` for testing: ```python trainer.save_model("test1") train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model("test2") ``` Now check the size: ```bash du -sh test1 26G test1 du -sh test2 76G test2 ``` Note, I have deleted `global_step*` folder in `test2` before calculating the size. I believe 26G is the correct size for an fp32 llama 7b. So, after training with trainer, the model size is wrong? Interestingly, seems the wrong size model still works with `.from_pretrain`. I have located the issue raised after this [line](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/deepspeed.py#L378), which changed the model assignment in trainer `_inner_training_loop` [here](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/trainer.py#L1733) afterward. After this the model saved by `trainer._save()` will have the wrong size. Does deepspeed engine add some extra things to pytorch_model.bin? is this expected? 
My current solution to this is always using `self.deepspeed.save_16bit_model()` in [trainer.save_model()](https://github.com/huggingface/transformers/blob/dacd34568d1a27b91f84610eab526640ed8f94e0/src/transformers/trainer.py#L2771) for zerostage2: ```python elif self.deepspeed: # this takes care of everything as long as we aren't under zero3 if self.args.should_save: self._save(output_dir) if is_deepspeed_zero3_enabled(): # It's too complicated to try to override different places where the weights dump gets # saved, so since under zero3 the file is bogus, simply delete it. The user should # either user deepspeed checkpoint to resume or to recover full weights use # zero_to_fp32.py stored in the checkpoint. if self.args.should_save: file = os.path.join(output_dir, WEIGHTS_NAME) if os.path.isfile(file): # logger.info(f"deepspeed zero3: removing {file}, see zero_to_fp32.py to recover weights") os.remove(file) # now save the real model if stage3_gather_16bit_weights_on_model_save=True # if false it will not be saved. # This must be called on all ranks if not self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME): logger.warning( "deepspeed.save_16bit_model didn't save the model, since" " stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use" " zero_to_fp32.py to recover weights" ) self.deepspeed.save_checkpoint(output_dir) else: if self.args.should_save: for filename in os.listdir(output_dir): full_filename = os.path.join(output_dir, filename) # If we have a shard file that is not going to be replaced, we delete it, but only from the main process # in distributed settings to avoid race conditions. weights_no_suffix = WEIGHTS_NAME.replace(".bin", "").replace(".safetensors", "") # delete everything start with weights_no_suffix, usually are "pytorch_model". if ( filename.startswith(weights_no_suffix) and os.path.isfile(full_filename) ): os.remove(full_filename) self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME) ``` ### Expected behavior Model checkpoint size should be unchanged after `trainer.train()`
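A quick way to confirm the bloat described above is to compare each saved tensor's element count with the size of its underlying storage. This is a hypothetical diagnostic sketch (the shard filename is taken from the report above); a healthy checkpoint prints nothing.

```python
import torch

# Load one shard of the suspicious checkpoint on CPU and flag any tensor
# whose backing storage is larger than the elements the tensor actually uses.
state_dict = torch.load("test2/pytorch_model-00001-of-00003.bin", map_location="cpu")
for name, t in state_dict.items():
    storage_bytes = t.storage().size() * t.element_size()
    tensor_bytes = t.numel() * t.element_size()
    if storage_bytes > tensor_bytes:
        print(f"{name}: storage {storage_bytes} B > tensor {tensor_bytes} B")
```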
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22822/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22822/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22821/comments
https://api.github.com/repos/huggingface/transformers/issues/22821/events
https://github.com/huggingface/transformers/issues/22821
1,672,556,594
I_kwDOCUB6oc5jsTAy
22,821
Setting fsdp and bf16 doesn't save memory
{ "login": "skye95git", "id": 41561936, "node_id": "MDQ6VXNlcjQxNTYxOTM2", "avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skye95git", "html_url": "https://github.com/skye95git", "followers_url": "https://api.github.com/users/skye95git/followers", "following_url": "https://api.github.com/users/skye95git/following{/other_user}", "gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}", "starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skye95git/subscriptions", "organizations_url": "https://api.github.com/users/skye95git/orgs", "repos_url": "https://api.github.com/users/skye95git/repos", "events_url": "https://api.github.com/users/skye95git/events{/privacy}", "received_events_url": "https://api.github.com/users/skye95git/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @younesbelkada ", "cc @pacman100 as I am not really familiar with FSDP + Trainer yet", "Hello @skye95git, you are using FSDP incorrectly, just setting `fsdp=True` won't reduce memory usage. Please refer:\r\n1. the docs here if you want to use Trainer's arguments: https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel\r\n2. the docs here if you want to use the `accelerate launch` with trainer: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer\r\n\r\n", "> Hello @skye95git, you are using FSDP incorrectly, just setting `fsdp=True` won't reduce memory usage. Please refer:\r\n> \r\n> 1. the docs here if you want to use Trainer's arguments: https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel\r\n> 2. the docs here if you want to use the `accelerate launch` with trainer: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer\r\n\r\nHi @pacman100 thanks for the reply here. However, from https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/trainer.py#L1526C1-L1526C39\r\nit seems that only when XLA enables FSDP, is this correct? If `fsdp_config['xla']` is `None`, how FSDP is used in this version?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,691
1,691
NONE
null
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script? yes - Using distributed or parallel set-up in script? yes ### Who can help? @ArthurZucker @sgu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. download the dataset ``` lang = "Python" import subprocess subprocess.call(["wget", f"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/{lang}.zip"]) subprocess.call(["unzip", f"/content/{lang}.zip"]) !mkdir "log" log_dir = "/content/log" !mkdir "data" data_dir = "/content/data" !mkdir "model" model_dir = "/content/model" !mkdir "tokenizer" tokenizer_dir = "/content/tokenizer" ``` 2. data preprocess ``` import os import json import torch from pathlib import Path from transformers import (Trainer, pipeline, RobertaConfig, TrainingArguments, RobertaForMaskedLM, RobertaTokenizerFast, LineByLineTextDataset, DataCollatorForLanguageModeling) from tokenizers import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing from tokenizers.implementations import ByteLevelBPETokenizer def prepare_text(dir_path): for path in os.listdir(dir_path): os.system(f"gunzip -k {dir_path}/{path}") texts = "" for path in os.listdir(dir_path): if path.endswith(".jsonl"): with open(dir_path + "/" + path, 'r') as f: sample_file = f.readlines() for sample in sample_file: obj = json.loads(sample) texts += obj["original_string"].replace("\n", "").replace("\t", "") + "\n" return texts train1_texts = prepare_text(f"/content/{lang}/final/jsonl/train") train2_texts = prepare_text(f"/content/{lang}/final/jsonl/valid") train_texts = train1_texts + "\n" + train2_texts valid_texts = prepare_text(f"/content/{lang}/final/jsonl/test") for path, text in zip(["train_texts.txt", "valid_texts.txt"], [train_texts, valid_texts]): with open(f"{data_dir}/{path}","w") as f: f.write(text) ``` 3. Train a tokenizer ``` paths = [str(x) for x in Path(f"{data_dir}/").glob("**/*.txt")] tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) tokenizer.save_model(tokenizer_dir) tokenizer = ByteLevelBPETokenizer( "tokenizer/vocab.json", "tokenizer/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) ``` 4. 
Build model ``` config = RobertaConfig( vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_dir, max_len=512) model = RobertaForMaskedLM(config=config) model.num_parameters() train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/train_texts.txt", block_size=128, ) test_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/valid_texts.txt", block_size=128, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset = test_dataset ) trainer.train() trainer.save_model(model_dir) tokenizer.save_pretrained(tokenizer_dir) ``` ### Expected behavior before set fsdp and bf16: ``` training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, ) ``` <img width="417" alt="Snipaste_2023-04-18_15-42-22" src="https://user-images.githubusercontent.com/41561936/232707188-2579965b-92fd-4ba6-87de-b82ca948ec54.png"> after set fsdp and bf16: ``` training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, fsdp=True, bf16=True, ) ``` <img width="415" alt="Snipaste_2023-04-18_15-42-45" src="https://user-images.githubusercontent.com/41561936/232707483-2b89c658-172d-4a23-a7fc-fe40cd1dfe83.png"> The memory usage is not much different and does not achieve the desired effect. Why? I also try to set `per_gpu_train_batch_size=4` when `fsdp=True, bf16=True`: <img width="426" alt="Snipaste_2023-04-18_15-49-23" src="https://user-images.githubusercontent.com/41561936/232708818-efa676d9-4e6b-440a-b0e0-e66e54026da5.png"> Compared with the results of the previous set of experiments, the increase of memory usage is much greater than the increase of batch size. Why?
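As the maintainers note in the comments above, `fsdp=True` on its own does not shard anything. A minimal sketch of what the linked Trainer docs describe instead might look like the following; the flag values (sharding strategy, layer class to auto-wrap) are assumptions for this RoBERTa example and should be checked against the docs for your transformers version.

```python
from transformers import TrainingArguments

# Hedged sketch: enable real FSDP sharding through Trainer arguments rather
# than the bare fsdp=True used in the reproduction above.
training_args = TrainingArguments(
    output_dir="model",
    num_train_epochs=4,
    per_device_train_batch_size=64,
    bf16=True,
    fsdp="full_shard auto_wrap",                        # name a sharding strategy
    fsdp_transformer_layer_cls_to_wrap="RobertaLayer",  # layer class to auto-wrap
)
```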
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22821/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22820/comments
https://api.github.com/repos/huggingface/transformers/issues/22820/events
https://github.com/huggingface/transformers/pull/22820
1,672,493,074
PR_kwDOCUB6oc5Oi-kN
22,820
Add MobileViTv2
{ "login": "shehanmunasinghe", "id": 5057255, "node_id": "MDQ6VXNlcjUwNTcyNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5057255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shehanmunasinghe", "html_url": "https://github.com/shehanmunasinghe", "followers_url": "https://api.github.com/users/shehanmunasinghe/followers", "following_url": "https://api.github.com/users/shehanmunasinghe/following{/other_user}", "gists_url": "https://api.github.com/users/shehanmunasinghe/gists{/gist_id}", "starred_url": "https://api.github.com/users/shehanmunasinghe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shehanmunasinghe/subscriptions", "organizations_url": "https://api.github.com/users/shehanmunasinghe/orgs", "repos_url": "https://api.github.com/users/shehanmunasinghe/repos", "events_url": "https://api.github.com/users/shehanmunasinghe/events{/privacy}", "received_events_url": "https://api.github.com/users/shehanmunasinghe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts please note that this is still a work in progress. I'll let you know when it is ready for your review.", "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts , this is now ready for your review.", "@amyeroberts , any updates?", "Hi @amyeroberts, thanks for your review. I have applied the suggestions and pushed the updated code.", "@amyeroberts, thanks for your feedback and I have now applied the suggested changes." ]
1,681
1,685
1,685
CONTRIBUTOR
null
# What does this PR do? Adds the MobileViTv2 model into transformers library (PS: Work in Progress) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # ([issue](https://github.com/huggingface/transformers/issues/22570)) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22570 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22820", "html_url": "https://github.com/huggingface/transformers/pull/22820", "diff_url": "https://github.com/huggingface/transformers/pull/22820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22820.patch", "merged_at": 1685698622000 }
https://api.github.com/repos/huggingface/transformers/issues/22819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22819/comments
https://api.github.com/repos/huggingface/transformers/issues/22819/events
https://github.com/huggingface/transformers/pull/22819
1,672,462,332
PR_kwDOCUB6oc5Oi3_6
22,819
Include decoder_attention_mask in T5 model inputs
{ "login": "aashiqmuhamed", "id": 17514579, "node_id": "MDQ6VXNlcjE3NTE0NTc5", "avatar_url": "https://avatars.githubusercontent.com/u/17514579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aashiqmuhamed", "html_url": "https://github.com/aashiqmuhamed", "followers_url": "https://api.github.com/users/aashiqmuhamed/followers", "following_url": "https://api.github.com/users/aashiqmuhamed/following{/other_user}", "gists_url": "https://api.github.com/users/aashiqmuhamed/gists{/gist_id}", "starred_url": "https://api.github.com/users/aashiqmuhamed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aashiqmuhamed/subscriptions", "organizations_url": "https://api.github.com/users/aashiqmuhamed/orgs", "repos_url": "https://api.github.com/users/aashiqmuhamed/repos", "events_url": "https://api.github.com/users/aashiqmuhamed/events{/privacy}", "received_events_url": "https://api.github.com/users/aashiqmuhamed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR includes decoder_attention_mask as an argument in the `prepare_inputs_for_generation` function, helping enable the use of custom attention masks in the decoder. @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22819", "html_url": "https://github.com/huggingface/transformers/pull/22819", "diff_url": "https://github.com/huggingface/transformers/pull/22819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22819.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22818/comments
https://api.github.com/repos/huggingface/transformers/issues/22818/events
https://github.com/huggingface/transformers/issues/22818
1,672,431,179
I_kwDOCUB6oc5jr0ZL
22,818
How to use Distill-BERT with different datasets?
{ "login": "sauravtii", "id": 109907638, "node_id": "U_kgDOBo0Otg", "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sauravtii", "html_url": "https://github.com/sauravtii", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "repos_url": "https://api.github.com/users/sauravtii/repos", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing this issue as it's a repeat of #22817 " ]
1,681
1,681
1,681
NONE
null
### System Info - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)? ### Expected behavior Distill-BERT should work with different datasets.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22818/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22817/comments
https://api.github.com/repos/huggingface/transformers/issues/22817/events
https://github.com/huggingface/transformers/issues/22817
1,672,426,424
I_kwDOCUB6oc5jrzO4
22,817
How to use distill-BERT with different datasets?
{ "login": "sauravtii", "id": 109907638, "node_id": "U_kgDOBo0Otg", "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sauravtii", "html_url": "https://github.com/sauravtii", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "repos_url": "https://api.github.com/users/sauravtii/repos", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @sauravtii. Thanks for raising an issue! \r\n\r\nIn general, this is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nI recommend looking at the [NLP course](https://huggingface.co/learn/nlp-course/) which will take you through using and training tokenizers, datasets, and models. ", "@amyeroberts Thanks for your response. I was able to use Distil-BERT with different datasets.\r\n\r\nNow, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:\r\n\r\n`client.py`:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n\r\nDEVICE = \"cuda:1\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"imdb\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-5)\r\n net.train()\r\n for i in range(epochs):\r\n print(\"Epoch: \", i+1)\r\n j = 1\r\n print(\"####################### The length of the trainloader is: \", len(trainloader)) \r\n for batch in trainloader:\r\n print(\"####################### The batch number is: \", j)\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n j += 1\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= 
len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n print({\"loss\": float(loss), \"accuracy\": float(accuracy)})\r\n return float(loss), len(testloader), {\"loss\": float(loss), \"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCan I get any help, please?", "Hi @sauravtii, glad to hear you were able to use a different dataset :) \r\n\r\nAs mentioned above, this is really a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nAs a side note, training time and performance is all relative. To help people help you in the forum, it's best to give as much information as possible e.g. how long the model was training for, logs of the accuracy observed and the behaviour you expect. In the shared script, it looks like the model is only training for a single epoch - I would start with increasing this first. ", "@amyeroberts Thanks for your reponse. I tried searching for the answer to my question in the forums but wasn't able to, therefore I would really appreciate if you can provide me the link to the answer (if you find one in the forums).\r\n\r\nAlso, I have trained the model for a large number of epochs (ranging from 500-1000), and the one mentioned in the script is just for the sake of an example :)", "@sauravtii I don't know if there's an answer in the forums. What I'm suggesting is you post in the forums with your question and people in the community will be able to discuss with you there. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)? ### Expected behavior Distill-BERT should work with different datasets.
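The pattern the question asks about is independent of the dataset: keep the tokenizer and model tied to the same checkpoint and swap only the data. Below is a minimal sketch using the Dutch IMDB dataset linked in the issue, assuming it exposes `text`/`label` columns like the original IMDB.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
# Use the same checkpoint for tokenizer and model so tokenization rules match.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("yhavinga/imdb_dutch")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)
# `tokenized` can now be fine-tuned with Trainer exactly as with the English IMDB set.
```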
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22817/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22816/comments
https://api.github.com/repos/huggingface/transformers/issues/22816/events
https://github.com/huggingface/transformers/issues/22816
1,672,378,335
I_kwDOCUB6oc5jrnff
22,816
Name Error: "Partial State" is not defind
{ "login": "RAravindDS", "id": 85152278, "node_id": "MDQ6VXNlcjg1MTUyMjc4", "avatar_url": "https://avatars.githubusercontent.com/u/85152278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RAravindDS", "html_url": "https://github.com/RAravindDS", "followers_url": "https://api.github.com/users/RAravindDS/followers", "following_url": "https://api.github.com/users/RAravindDS/following{/other_user}", "gists_url": "https://api.github.com/users/RAravindDS/gists{/gist_id}", "starred_url": "https://api.github.com/users/RAravindDS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RAravindDS/subscriptions", "organizations_url": "https://api.github.com/users/RAravindDS/orgs", "repos_url": "https://api.github.com/users/RAravindDS/repos", "events_url": "https://api.github.com/users/RAravindDS/events{/privacy}", "received_events_url": "https://api.github.com/users/RAravindDS/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr ", "@RAravindDS Thanks for reporting. I suspect the issue is coming from the version of accelerate in your environment. Could you: \r\n* Share the running environment info: copy-paste the output from running `transformers-cli env` in your terminal\r\nTHEN\r\n* Upgrade accelerate using `pip install --upgrade accelerate` \r\n* Retry", "As @amyeroberts mentions, please try following those steps. I'll also look at changing the min Accelerate needed/add a check.", "@amyeroberts I ran the code on Colab, and while training the LLM (LMV3), I got the error, Then I downloaded the previous version of the transformer, and it worked fine.Β ", "@RAravindDS Yes, this is because the `PartialState` import was added as a dependency on the transformers development branch yesterday. `PartialState` was added in the 0.17.0 release in accelerate, and so for the development branch of transformers, accelerate >= 0.17.0 is required. \r\n\r\nDowngrading the transformers version removes the code which is importing `PartialState`. ", "I am using the following version of transformer, datasets and huggingface_hub. \r\n\r\n![image](https://user-images.githubusercontent.com/77198742/232941383-cc398bb4-88c0-4a12-9ff1-c59f8c5aa1a6.png)\r\n\r\nI am running into the following error:\r\n\r\n```sh\r\n NameError: name 'PartialState' is not defined.\r\n```\r\n\r\nHow to resolve this issue to work with my versions of the transformer, datasets and huggingface_hub ?", "@gli-mrunal please do `pip install git+https://github.com/huggingface/accelerate` to install the dev version, or `pip install accelerate -U` if you are not using multi-GPUs (such as in Colab). ", "![image](https://user-images.githubusercontent.com/77198742/233088446-1ae2dabb-6c79-4425-ae7f-759c993ce466.png)\r\n", "@gli-mrunal sorry for the typo, there are two c's for accelerate :)", "> ![image](https://user-images.githubusercontent.com/77198742/233088446-1ae2dabb-6c79-4425-ae7f-759c993ce466.png)\r\n\r\nBro, you don't need to worry too much. Please downgrade the version. They are having stable version. Don't stress too much. Previous version working as usual. We changed all our requirements today. Hectic process :( ", "True. `!pip install transformers==4.28.0` for previous version is easier solution. The newer version runs into dependency issues. ", "I tried to run using the following training arguments in Colab.\r\n`training_args = TrainingArguments(\r\n output_dir=*,\r\n num_train_epochs=num_train_epochs,\r\n learning_rate=learning_rate,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n weight_decay=weight_decay,\r\n evaluation_strategy=\"epoch\",\r\n disable_tqdm=False,\r\n logging_steps=logging_steps,\r\n push_to_hub=False,\r\n log_level=\"error\",\r\n save_strategy=\"epoch\",\r\n load_best_model_at_end=True,\r\n)`\r\n\r\nThen the following error occured.\r\n`NameError: name 'PartialState' is not defined`\r\n\r\nI attempted all of above advice, but this error wasn't resolved.\r\nPlease tell me how to fix this error.", "Hi @creek411 install version 4.28.0 of transformers by running this code `!pip install transformers==4.28.0`. Then restart and run all the code( if ur using colab).", "Thank you for your reply.\r\nI tried to install 4.28.0 and run the code. However, this error recurred.\r\nIn this code, I install and use `transformers datasets`. 
\r\nSo should I install `transformers datasets` of previsous version?", "@creek411 the solution would be to do `pip install accelerate` (and as now we have a release, it works OOTB with the normal pypi install), however the fact you have the error means you still probably are installing from dev and there's some cache working in there. You can try `pip uninstall transformers -y`, run your code, make sure it fails because `transformers` isn't installed, then install `transformers` again, either 4.28.0 or 4.29.0 and do `pip install accelerate` as well", "I attempted to do your solution and could avoid the error.\r\nI appreciate for your advise.", "> @creek411 the solution would be to do `pip install accelerate` (and as now we have a release, it works OOTB with the normal pypi install), however the fact you have the error means you still probably are installing from dev and there's some cache working in there. You can try `pip uninstall transformers -y`, run your code, make sure it fails because `transformers` isn't installed, then install `transformers` again, either 4.28.0 or 4.29.0 and do `pip install accelerate` as well\r\n\r\nI get the same error with\r\n```\r\nRequirement already satisfied: accelerate in /usr/local/lib/python3.10/dist-packages (0.19.0)\r\nRequirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.29.1)\r\n```\r\n\r\non Colab\r\n\r\nI had to install accelerate manually.\r\n\r\n`!pip install torch \"argilla\" datasets accelerate transformers setfit`", "I'm getting the same error while using the Transfor4rec library from Nvidia. All the solutions proffered here didn't work for me.\r\nI tried to provide training argument here \r\n\"train_args = T4RecTrainingArguments(local_rank = -1,...\"", "Esto me funciono en colab, pero es importante reiniciar el entorno de ejecuciΓ³n\r\n\r\n!pip uninstall -y -r transformers accelerate\r\n!pip install transformers==4.29.0\r\n!pip install git+https://github.com/huggingface/accelerate", "> This worked for me in colab, but it is important to restart the execution environment\r\n> \r\n> !pip uninstall -y -r transformers accelerate !pip install transformers==4.29.0 !pip install git+https://github.com/huggingface/accelerate\r\n\r\nGracias amigo", "I came from the same error, but the previous is like……Did this mean it's not set to \"cuda\" (I run my code with GPU\r\n''' python\r\nFile ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1333, in TrainingArguments.__post_init__(self)\r\n 1327 if version.parse(version.parse(torch.__version__).base_version) == version.parse(\"2.0.0\") and self.fp16:\r\n 1328 raise ValueError(\"--optim adamw_torch_fused with --fp16 requires PyTorch>2.0\")\r\n 1330 if (\r\n 1331 self.framework == \"pt\"\r\n 1332 and is_torch_available()\r\n-> 1333 and (self.device.type != \"cuda\")\r\n 1334 and (get_xla_device_type(self.device) != \"GPU\")\r\n 1335 and (self.fp16 or self.fp16_full_eval)\r\n 1336 ):\r\n 1337 raise ValueError(\r\n 1338 \"FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation\"\r\n 1339 \" (`--fp16_full_eval`) can only be used on CUDA devices.\"\r\n 1340 )\r\n 1342 if (\r\n 1343 self.framework == \"pt\"\r\n 1344 and is_torch_available()\r\n (...)\r\n 1349 and (self.bf16 or self.bf16_full_eval)\r\n 1350 ):\r\n\r\nFile ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1697, in TrainingArguments.device(self)\r\n 1693 \"\"\"\r\n 1694 The device used by this process.\r\n 1695 \"\"\"\r\n 1696 
requires_backends(self, [\"torch\"])\r\n-> 1697 return self._setup_devices\r\n\r\nFile ~/miniconda3/lib/python3.8/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)\r\n 52 cached = getattr(obj, attr, None)\r\n 53 if cached is None:\r\n---> 54 cached = self.fget(obj)\r\n 55 setattr(obj, attr, cached)\r\n 56 return cached\r\n\r\nFile ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1631, in TrainingArguments._setup_devices(self)\r\n 1629 self._n_gpu = 1\r\n 1630 else:\r\n-> 1631 self.distributed_state = PartialState(backend=self.ddp_backend)\r\n 1632 self._n_gpu = 1\r\n 1633 if not is_sagemaker_mp_enabled():\r\n\r\nNameError: name 'PartialState' is not defined\r\n''' ", "For those having issues, can you tell me more about if you are working in Jupyter, Colab, or in regular Python? Again the solution hasn't changed: in the correct environment you need to make sure that `accelerate` is installed and viewable. To test this in your environment you can try importing it `import accelerate`. If it fails, it's not installed correctly. ", "I'm using Jupyter (as well as the VS Code notebooks extension, which is essentially the same) on Python 3.11 with no venv and the interpreter provided by `asdf`.\r\n\r\nOn re-test, `accelerate` 0.19 _did_ work with `transformers` 4.29, as it turned out; I'm just not accustomed to notebooks and forgot that I needed to restart the kernel to freshen the dependencies. Classic n00b mistake.\r\n\r\nI'm still a bit mystified as to why I had an older `accelerate`, as I had created my entire Python environment on the same day I commented. Possibly, it was a transitive dependency of something else I'd already installed.", "Please also remember to restart the kernel ( Given you are using Colab/Jupyter ) ( I know it is silly but yes ) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,689
1,689
NONE
null
# PartialState is not defined - Your recent release, 4.29.0.dev0, has an issue with the code: the function or method "PartialState" is not defined, and today I am not able to train my model. I downloaded 4.28.0 to resolve this issue. Can you kindly check ASAP? - I am getting this error in the "TrainingArguments" method. - The training arguments script does not define or import the "PartialState" method or function. # Solution: - For now, install the previous stable version of transformers. ```pip install transformers==4.28.0```
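Per the comments on this issue, the root cause is a version mismatch rather than a bug in `TrainingArguments` itself: the transformers dev branch imports `PartialState` from accelerate, which only ships it from release 0.17.0 onwards. A hypothetical environment sanity check:

```python
import accelerate
import transformers

print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)

# Fails with ImportError on accelerate < 0.17.0, which is exactly the
# environment that triggers the NameError reported above.
from accelerate import PartialState
```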
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22816/reactions", "total_count": 15, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/huggingface/transformers/issues/22816/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22815/comments
https://api.github.com/repos/huggingface/transformers/issues/22815/events
https://github.com/huggingface/transformers/pull/22815
1,671,810,614
PR_kwDOCUB6oc5Ogswp
22,815
Mark auto models as important
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22815). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? This PR marks the auto models as important so that the corresponding tests are not skipped. This is what caused a break on main after #22698 was merged. The change in the Korean doc file is a change of line ending, which is currently making it impossible to do anything on main (the remote branch has CRLF line endings but GitHub really wants LF; this shouldn't be possible, but there was a bug when merging the PR touching that file).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22815/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22815", "html_url": "https://github.com/huggingface/transformers/pull/22815", "diff_url": "https://github.com/huggingface/transformers/pull/22815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22815.patch", "merged_at": 1681759982000 }
https://api.github.com/repos/huggingface/transformers/issues/22814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22814/comments
https://api.github.com/repos/huggingface/transformers/issues/22814/events
https://github.com/huggingface/transformers/pull/22814
1,671,737,376
PR_kwDOCUB6oc5OgcyM
22,814
Use code on the Hub from another repo
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? Continuation of #22698 with tests fixed (coming soon).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22814/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22814/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22814", "html_url": "https://github.com/huggingface/transformers/pull/22814", "diff_url": "https://github.com/huggingface/transformers/pull/22814.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22814.patch", "merged_at": 1681839972000 }
https://api.github.com/repos/huggingface/transformers/issues/22813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22813/comments
https://api.github.com/repos/huggingface/transformers/issues/22813/events
https://github.com/huggingface/transformers/pull/22813
1,671,733,917
PR_kwDOCUB6oc5OgcDt
22,813
Revert "Use code on the Hub from another repo"
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22813). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,681
1,681
COLLABORATOR
null
Reverts huggingface/transformers#22698 as it broke three tests on main.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22813/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22813", "html_url": "https://github.com/huggingface/transformers/pull/22813", "diff_url": "https://github.com/huggingface/transformers/pull/22813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22813.patch", "merged_at": 1681755733000 }
https://api.github.com/repos/huggingface/transformers/issues/22812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22812/comments
https://api.github.com/repos/huggingface/transformers/issues/22812/events
https://github.com/huggingface/transformers/pull/22812
1,671,729,214
PR_kwDOCUB6oc5OgbC4
22,812
Ignore, testing CI
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22812). All of your documentation changes will be reflected on that endpoint." ]
1,681
1,685
1,681
CONTRIBUTOR
null
Disregard
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22812/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22812", "html_url": "https://github.com/huggingface/transformers/pull/22812", "diff_url": "https://github.com/huggingface/transformers/pull/22812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22812.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22811/comments
https://api.github.com/repos/huggingface/transformers/issues/22811/events
https://github.com/huggingface/transformers/pull/22811
1,671,619,757
PR_kwDOCUB6oc5OgDJ6
22,811
Simplify update metadata job
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? This should fix the issue with the update metadata job on main. It simplifies the job execution by removing the cache and just doing a dev pip install. Since the job runs on main, we don't really care about the 2-3 minutes the cache would make us gain.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22811", "html_url": "https://github.com/huggingface/transformers/pull/22811", "diff_url": "https://github.com/huggingface/transformers/pull/22811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22811.patch", "merged_at": 1681754060000 }
https://api.github.com/repos/huggingface/transformers/issues/22810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22810/comments
https://api.github.com/repos/huggingface/transformers/issues/22810/events
https://github.com/huggingface/transformers/pull/22810
1,671,584,232
PR_kwDOCUB6oc5Of7YI
22,810
Add ALiBi Support for GPTNeoX - GPTNeoXALiBi
{ "login": "keleog", "id": 11840053, "node_id": "MDQ6VXNlcjExODQwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/11840053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keleog", "html_url": "https://github.com/keleog", "followers_url": "https://api.github.com/users/keleog/followers", "following_url": "https://api.github.com/users/keleog/following{/other_user}", "gists_url": "https://api.github.com/users/keleog/gists{/gist_id}", "starred_url": "https://api.github.com/users/keleog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keleog/subscriptions", "organizations_url": "https://api.github.com/users/keleog/orgs", "repos_url": "https://api.github.com/users/keleog/repos", "events_url": "https://api.github.com/users/keleog/events{/privacy}", "received_events_url": "https://api.github.com/users/keleog/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22810). All of your documentation changes will be reflected on that endpoint.", "Hi, @ArthurZucker @younesbelkada @sgugger , \r\n\r\nPlease can you help review this PR. Thank you very much!", "Hey! Given how similar this is to the already existing model, I would recommend sharing this on the hub following this [tutorial!](https://huggingface.co/docs/transformers/custom_models) Would that work alright for you? " ]
1,681
1,683
1,683
NONE
null
# What does this PR do? The GPT NeoX library supports training with ALiBi positional embeddings; however, the `GPTNeoXModel` only supports rotary embeddings. This PR creates a new `GPTNeoXALiBi` model that uses ALiBi positional embeddings. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? - text models: @ArthurZucker and @younesbelkada - Documentation: @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
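For background (a sketch of the standard ALiBi recipe, not code from this PR), ALiBi biases attention logits with head-specific linear distance penalties; for a power-of-two head count the slopes form a geometric sequence:

```python
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Standard ALiBi slopes for power-of-two head counts: a geometric
    # sequence starting at 2^(-8/num_heads).
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

# Each head h then adds slopes[h] * -(key distance) to its attention logits.
print(alibi_slopes(8))  # tensor([0.5000, 0.2500, ..., 0.0039])
```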
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22810", "html_url": "https://github.com/huggingface/transformers/pull/22810", "diff_url": "https://github.com/huggingface/transformers/pull/22810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22810.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22809/comments
https://api.github.com/repos/huggingface/transformers/issues/22809/events
https://github.com/huggingface/transformers/pull/22809
1,671,542,575
PR_kwDOCUB6oc5OfyWi
22,809
Support identity normalizer in SentencePiece model
{ "login": "chlorochrule", "id": 22964191, "node_id": "MDQ6VXNlcjIyOTY0MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/22964191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chlorochrule", "html_url": "https://github.com/chlorochrule", "followers_url": "https://api.github.com/users/chlorochrule/followers", "following_url": "https://api.github.com/users/chlorochrule/following{/other_user}", "gists_url": "https://api.github.com/users/chlorochrule/gists{/gist_id}", "starred_url": "https://api.github.com/users/chlorochrule/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chlorochrule/subscriptions", "organizations_url": "https://api.github.com/users/chlorochrule/orgs", "repos_url": "https://api.github.com/users/chlorochrule/repos", "events_url": "https://api.github.com/users/chlorochrule/events{/privacy}", "received_events_url": "https://api.github.com/users/chlorochrule/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22809). All of your documentation changes will be reflected on that endpoint.", "cc @ArthurZucker ", "> cc @Narsil if I am missing something (maybe the normalizers in rust should support identity type)\r\n\r\n`normalizer: None` should do nothing.\r\n\r\nMost likely a case not handled by our current code, we probably need to check that the spec is set to indentity, and not even attempt to create the `precompiled_charsmap` (since it's invalid and we already have a mecanism for identity)", "@ArthurZucker Thank you for reviewing!\r\nI fixed all issues related to empty `precompiled_charsmap` referring to following code.\r\nhttps://github.com/huggingface/transformers/blob/dc67da01829090ec92dfc24653242cf3f56d1a01/src/transformers/convert_slow_tokenizer.py#L625-L628", "The current modification LGTM. I'm not sure why the test fail, maybe rebase ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing in favor of #24618" ]
1,681
1,688
1,688
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> SentencePiece can train a model with a specified normalizer (for example `normalization_rule_name="nfkc"`). https://github.com/google/sentencepiece/blob/master/doc/normalization.md However, no normalization is done with `normalization_rule_name="identity"`, and `proto.normalizer_spec.precompiled_charsmap` in the SentencePiece model is empty. Loading this model with `AlbertTokenizerFast.from_pretrained` raises the following error: ``` >>> tokenizer = AlbertTokenizerFast.from_pretrained('.') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained return cls._from_pretrained( File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1959, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 148, in __init__ super().__init__( File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer return converter_class(transformer_tokenizer).converted() File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 503, in converted tokenizer.normalizer = self.normalizer(self.proto) File "/Users/nminami/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 535, in normalizer list_normalizers.append(normalizers.Precompiled(precompiled_charsmap)) Exception: Error while attempting to build Precompiled normalizer: Cannot parse precompiled_charsmap ``` This error is caused by passing empty bytes to `normalizers.Precompiled`. So, this PR prevents the problem by checking `proto.normalizer_spec.name` before passing empty bytes. ## How to reproduce this problem ``` OS/Arch: macOS/Apple Silicon Python 3.10.4 (main, Jun 26 2022, 22:29:49) [Clang 13.0.0 (clang-1300.0.27.3)] on darwin protobuf==3.19.0 sentencepiece==0.1.97 transformers==4.28.1 ``` Save SentencePiece model using [python/test/botchan.txt](https://github.com/google/sentencepiece/blob/master/python/test/botchan.txt).
```python import sentencepiece as spm spm.SentencePieceTrainer.train(input='python/test/botchan.txt', model_prefix='spiece', vocab_size=1000, normalization_rule_name='identity') ``` Read SentencePiece model using `AlbertTokenizerFast.from_pretrained`. ```python from transformers import AlbertTokenizerFast tokenizer = AlbertTokenizerFast.from_pretrained('.') ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - I think this is a bug fix. So, no documentation updates required. - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
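A minimal, runnable illustration of the guard described above (a sketch using the `tokenizers` library directly; the variable names are illustrative, not the PR's actual diff):

```python
from tokenizers import normalizers

# An "identity" normalizer spec ships an empty precompiled charsmap;
# feeding empty bytes to Precompiled makes the Rust-side parser raise.
precompiled_charsmap = b""

if precompiled_charsmap:
    normalizer = normalizers.Precompiled(precompiled_charsmap)
else:
    normalizer = None  # identity: skip the normalization step entirely

print(normalizer)
```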
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22809", "html_url": "https://github.com/huggingface/transformers/pull/22809", "diff_url": "https://github.com/huggingface/transformers/pull/22809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22809.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22808/comments
https://api.github.com/repos/huggingface/transformers/issues/22808/events
https://github.com/huggingface/transformers/pull/22808
1,671,528,224
PR_kwDOCUB6oc5OfvR-
22,808
Fix squeeze into torch 1.x compatible form in llama model
{ "login": "DyeKuu", "id": 39208702, "node_id": "MDQ6VXNlcjM5MjA4NzAy", "avatar_url": "https://avatars.githubusercontent.com/u/39208702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DyeKuu", "html_url": "https://github.com/DyeKuu", "followers_url": "https://api.github.com/users/DyeKuu/followers", "following_url": "https://api.github.com/users/DyeKuu/following{/other_user}", "gists_url": "https://api.github.com/users/DyeKuu/gists{/gist_id}", "starred_url": "https://api.github.com/users/DyeKuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DyeKuu/subscriptions", "organizations_url": "https://api.github.com/users/DyeKuu/orgs", "repos_url": "https://api.github.com/users/DyeKuu/repos", "events_url": "https://api.github.com/users/DyeKuu/events{/privacy}", "received_events_url": "https://api.github.com/users/DyeKuu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Rewrites the `squeeze` call into a torch 1.x-compatible form: `squeeze` accepting a tuple of dims is a PyTorch 2.0-only feature (https://pytorch.org/docs/stable/generated/torch.squeeze.html), and the tuple form was introduced in https://github.com/huggingface/transformers/pull/22785. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/22807 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
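To make the compatibility constraint concrete, here is a small sketch (illustrative shapes, not the PR's actual diff) of rewriting a tuple-dim squeeze into a form torch 1.x accepts:

```python
import torch

x = torch.randn(1, 1, 16, 64)

# PyTorch 2.0+ only: squeeze can take a tuple of dims.
# y = x.squeeze((0, 1))

# torch 1.x-compatible equivalent: chain single-dim squeezes
# (squeeze dim 1 first so dim 0's index stays valid).
y = x.squeeze(1).squeeze(0)
print(y.shape)  # torch.Size([16, 64])
```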
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22808/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22808", "html_url": "https://github.com/huggingface/transformers/pull/22808", "diff_url": "https://github.com/huggingface/transformers/pull/22808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22808.patch", "merged_at": 1681748929000 }
https://api.github.com/repos/huggingface/transformers/issues/22807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22807/comments
https://api.github.com/repos/huggingface/transformers/issues/22807/events
https://github.com/huggingface/transformers/issues/22807
1,671,493,238
I_kwDOCUB6oc5joPZ2
22,807
New Crash Using Llama
{ "login": "sam-h-bean", "id": 43734688, "node_id": "MDQ6VXNlcjQzNzM0Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/43734688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-h-bean", "html_url": "https://github.com/sam-h-bean", "followers_url": "https://api.github.com/users/sam-h-bean/followers", "following_url": "https://api.github.com/users/sam-h-bean/following{/other_user}", "gists_url": "https://api.github.com/users/sam-h-bean/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-h-bean/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-h-bean/subscriptions", "organizations_url": "https://api.github.com/users/sam-h-bean/orgs", "repos_url": "https://api.github.com/users/sam-h-bean/repos", "events_url": "https://api.github.com/users/sam-h-bean/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-h-bean/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sam-h-bean - Yes, sorry, you're right about the issue and the cause. We're just opening up a PR now to resolve. Thanks for reporting so quickly! ", "> @sam-h-bean - Yes, sorry, you're right about the issue and the cause. We're just opening up a PR now to resolve. Thanks for reporting so quickly!\r\n\r\nMy fault, I didn't see the note in the documentation that tuple inputs to `squeeze` is a new feature of PyTorch 2.0. If you'd like I can open a pull request to fix by replacing with two `squeeze`s.", "No worries @fpgaminer - I should have caught it in the review. @DyeKuu is opening a PR as we type :) " ]
1,681
1,681
1,681
CONTRIBUTOR
null
### System Info Seeing the following crash starting today when loading via accelerate. I think maybe related to https://github.com/huggingface/transformers/pull/22785 CC @fpgaminer @gante @amyeroberts ``` File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 205, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 135, in apply_rotary_pos_emb cos = cos.squeeze((0, 1)) # [seq_len, dim] TypeError: squeeze() received an invalid combination of arguments - got (tuple), but expected one of: * () didn't match because some of the arguments have invalid types: (!tuple of (int, int)!) * (int dim) didn't match because some of the arguments have invalid types: (!tuple of (int, int)!) * (name dim) didn't match because some of the arguments have invalid types: (!tuple of (int, int)!) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Load Llama on GPU with accelerate and try to generate text. ### Expected behavior Text is generated
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22807/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22806/comments
https://api.github.com/repos/huggingface/transformers/issues/22806/events
https://github.com/huggingface/transformers/pull/22806
1,671,362,194
PR_kwDOCUB6oc5OfLYV
22,806
🌐 [i18n-KO] Translated `serialization.mdx` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd", "@sgugger, @ArthurZucker, @eunseojo May you please review this PR?" ]
1,681
1,682
1,682
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹Ή --> # What does this PR do? Translated the `serialization.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제좜 μ „ 체크리슀트둜, κ°€μ§œμ—°κ΅¬μ†Œλ§Œμ˜ μ²΄ν¬λ¦¬μŠ€νŠΈλ„ <details>둜 κ°μ‹Έμ„œ λ§Œλ“€μ–΄λ‘λ©΄ 더 쒋을 것 κ°™μ•„μš”. --> ## Who can review? <!-- 1. λͺ¨λ“  λ²ˆμ—­μ΄ μ™„λ£Œλœ λ’€μ—λ§Œ κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd <!-- 2. κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22806/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22806", "html_url": "https://github.com/huggingface/transformers/pull/22806", "diff_url": "https://github.com/huggingface/transformers/pull/22806.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22806.patch", "merged_at": 1682440731000 }
https://api.github.com/repos/huggingface/transformers/issues/22805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22805/comments
https://api.github.com/repos/huggingface/transformers/issues/22805/events
https://github.com/huggingface/transformers/pull/22805
1,671,327,311
PR_kwDOCUB6oc5OfD7L
22,805
🌐 [i18n-KO] Translated `tasks/translation.mdx` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
<!-- PR의 제λͺ©μ€ "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으둜 λΆ€νƒλ“œλ¦½λ‹ˆλ‹Ή --> # What does this PR do? Translated the `tasks/translation.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 μ΄μŠˆμ— 기둝이 λ‚¨μ•„μš”! κ°€μ§œμ—°κ΅¬μ†Œ 리포λ₯Ό μ‚¬μš©ν•΄ μ—°μŠ΅ν•˜μ‹€λ•ŒλŠ” μ œκ±°ν•΄μ£Όμ‹œλ©΄ κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제좜 μ „ 체크리슀트둜, κ°€μ§œμ—°κ΅¬μ†Œλ§Œμ˜ μ²΄ν¬λ¦¬μŠ€νŠΈλ„ <details>둜 κ°μ‹Έμ„œ λ§Œλ“€μ–΄λ‘λ©΄ 더 쒋을 것 κ°™μ•„μš”. --> ## Who can review? <!-- κ°€μ§œμ—°κ΅¬μ†Œ νŒ€μ›λ“€κ³Ό 리뷰가 λλ‚œ ν›„μ—λ§Œ ν—ˆκΉ…νŽ˜μ΄μŠ€ μ§μ›λ“€μ—κ²Œ 리뷰 μš”μ²­ν•˜λŠ” μ•„λž˜ 주석을 λ…ΈμΆœν•΄μ£Όμ„Έμš”! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22805", "html_url": "https://github.com/huggingface/transformers/pull/22805", "diff_url": "https://github.com/huggingface/transformers/pull/22805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22805.patch", "merged_at": 1681745418000 }
https://api.github.com/repos/huggingface/transformers/issues/22804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22804/comments
https://api.github.com/repos/huggingface/transformers/issues/22804/events
https://github.com/huggingface/transformers/pull/22804
1,670,995,712
PR_kwDOCUB6oc5Od7yi
22,804
Fix sneaky torch dependency in TF example
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
MEMBER
null
Thanks to @muellerzr for uncovering this one - the TF image classification example sneakily depended on `torch` because it used `MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING` (which is a dummy if `torch` is unavailable) and called one of the `TrainingArguments` properties that requires `torch`. Made a quick PR to fix it!
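For illustration (a sketch, not the PR's actual diff), the usual pattern for keeping a TF example torch-free is to gate torch-backed objects behind an availability check:

```python
from transformers.utils import is_torch_available

# Sketch: only import the torch-backed mapping when torch exists;
# in a TF-only environment this branch is skipped entirely.
if is_torch_available():
    from transformers import MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING
    model_types = list(MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING.keys())
else:
    model_types = []
print(len(model_types))
```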
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22804/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22804", "html_url": "https://github.com/huggingface/transformers/pull/22804", "diff_url": "https://github.com/huggingface/transformers/pull/22804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22804.patch", "merged_at": 1681744313000 }
https://api.github.com/repos/huggingface/transformers/issues/22803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22803/comments
https://api.github.com/repos/huggingface/transformers/issues/22803/events
https://github.com/huggingface/transformers/pull/22803
1,670,988,801
PR_kwDOCUB6oc5Od6Sb
22,803
Skip `test_disk_offload` for `WhisperModelTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am not able to use `openai/whisper-base` as I get GPU OOM. When I tried to use `openai/whisper-tiny.en`, I have a hard time to change the parameters to get a working input dict for the model.\r\n\r\nIf I just changed the values in model tester to get a larger model (but random), I get different kinds of error like `IndexError: list index out of range` in `dispatch_model` or `RuntimeError: Tensor on device meta is not on the expected device cuda:0!` in model forward. \r\n\r\nBut even if I revert the change in #22486, the above attempts to use larger (fake) models still have the same issue. I guess we will have to look into this.", "Convert to draft to avoid being merged.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,687
1,687
COLLABORATOR
null
# What does this PR do? Since #22486, `WhisperModelTest.test_disk_offload` has started to fail. I just blindly skip this test and guess it is ok...?
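For reference, skipping a test in a suite like this typically looks like the following sketch (the reason string is illustrative, not from the PR):

```python
import unittest

class WhisperModelTest(unittest.TestCase):
    @unittest.skip(reason="Failing since #22486 changed disk offload; needs investigation")
    def test_disk_offload(self):
        ...
```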
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22803/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22803", "html_url": "https://github.com/huggingface/transformers/pull/22803", "diff_url": "https://github.com/huggingface/transformers/pull/22803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22803.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22802/comments
https://api.github.com/repos/huggingface/transformers/issues/22802/events
https://github.com/huggingface/transformers/issues/22802
1,670,941,372
I_kwDOCUB6oc5jmIq8
22,802
fssdasdf
{ "login": "K84N666LEE", "id": 32899621, "node_id": "MDQ6VXNlcjMyODk5NjIx", "avatar_url": "https://avatars.githubusercontent.com/u/32899621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/K84N666LEE", "html_url": "https://github.com/K84N666LEE", "followers_url": "https://api.github.com/users/K84N666LEE/followers", "following_url": "https://api.github.com/users/K84N666LEE/following{/other_user}", "gists_url": "https://api.github.com/users/K84N666LEE/gists{/gist_id}", "starred_url": "https://api.github.com/users/K84N666LEE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/K84N666LEE/subscriptions", "organizations_url": "https://api.github.com/users/K84N666LEE/orgs", "repos_url": "https://api.github.com/users/K84N666LEE/repos", "events_url": "https://api.github.com/users/K84N666LEE/events{/privacy}", "received_events_url": "https://api.github.com/users/K84N666LEE/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,681
1,681
1,681
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22802/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22801/comments
https://api.github.com/repos/huggingface/transformers/issues/22801/events
https://github.com/huggingface/transformers/issues/22801
1,670,878,779
I_kwDOCUB6oc5jl5Y7
22,801
Del model does not work with device_map!=None
{ "login": "ikergarcia1996", "id": 18737249, "node_id": "MDQ6VXNlcjE4NzM3MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikergarcia1996", "html_url": "https://github.com/ikergarcia1996", "followers_url": "https://api.github.com/users/ikergarcia1996/followers", "following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}", "gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions", "organizations_url": "https://api.github.com/users/ikergarcia1996/orgs", "repos_url": "https://api.github.com/users/ikergarcia1996/repos", "events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}", "received_events_url": "https://api.github.com/users/ikergarcia1996/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm afraid using `gc.collect()` is not a workaround but the only way around this that we know of. If you find a way to have Python directly reclaim the memory without calling it, we're completely game to have it merged. I suspect it all comes down to model being initialized on the meta device where we have to re-set the parameters afterward using [this function](https://github.com/huggingface/accelerate/blob/2106e87d585ae9a245c895c568fffeaa519dfb9a/src/accelerate/utils/modeling.py#L96) but not 100% sure.\r\n\r\nSince there is a way to avoid this by just adding a line to your cleanup code, this is not high priority for us to investigate more.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
CONTRIBUTOR
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-4.18.0-348.7.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.7 - Huggingface_hub version: 0.13.3 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.1.0.dev20230411+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `del model` doesn't free the GPU memory if the model has been loaded with device_map != None. ```Python import torch from transformers import AutoModelForCausalLM, PreTrainedModel import os ``` ### Loading the model with device_map = None ```Python model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map=None, torch_dtype=None, ) model = model.to("cuda") torch.cuda.memory_allocated() ``` 555601920 ```Python del model torch.cuda.empty_cache() torch.cuda.memory_allocated() ``` 0 βœ… ### Loading the model with device_map = Auto ```Python model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map="auto", torch_dtype=None, ) torch.cuda.memory_allocated() ``` 555601920 ```Python del model torch.cuda.empty_cache() torch.cuda.memory_allocated() ``` 555077632 ❌ ### Loading the model with device_map = {'': 0} ```Python device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map=device_map, torch_dtype=None, ) torch.cuda.memory_allocated() ``` 555601920 ```Python del model torch.cuda.empty_cache() torch.cuda.memory_allocated() ``` 555077632 ❌ ### Rewriting models ```Python for x in range(1,5): model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map=None, torch_dtype=None, ) model = model.to("cuda") print(f"Iteration {x}: {torch.cuda.memory_allocated()}") ``` Iteration 1: 555601920 Iteration 2: 555601920 Iteration 3: 555601920 Iteration 4: 555601920 ```Python for x in range(1,5): model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map="auto", torch_dtype=None, ) print(f"Iteration {x}: {torch.cuda.memory_allocated()}") ``` Iteration 1: 554553344 Iteration 2: 1107795968 Iteration 3: 1108058112 Iteration 4: 1109368832 ### Using Garbage Collector This workaround is useful to clean the GPU memory, although it would be more appropriate to fix the delete behavior. But for now, it can be used as a way to work around the memory leaks.
```Python model: PreTrainedModel = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="EleutherAI/gpt-neo-125m", load_in_8bit=False, device_map="auto", torch_dtype=None, ) del model torch.cuda.empty_cache() torch.cuda.memory_allocated() ``` 555077632 ```Python import gc torch.cuda.empty_cache() gc.collect() torch.cuda.empty_cache() ``` ```Python torch.cuda.memory_allocated() ``` 0 ### Expected behavior The model should be deleted when calling `del model`. This bug causes multiple issues. For example: if you want to evaluate multiple model checkpoints, the model is not correctly overwritten/deleted when loading the next one, causing a memory leak that eventually results in an OOM error.
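A small helper wrapping the workaround above (a sketch; `free_cuda_memory` is a name introduced here, not from the thread) makes the cleanup reusable in checkpoint-evaluation loops:

```python
import gc
import torch

def free_cuda_memory() -> None:
    # After the caller has dropped its own reference (e.g. `del model`
    # or `model = None`), force a GC pass and release cached CUDA
    # blocks back to the driver.
    gc.collect()
    torch.cuda.empty_cache()

# Usage sketch:
# del model
# free_cuda_memory()
```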
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22801/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22800/comments
https://api.github.com/repos/huggingface/transformers/issues/22800/events
https://github.com/huggingface/transformers/pull/22800
1,670,855,916
PR_kwDOCUB6oc5OddXF
22,800
Fix `test_word_time_stamp_integration` for `Wav2Vec2ProcessorWithLMTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? Same as in #22474: caused by datasets version 2.10.1 -> 2.11, so just update the expected output values
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22800", "html_url": "https://github.com/huggingface/transformers/pull/22800", "diff_url": "https://github.com/huggingface/transformers/pull/22800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22800.patch", "merged_at": 1681728115000 }
https://api.github.com/repos/huggingface/transformers/issues/22799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22799/comments
https://api.github.com/repos/huggingface/transformers/issues/22799/events
https://github.com/huggingface/transformers/issues/22799
1,670,498,768
I_kwDOCUB6oc5jkcnQ
22,799
Cannot import T5BiLDModel
{ "login": "sufeidechabei", "id": 26901984, "node_id": "MDQ6VXNlcjI2OTAxOTg0", "avatar_url": "https://avatars.githubusercontent.com/u/26901984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sufeidechabei", "html_url": "https://github.com/sufeidechabei", "followers_url": "https://api.github.com/users/sufeidechabei/followers", "following_url": "https://api.github.com/users/sufeidechabei/following{/other_user}", "gists_url": "https://api.github.com/users/sufeidechabei/gists{/gist_id}", "starred_url": "https://api.github.com/users/sufeidechabei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sufeidechabei/subscriptions", "organizations_url": "https://api.github.com/users/sufeidechabei/orgs", "repos_url": "https://api.github.com/users/sufeidechabei/repos", "events_url": "https://api.github.com/users/sufeidechabei/events{/privacy}", "received_events_url": "https://api.github.com/users/sufeidechabei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sufeidechabei, thanks for raising an issue. \r\n\r\nSo that we can best help you, could you follow the issue template and share information about the running environment (run `transformers-cli env` in your terminal and share what's printed out). \r\n\r\nCould you elaborate on what you mean by ` I build the library from this repo`? Is this running from a fork of the repo or from code on the hub? ", "From the code for this hub @amyeroberts \r\n", "Here is the print information:ImportError: cannot import name 'T5BiLDModel' from 'transformers.models.t5.modeling_t5' (/nobackup/haozhang/venv/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py). I use python 3.9.13 and pytorch 2.0", "@sufeidechabei As requested, could you please share the information printed out when you run `transformers-cli env` in your terminal? \r\n\r\nCould you also point to the code on the hub which has the model implementation? ", "- `transformers` version: 4.25.1\r\n- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- Huggingface_hub version: 0.11.1\r\n- PyTorch version (GPU?): 1.13.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n@amyeroberts ", "@sufeidechabei - it's not possible to directly import the model in the following way:\r\n\r\n```\r\nfrom transformers.models.t5.modeling_t5 import T5BiLDModel\r\n```\r\nas `T5BiLDModel` isn't a model in the `modeling_t5` module. \r\n\r\nIt's possible to use checkpoints from models defined on the hub using the `AutoModel` API. See documentation [here](https://huggingface.co/docs/transformers/custom_models#using-a-model-with-custom-code). ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,685
1,685
NONE
null
### System Info

I tried `from transformers.models.t5.modeling_t5 import T5BiLDModel`, but it doesn't work. I built the library from this repo. @ArthurZucker @younesbelkada

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

from transformers.models.t5.modeling_t5 import T5BiLDModel

### Expected behavior

The model can be imported.
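As the maintainers note in the comments, `T5BiLDModel` is not part of `transformers` itself; custom architectures hosted on the Hub are loaded through the Auto classes instead. A hedged sketch — the repository id below is a placeholder, not a real checkpoint:

```Python
from transformers import AutoModelForSeq2SeqLM

# Hypothetical Hub repo that defines T5BiLDModel as custom code.
# trust_remote_code=True lets transformers execute the model code shipped with the repo.
model = AutoModelForSeq2SeqLM.from_pretrained("some-user/t5-bild", trust_remote_code=True)
```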
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22799/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22798/comments
https://api.github.com/repos/huggingface/transformers/issues/22798/events
https://github.com/huggingface/transformers/pull/22798
1,670,362,112
PR_kwDOCUB6oc5ObzW6
22,798
Show diff between 2 CI runs on Slack reports
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do?

It has become more difficult to identify the **new** CI failures in Slack reports, as the number of failures is in the range of `[100, 200]` and the number of rows in the reported table is capped at around `40`. This PR adds a **diff** of the **(model) failure tables** reported by the latest run against those of the previous run.

### The effect

<img width="720" alt="Screenshot 2023-04-17 060007" src="https://user-images.githubusercontent.com/2521628/232374887-66f9259b-9878-4ade-b337-553d3d34dc71.png">
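Conceptually, the diff boils down to a set difference between the failure lists of the two runs. A toy sketch for intuition only — the entries below are made up, and the real reporting logic parses job artifacts rather than hard-coded sets:

```Python
previous_failures = {
    "models/bert: test_modeling_bert.py::test_foo",
    "models/gpt2: test_modeling_gpt2.py::test_bar",
}
current_failures = {
    "models/gpt2: test_modeling_gpt2.py::test_bar",
    "models/t5: test_modeling_t5.py::test_baz",
}

new_failures = sorted(current_failures - previous_failures)    # appeared in the latest run
fixed_failures = sorted(previous_failures - current_failures)  # gone since the last run
print("New:", new_failures)
print("Fixed:", fixed_failures)
```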
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22798/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22798", "html_url": "https://github.com/huggingface/transformers/pull/22798", "diff_url": "https://github.com/huggingface/transformers/pull/22798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22798.patch", "merged_at": 1681925257000 }
https://api.github.com/repos/huggingface/transformers/issues/22797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22797/comments
https://api.github.com/repos/huggingface/transformers/issues/22797/events
https://github.com/huggingface/transformers/pull/22797
1,670,161,892
PR_kwDOCUB6oc5ObJw6
22,797
Add RWKV-4
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "IMO the model is in a nice shape! Would love to have a round of review before I transfer the weights on the proper organization!", "@younesbelkada In README.md\r\n\r\nThe name should be \"Bo Peng\" (Peng is the surname) instead of \"Peng Bo\" :)", "hi @sgugger, thanks A TON for this merge! I am trying to train a new model of type and facing the following error: \r\n```\r\nTraceback (most recent call last):\r\n File \"train.py\", line 229, in <module>\r\n main(model_args, data_args, training_args)\r\n File \"train.py\", line 193, in main\r\n trainer.train()\r\n File \"transformers/src/transformers/trainer.py\", line 1664, in train\r\n return inner_training_loop(\r\n File \"transformers/src/transformers/trainer.py\", line 1940, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"transformers/src/transformers/trainer.py\", line 2753, in training_step\r\n loss.backward()\r\n File \".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/_tensor.py\", line 487, in backward\r\n torch.autograd.backward(\r\n File \".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/autograd/__init__.py\", line 200, in backward\r\n Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n File \".conda/envs/rwkv-eval-3.9/lib/python3.9/site-packages/torch/autograd/function.py\", line 274, in apply\r\n return user_fn(self, *args)\r\nTypeError: backward() takes 2 positional arguments but 3 were given\r\n```\r\n\r\nFrom what I can see, the backward function of RwkvLinearAttentionBackward does not mention a g_state - should gradients be computed for the state, I guess not? Any pointers as to how I can resolve this will be very much appreciated!", "I managed to get the code to run with some changes to the forward() and backward() functions:\r\n\r\n```python\r\nclass RwkvLinearAttention(torch.autograd.Function):\r\n @staticmethod\r\n def forward(ctx, time_decay, time_first, key, value, state=None, return_state=False):\r\n\r\n batch_size, seq_len, hidden_size = key.size()\r\n if seq_len > rwkv_cuda_kernel.max_seq_length:\r\n raise ValueError(\r\n f\"Cannot process a batch with {seq_len} tokens at the same time, use a maximum of \"\r\n f\"{rwkv_cuda_kernel.max_seq_length} with this model.\"\r\n )\r\n if batch_size * hidden_size % min(hidden_size, 32) != 0:\r\n raise ValueError(\r\n f\"The product of batch size ({batch_size}) and hidden size ({hidden_size}) needs to be a round \"\r\n f\"multiple of {min(hidden_size, 32)}.\"\r\n )\r\n\r\n ctx.input_dtype = key.dtype\r\n\r\n if (\r\n time_decay.device.type != \"cuda\"\r\n or time_first.device.type != \"cuda\"\r\n or key.device.type != \"cuda\"\r\n or value.device.type != \"cuda\"\r\n ):\r\n raise ValueError(\"Calling the CUDA kernel for wkv attention requires all tensors to be on CUDA devices.\")\r\n\r\n time_decay = -torch.exp(time_decay.float().contiguous())\r\n if key.dtype == torch.float16:\r\n time_first = time_first.float()\r\n key = key.float()\r\n value = value.float()\r\n time_first = time_first.contiguous()\r\n key = key.contiguous()\r\n value = value.contiguous()\r\n # The CUDA kernel will fill this tensor.\r\n output = torch.empty_like(key, memory_format=torch.contiguous_format)\r\n if return_state or state is not None:\r\n if state is None:\r\n state = torch.zeros(\r\n batch_size,\r\n hidden_size,\r\n 3,\r\n dtype=torch.float32,\r\n device=key.device,\r\n 
memory_format=torch.contiguous_format,\r\n )\r\n state[:, :, 2] -= 1e38\r\n else:\r\n state = torch.cat([s.unsqueeze(2) for s in state], dim=2).contiguous()\r\n\r\n if key.dtype == torch.bfloat16:\r\n forward_func = rwkv_cuda_kernel.forward_with_state_bf16\r\n else:\r\n forward_func = rwkv_cuda_kernel.forward_with_state\r\n forward_func(time_decay, time_first.to(key.dtype), key, value, output, state)\r\n else:\r\n forward_func = rwkv_cuda_kernel.forward_bf16 if key.dtype == torch.bfloat16 else rwkv_cuda_kernel.forward\r\n forward_func(time_decay, time_first.to(key.dtype), key, value, output)\r\n ctx.save_for_backward(time_decay, time_first, key, value, output)\r\n\r\n if state is not None:\r\n state = [s.squeeze(2) for s in torch.chunk(state, 3, dim=2)]\r\n\r\n return output.to(ctx.input_dtype), state\r\n```\r\n\r\n```python\r\n def backward(ctx, g_output, g_state):\r\n input_dtype = ctx.input_dtype\r\n\r\n time_decay, time_first, key, value, output = ctx.saved_tensors\r\n # The CUDA kernel will fill those tensors.\r\n g_time_decay = torch.empty_like(\r\n time_decay,\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n )\r\n g_time_first = torch.empty_like(\r\n time_first,\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n )\r\n g_key = torch.empty_like(key, memory_format=torch.contiguous_format)\r\n g_value = torch.empty_like(value, memory_format=torch.contiguous_format)\r\n\r\n if input_dtype == torch.float16:\r\n g_output = g_output.float()\r\n backward_func = rwkv_cuda_kernel.backward_bf16 if input_dtype == torch.bfloat16 else rwkv_cuda_kernel.backward\r\n backward_func(\r\n time_decay,\r\n time_first.to(key.dtype),\r\n key,\r\n value,\r\n output,\r\n g_output.contiguous(),\r\n g_time_decay,\r\n g_time_first,\r\n g_key,\r\n g_value,\r\n )\r\n #g_time_decay = torch.sum(g_time_decay, dim=0)\r\n #g_time_first = torch.sum(g_time_first, dim=0)\r\n\r\n return (\r\n g_time_decay.to(input_dtype),\r\n g_time_first.to(input_dtype),\r\n g_key.to(input_dtype),\r\n g_value.to(input_dtype),\r\n None,\r\n None\r\n )\r\n```\r\n\r\nOne problem I run into now is that although I'm trying to train a fairly small model (12 layers, 256 hidden size, 64 context size) I can only train with a very small batch size (16) on a 40GB A100 card. For comparison, a RoBERTa model with a similar size allows for a bs of 256. This seems counterintuitive to me, but I might be wrong.\r\n\r\nAnother issue I observed is instability: in some cases, within the first 3 steps of training the loss goes from something normal like 10 to 90543067814198.3 and then to 0.0. 
This seems to happen more when bf16 training is disabled and at higher batch sizes when bf16 training is enabled.\r\n", "@YovaKem Would you mind trying to change this\r\n\r\n```python\r\n# The CUDA kernel will fill those tensors.\r\ng_time_decay = torch.empty_like(\r\n time_decay,\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n)\r\ng_time_first = torch.empty_like(time_first, memory_format=torch.contiguous_format)\r\n```\r\n\r\nto\r\n\r\n```python\r\n# The CUDA kernel will fill those tensors.\r\ng_time_decay = torch.empty(\r\n key.shape[0], key.shape[2],\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n)\r\ng_time_first = torch.empty(k.shape[0], k.shape[2], memory_format=torch.contiguous_format)\r\n```\r\n\r\nI suspect there's an overflow in the current code, as mentioned above in the review comment but not tested yet. The binary distribution on PyPI does not include the cuda kernels XD\r\n\r\nAlso, the gradient of the state should be computed, but the current kernel is not doing it. Later, after I set up the env, I'll open the PR.", "Thanks @Blealtan! I guess you meant `k` for `key`? I added bf16 support for `g_time_first` (I get an error otherwise) and put the tensors on CUDA\r\n\r\n```python\r\n # The CUDA kernel will fill those tensors.\r\n g_time_decay = torch.empty(\r\n key.shape[0], key.shape[2],\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n ).to(key.device)\r\n g_time_first = torch.empty(\r\n key.shape[0], key.shape[2],\r\n memory_format=torch.contiguous_format,\r\n dtype=torch.bfloat16 if input_dtype == torch.bfloat16 else torch.float32,\r\n ).to(key.device)\r\n```\r\n\r\nThis seems to solve both the OOM issue and the instability!\r\n\r\nOne question re your comment on state gradients - I now saw this\r\n\r\n> It will also match the _with_state variant of WKV forward.\r\n\r\nIn what cases is the _with_state variant used? As far as I can see the model I'm training is not passing states at all during the forward step. Is that something that only becomes relevant at inference time when the model is used like an RNN?\r\n", "Hey @sgugger how did you prepare the models? Could you point us to how to convert the original .pth or .safetensors model to your format? Thanks!\r\n\r\nPS\r\nAwesome RWKV joined transformers!", "@lambdaofgod The logic used to convert the RWKV checkpoints from BlinkDL to HF format can be found in the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py).", "@YovaKem AFAIK, `with_state` is used only in inference now (in existing non-`transformers` implementations throughout the RWKV community). However, with proper implementation, this will allow more efficient training on long sequences, but it has not yet been implemented.", "I have no idea why the CUDA kernels all disappeared from the package on PyPI (it's not just RWKV, but all models using custom kernels). Will investigate later today and post a patch release when I find a solution.", "Normally custom kernels should be included in 4.29.2, sorry for the inconvenience. We added stronger checks to make sure they don't disappear again in a future release.", "Hi, can I ask a simple question about RWKV kernel? 
The RWKV model without the customized kernel uses a `for loop` here:\r\nhttps://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/src/transformers/models/rwkv/modeling_rwkv.py#L223-L241\r\n\r\nI am not familiar with CUDA kernels, so I am not sure whether the customized CUDA kernel still computes sequentially and just delivers a faster `for loop`, or whether it makes the computation parallel on the GPU?", "Putting this here so it doesn't get lost. \r\n\r\nI am trying to run microsoft guidance (https://github.com/microsoft/guidance) on RWKV through transformers and I am getting an error\r\n\r\n`AttributeError: 'RwkvCausalLMOutput' object has no attribute 'past_key_values'`\r\n\r\nwhich can be reproduced here: https://gist.github.com/fullstackwebdev/a6523374e6687825fcb92ca74048c12b", "@fullstackwebdev \r\nI don't think the fix should go inside `transformers` as this means we should always output `past_key_values=None` - which is quite misleading as by design RWKV does not rely on `past_key_values` for caching - as the tokens are processed one by one. I made https://github.com/microsoft/guidance/pull/91 that fixed the issue in my local env " ]
1,681
1,684
1,683
COLLABORATOR
null
# What does this PR do?

This PR is a draft: while there is a working implementation of the model, there is still a lot to do :-)

This PR adds the RWKV model from [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM), which is an RNN-like Transformer: it has an attention layer and a feed-forward layer, but the attention is linear and can be expressed recurrently (more details coming in the doc page of the model).

Here is a code snippet to play with the model:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("sgugger/rwkv-7b-pile", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-7b-pile")

prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."

inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=400, top_p=0.8, do_sample=True)
print(tokenizer.decode(output[0].tolist()))
```

To use the chat models (called Raven):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "ybelkada/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id).to(0)
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Tell me about ravens"
prompt = f"### Instruction: {question}\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=100)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

Fixes #20737
Fixes #17230

TODO:
- [x] Write documentation of the model explaining the linear attention and the recurrent formulas in the code
- [x] Make the model compatible with generate
- [x] Add output_attentions/output_hidden_states API
- [ ] Convert more models and check the conversion script is compatible
- [x] Tweak CUDA kernels for state to use the state for init
- [x] Make tests that pass
- [ ] Add attention mask to be able to batch sentences (might be in a follow-up PR)

cc @ArthurZucker and @younesbelkada
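As a rough illustration of the recurrent formulation mentioned above, here is a naive single-sequence sketch of the WKV recurrence. It is for intuition only: it drops the batch dimension and the log-space numerical stabilization the actual CPU/CUDA kernels perform, so it can overflow for realistic key magnitudes:

```python
import torch

def naive_wkv(time_decay, time_first, key, value):
    # time_decay, time_first: (hidden,); key, value: (seq_len, hidden)
    decay = torch.exp(-torch.exp(time_decay))  # per-channel decay factor in (0, 1)
    num = torch.zeros_like(time_decay)         # running exp-weighted sum of values
    den = torch.zeros_like(time_decay)         # running sum of the exp weights
    output = torch.zeros_like(value)
    for t in range(key.size(0)):
        boost = torch.exp(time_first + key[t])  # extra weight given to the current token
        output[t] = (num + boost * value[t]) / (den + boost)
        weight = torch.exp(key[t])
        num = decay * num + weight * value[t]   # state update: decay old state, add new token
        den = decay * den + weight
    return output
```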
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22797/reactions", "total_count": 41, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 21, "rocket": 16, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22797/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22797", "html_url": "https://github.com/huggingface/transformers/pull/22797", "diff_url": "https://github.com/huggingface/transformers/pull/22797.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22797.patch", "merged_at": 1683651851000 }
https://api.github.com/repos/huggingface/transformers/issues/22796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22796/comments
https://api.github.com/repos/huggingface/transformers/issues/22796/events
https://github.com/huggingface/transformers/pull/22796
1,670,040,704
PR_kwDOCUB6oc5OaxPc
22,796
🌐 [i18n-KO] Fix anchor links for docs `auto_tutorial`, `training`
{ "login": "gabrielwithappy", "id": 102908949, "node_id": "U_kgDOBiJEFQ", "avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabrielwithappy", "html_url": "https://github.com/gabrielwithappy", "followers_url": "https://api.github.com/users/gabrielwithappy/followers", "following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}", "gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions", "organizations_url": "https://api.github.com/users/gabrielwithappy/orgs", "repos_url": "https://api.github.com/users/gabrielwithappy/repos", "events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}", "received_events_url": "https://api.github.com/users/gabrielwithappy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Team PseudoLab, may you please review this PR?\r\n@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd\r\n\r\nI fixed anchor links for documents I translated\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger, @ArthurZucker, @eunseojo \r\nMay you please review this PR?" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do?

Fixed anchor links for the `auto_tutorial` and `training` docs. Thank you in advance for your review.

Part of https://github.com/huggingface/transformers/issues/20179

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd

@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22796/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22796", "html_url": "https://github.com/huggingface/transformers/pull/22796", "diff_url": "https://github.com/huggingface/transformers/pull/22796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22796.patch", "merged_at": 1681823490000 }
https://api.github.com/repos/huggingface/transformers/issues/22795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22795/comments
https://api.github.com/repos/huggingface/transformers/issues/22795/events
https://github.com/huggingface/transformers/pull/22795
1,670,023,117
PR_kwDOCUB6oc5Oat5j
22,795
add open-llama model with ckpt
{ "login": "s-JoL", "id": 16948304, "node_id": "MDQ6VXNlcjE2OTQ4MzA0", "avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-JoL", "html_url": "https://github.com/s-JoL", "followers_url": "https://api.github.com/users/s-JoL/followers", "following_url": "https://api.github.com/users/s-JoL/following{/other_user}", "gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions", "organizations_url": "https://api.github.com/users/s-JoL/orgs", "repos_url": "https://api.github.com/users/s-JoL/repos", "events_url": "https://api.github.com/users/s-JoL/events{/privacy}", "received_events_url": "https://api.github.com/users/s-JoL/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @ArthurZucker and @younesbelkada ", "Please help me review this pull request. @ArthurZucker @younesbelkada ", "Hey! Thanks will review now", "Thanks a lot for your contribution!", "> Thanks a lot for your contribution!\r\n\r\nHello, I have a question, why the open-Llama model cannot be searched in the documentation of transformers? Is there something I forgot to add?\r\n\r\n![image](https://github.com/huggingface/transformers/assets/16948304/0459b4d1-9c5d-4969-bb05-a5266db05589)\r\n", "Hi @s-JoL, thanks for notifying. \r\n\r\nThere was an issue in the doc rendering (resolved with [1](https://github.com/huggingface/huggingface-meilisearch/pull/60/files), [2](https://github.com/huggingface/huggingface-meilisearch/pull/61)) leading to some pages not being retrievable in search. Should be working now! ", "@s-JoL I noticed that the links pertaining to Open-LLaMA are currently leading to 404 errors. Could you please provide some information on what might have happened?", "@s-JoL Hi, I can't find a Open-LLaMA checkpoint and I noticed you delete your original repo. What happend? How Can I have a try of Open-LLaMA?", "@heya5 Possibly due to some controversies surrounding this project, the original author has closed the original project.\r\nhttps://github.com/chenfeng357/open-Chinese-ChatLLaMA/issues/1" ]
1,681
1,686
1,682
CONTRIBUTOR
null
This PR adds a new model called Open-Llama, which is based on Llama's implementation in Transformers. In Open-Llama, memory-efficient attention has been added, resulting in a 30% improvement in training efficiency. Additionally, hidden dropout and attention dropout have been added for better generalization during training.

We have also added two optional features: stable embedding from Bloom and shared input-output vectors from PaLM, which have been tested and found to improve training stability and performance.

The following code snippet shows the implementation of memory-efficient attention:

```python
try:
    from xformers import ops as xops
except ImportError:
    xops = None
    print("xformers is not installed correctly.")

if self.config.use_memorry_efficient_attention and xops is not None and self.training:
    attn_weights = None
    query_states = query_states.transpose(1, 2)
    key_states = key_states.transpose(1, 2)
    value_states = value_states.transpose(1, 2)
    attn_output = xops.memory_efficient_attention(
        query_states, key_states, value_states, attn_bias=xops.LowerTriangularMask(), p=self.dropout_prob
    )
```

At the same time, for maximum compatibility, we have made xformers an optional dependency, so the original implementation can still be used for training and inference if it is not installed.

We implemented pre-training of the Llama model based on transformers + accelerate, incorporating the modifications described above: [Open-Llama](https://github.com/Bayes-Song/Open-Llama/blob/main/README_en.md). The pre-trained model has already been open-sourced on [s-JoL/Open-Llama-V1](https://huggingface.co/s-JoL/Open-Llama-V1).

ref: https://github.com/huggingface/transformers/pull/22386
cc: @sgugger
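For reference, a minimal sketch of what the fallback path looks like when xformers is unavailable — plain masked softmax attention with dropout. The function below is illustrative only (the name and the mask construction are assumptions, not the exact code in this PR); tensor shapes mirror the snippet above:

```python
import math
import torch

def fallback_attention(query_states, key_states, value_states, dropout_prob, training):
    # query/key/value: (batch, num_heads, seq_len, head_dim)
    head_dim = query_states.size(-1)
    scores = query_states @ key_states.transpose(-2, -1) / math.sqrt(head_dim)
    seq_len = query_states.size(-2)
    # Causal mask: -inf strictly above the diagonal forbids attending to future tokens.
    causal = torch.full((seq_len, seq_len), float("-inf"), device=scores.device).triu(1)
    attn_weights = torch.softmax(scores + causal, dim=-1)
    attn_weights = torch.nn.functional.dropout(attn_weights, p=dropout_prob, training=training)
    return attn_weights @ value_states
```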
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22795/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22795", "html_url": "https://github.com/huggingface/transformers/pull/22795", "diff_url": "https://github.com/huggingface/transformers/pull/22795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22795.patch", "merged_at": 1682694093000 }