url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k, nullable) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k, nullable) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/25120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25120/events
|
https://github.com/huggingface/transformers/pull/25120
| 1,822,647,564 |
PR_kwDOCUB6oc5WdHwd
| 25,120 |
Fix `.push_to_hub` and cleanup `get_full_repo_name` usage
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review @ydshieh. I added some comments and fix a bug where token was not used (well done finding this one! :wink:). So I think we're good to go now. I'll merge the PR once CI is green :) ",
"@Wauplin Thanks for the update. Good to merge (you can re-run the failed tests and it should be fine).\r\n\r\nRegarding the comment, thanks a lot. (What I originally mean is that adding some comments on the PR pages, but it could be on the code too.)\r\n\r\n",
"~@ydshieh I don't think I have the permissions to rerun a failed test. Could you trigger it for me please :pray:~\r\n\r\n**EDIT:** I was logged out :smile: I just triggered it.\r\n"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
A few (vaguely) related changes in this PR. The main goal was to fix a bug when `.push_to_hub` is used with a repo_id in the form `"organization/repo_name"` that is also a local working directory. The organization gets removed which pushes the model to `"username/repo_name"` instead of under the organization. This bug has been reported [on slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1690366443112179) (private link) by @NathanHB. In addition to this fix, I also made some changes to get rid of `get_full_repo_name` in most cases.
**List of changes:**
- fix `src/transformers/utils/hub.py` to work with organization (bug above)
- added some ValueError when using deprecated args in addition to existing args
- get rid of `get_full_repo_name` in training scripts (no need for it when using `create_repo`), which saves 1 whoami call
- removed `get_full_repo_name` from keras_callback.py, modeling_tf_utils.py and trainer.py
- import `get_full_repo_name` from `huggingface_hub` instead of re-defining it.
I expect nothing to be broken by those changes.
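As a rough illustration of the pattern this PR moves towards (not the PR's actual diff), the snippet below resolves the full repo id with `huggingface_hub.create_repo` instead of `get_full_repo_name`; the repo id `"my-org/my-model"` is a placeholder, and the snippet assumes a recent `huggingface_hub` where `create_repo` returns a `RepoUrl` with a `repo_id` attribute.
```python
# Minimal sketch of the pattern described above, not the PR's diff.
from huggingface_hub import create_repo

# create_repo already returns the fully qualified "org/name" repo id,
# so a separate get_full_repo_name / whoami call is not needed.
repo_id = create_repo("my-org/my-model", exist_ok=True, private=True).repo_id

# Pushing with an "organization/repo_name" id should keep the organization,
# even if a local directory with the same name exists (the bug being fixed):
# model.push_to_hub(repo_id)
```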
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25120/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25120",
"html_url": "https://github.com/huggingface/transformers/pull/25120",
"diff_url": "https://github.com/huggingface/transformers/pull/25120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25120.patch",
"merged_at": 1690537209000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25119/events
|
https://github.com/huggingface/transformers/issues/25119
| 1,822,594,048 |
I_kwDOCUB6oc5sopQA
| 25,119 |
Support End LR for Cosine LR Scheduler
|
{
"login": "yqy2001",
"id": 55196500,
"node_id": "MDQ6VXNlcjU1MTk2NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/55196500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yqy2001",
"html_url": "https://github.com/yqy2001",
"followers_url": "https://api.github.com/users/yqy2001/followers",
"following_url": "https://api.github.com/users/yqy2001/following{/other_user}",
"gists_url": "https://api.github.com/users/yqy2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yqy2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yqy2001/subscriptions",
"organizations_url": "https://api.github.com/users/yqy2001/orgs",
"repos_url": "https://api.github.com/users/yqy2001/repos",
"events_url": "https://api.github.com/users/yqy2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/yqy2001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can pass along your custom scheduler to the `Trainer` :-)",
"OK, Thank you."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
### Feature request
Customize the end learning rate of a cosine LR scheduler, as a non-zero end lr is commonly used now (e.g., LLaMA-2 uses 10% of the peak lr as the end lr).

### Motivation
A non-zero end lr is commonly used now (e.g., LLaMA-2 uses 10% of the peak lr as the end lr).
### Your contribution
Sorry, I don't think I can be of much help.
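A hedged sketch of the workaround suggested in the comments (passing a custom scheduler to the `Trainer`); the peak learning rate, step count, and stand-in parameters below are placeholders, not values from this issue.
```python
# Sketch of the workaround from the comments: build a cosine schedule with a
# non-zero floor in plain PyTorch and hand it to the Trainer.
import torch

peak_lr = 3e-4
params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in parameters
optimizer = torch.optim.AdamW(params, lr=peak_lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1_000, eta_min=0.1 * peak_lr  # end lr = 10% of peak, as for LLaMA-2
)
# trainer = Trainer(model=model, args=args, train_dataset=ds,
#                   optimizers=(optimizer, scheduler))
```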
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25119/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25118/events
|
https://github.com/huggingface/transformers/pull/25118
| 1,822,591,665 |
PR_kwDOCUB6oc5Wc7g8
| 25,118 |
Fix ViT docstring regarding default dropout values.
|
{
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Related to https://github.com/huggingface/transformers/issues/25108 @ydshieh ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25118). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25118/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25118",
"html_url": "https://github.com/huggingface/transformers/pull/25118",
"diff_url": "https://github.com/huggingface/transformers/pull/25118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25118.patch",
"merged_at": 1690384138000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25117/events
|
https://github.com/huggingface/transformers/issues/25117
| 1,822,577,619 |
I_kwDOCUB6oc5solPT
| 25,117 |
EsmForMaskedLM no performance gain from batch processing
|
{
"login": "M-J-Murray",
"id": 13166228,
"node_id": "MDQ6VXNlcjEzMTY2MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13166228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-J-Murray",
"html_url": "https://github.com/M-J-Murray",
"followers_url": "https://api.github.com/users/M-J-Murray/followers",
"following_url": "https://api.github.com/users/M-J-Murray/following{/other_user}",
"gists_url": "https://api.github.com/users/M-J-Murray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-J-Murray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-J-Murray/subscriptions",
"organizations_url": "https://api.github.com/users/M-J-Murray/orgs",
"repos_url": "https://api.github.com/users/M-J-Murray/repos",
"events_url": "https://api.github.com/users/M-J-Murray/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-J-Murray/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"When I run a profiler it seems there is a bottleneck in the `apply_rotary_pos_emb` function called from the ESM sequence embedding. It could be that this is such a large bottle neck, that changing the batch size almost has no effect.",
"cc @Rocketknight1 ",
"Hi\r\n\r\nIt would be good to provide a full code snippet. Currently, it is not super clear that if you are also including the time spent on the tokenization (I guess that's not the case however). And in any case, with a code snippet, it's easier for us to help.\r\n\r\nThank you in advance.",
"The following code was run on a `Tesla V100-SXM2-16GB`.\r\nWith a batch size of 10, it executes in 14.96 seconds.\r\nWith a batch size of 50, it executes in 13.6 seconds.\r\nI would expect a much larger change in execution time between the two batch sizes. I would have thought a batch size of 50 would execute five times faster than a batch size of 10.\r\n\r\n```python\r\nimport numpy as np\r\nimport torch\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import EsmForMaskedLM, EsmTokenizer\r\nimport time\r\n\r\ndevice = torch.device(\"cuda\")\r\ntokenizer = EsmTokenizer.from_pretrained(\"facebook/esm2_t33_650M_UR50D\")\r\nmodel = EsmForMaskedLM.from_pretrained(\"facebook/esm2_t33_650M_UR50D\")\r\nmodel.eval()\r\nmodel = model.to(device)\r\n\r\nbatch_size = 10\r\nsamples = 500\r\nsequence_length = 250\r\ntokens = list(\"ARNDCQEGHILKMFPSTWYV\")\r\nsequences = [\"\".join(np.random.choice(tokens, sequence_length)) for _ in range(samples)]\r\n\r\nt0 = time.time()\r\nwith torch.no_grad():\r\n for batch_seqs in DataLoader(sequences, batch_size=batch_size):\r\n inputs = tokenizer(batch_seqs, return_tensors=\"pt\")\r\n inputs = inputs.to(device)\r\n model.base_model(**inputs)\r\nprint(f\"Execution time: {time.time() - t0}\")\r\n```",
"Thanks for the code snippet ๐ค ",
"Hi @M-J-Murray, I think there are a few confounding things here. Firstly, the ESM tokenizer is relatively unoptimized. This means tokenization takes longer than it does for other models. If performance is critical, I would strongly recommend tokenizing your sequences once, then saving the tokenized outputs, rather than tokenizing them on-the-fly in each loop. This applies to all models, but especially to ESM!\r\n\r\nSecondly, performance does not scale linearly with batch size. The same amount of computation has to be done for 100 batches of 2, or 2 batches of 100. The main reason to use larger batch sizes is that larger batches generally allow GPUs to do more work in parallel, which is helpful when the model is small, as small batch sizes in small models generally cannot use all the power of a high-end GPU at once. There is also the further benefit during training that fewer optimizer update steps are needed, but this does not apply when you're doing inference.\r\n\r\nIn this case, though, the model has 650M parameters, which is reasonably large. I would guess that even smaller batch sizes are enough to saturate a V100 GPU for a model of this size, so the performance benefit of larger batches would not be that significant. I think this, combined with the additional constant time added to your measurements from running the tokenizer in the loop, is enough to explain the lack of benefit, and the model is actually working as expected!",
"@Rocketknight1 Thank you, I've just validated and it does seem the tokenizer is the main bottle neck here. I will write my own tokenizer for now."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
Transformers version: 4.30.2
Python version: 3.9.16
This occurs on both:
MacBook Pro M2: MacOS 13.2.1 (22D68), ran using mps
AND
Debian 4.19.171-2 x86_64 GNU/Linux, ran using gpu
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm using the EsmForMaskedLM model with `facebook/esm2_t33_650M_UR50D` along with the EsmTokenizer.
If I run inference on 200 sequences, it takes the same amount of time to run 10 forward passes with a batch size of 20, vs 100 forward passes on a batch size of 2. This seems to indicate the model doesn't support batch processing under the hood? It seems strange that the interface would imply that it supports batch processing, without actually supporting it properly.
```python
inputs = self.tokenizer(sequences, return_tensors="pt")
inputs = inputs.to(self.device)
self.model.base_model(**inputs)
```
### Expected behavior
I would expect running 10 forward passes to be much faster than 100 forward passes.
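A hedged sketch of the suggestion from the comments, tokenizing all sequences once up front instead of inside the loop; the dummy sequences and batch size are placeholders, not taken from the issue.
```python
# Sketch of the fix suggested in the comments: tokenize once, then batch the tensors.
import torch
from transformers import EsmForMaskedLM, EsmTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = EsmTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t33_650M_UR50D").eval().to(device)

sequences = ["ARNDCQEGHILKMFPSTWYV" * 12] * 100          # dummy protein sequences
inputs = tokenizer(sequences, return_tensors="pt", padding=True)  # tokenize once, up front

batch_size = 50
with torch.no_grad():
    for i in range(0, inputs["input_ids"].shape[0], batch_size):
        batch = {k: v[i : i + batch_size].to(device) for k, v in inputs.items()}
        model.base_model(**batch)
```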
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25117/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25116/events
|
https://github.com/huggingface/transformers/pull/25116
| 1,822,526,923 |
PR_kwDOCUB6oc5WcteT
| 25,116 |
[`MptConfig`] support from pretrained args
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger "
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Adds a setter for `attn_config` to allow passing a dict when initializing, for backward compatibility.
Fixes https://github.com/huggingface/transformers/issues/25114
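A minimal sketch of the backward-compatibility idea (not the PR's actual code): a property setter that normalizes a plain dict into an `MptAttentionConfig`. The class name below is hypothetical.
```python
# Hypothetical illustration of the setter idea, not the PR's actual code.
from transformers.models.mpt.configuration_mpt import MptAttentionConfig

class ConfigWithAttnSetter:
    @property
    def attn_config(self):
        return self._attn_config

    @attn_config.setter
    def attn_config(self, value):
        # Accept either a dict (old behavior) or an MptAttentionConfig instance.
        self._attn_config = MptAttentionConfig(**value) if isinstance(value, dict) else value
```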
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25116/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25116",
"html_url": "https://github.com/huggingface/transformers/pull/25116",
"diff_url": "https://github.com/huggingface/transformers/pull/25116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25116.patch",
"merged_at": 1690467892000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25115/events
|
https://github.com/huggingface/transformers/pull/25115
| 1,822,426,288 |
PR_kwDOCUB6oc5WcXoX
| 25,115 |
Fix beam search to sample at least 1 non eos token (#25103)
|
{
"login": "yonigottesman",
"id": 4004127,
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigottesman",
"html_url": "https://github.com/yonigottesman",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"done",
"ok now done :) \r\nsorry for all the force pushes I just wasn't sure how you guys merge so I preferred to keep 1 clean commit",
"> sorry for all the force pushes I just wasn't sure how you guys merge so I preferred to keep 1 clean commit\r\n\r\n@yonigottesman no worries, we squash before merging ;)",
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25115). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25115/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25115",
"html_url": "https://github.com/huggingface/transformers/pull/25115",
"diff_url": "https://github.com/huggingface/transformers/pull/25115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25115.patch",
"merged_at": 1690564824000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25114/events
|
https://github.com/huggingface/transformers/issues/25114
| 1,822,380,204 |
I_kwDOCUB6oc5sn1Cs
| 25,114 |
MptForCausalLM.from_pretrained gives error 'dict' object has no attribute 'softmax_scale'
|
{
"login": "abacaj",
"id": 7272343,
"node_id": "MDQ6VXNlcjcyNzIzNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abacaj",
"html_url": "https://github.com/abacaj",
"followers_url": "https://api.github.com/users/abacaj/followers",
"following_url": "https://api.github.com/users/abacaj/following{/other_user}",
"gists_url": "https://api.github.com/users/abacaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abacaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abacaj/subscriptions",
"organizations_url": "https://api.github.com/users/abacaj/orgs",
"repos_url": "https://api.github.com/users/abacaj/repos",
"events_url": "https://api.github.com/users/abacaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/abacaj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada and @ArthurZucker ",
"Hey! Thanks for reporting. Indeed this does not work as the `attn_config` object does not seem to get converted to a `MptAttentionConfig` object. The `MptAttention` still needs the full config. \r\nWill open a PR to fix this as I guess this was previously working! ",
"It is not something that we usually support. For composition models like CLIP, CLAP etc, this would not work:\r\n```python \r\nfrom transformers import CLIPModel\r\nCLIPModel.from_pretrained(\"openai/clip-vit-base-patch16\", text_config = dict(num_hidden_layers = 2))\r\n....\r\nโ /home/arthur_huggingface_co/transformers/src/transformers/models/clip/configuration_clip.py:411 โ\r\nโ in to_dict โ\r\nโ โ\r\nโ 408 โ โ โ `Dict[str, any]`: Dictionary of all the attributes that make up this configu โ\r\nโ 409 โ โ \"\"\" โ\r\nโ 410 โ โ output = copy.deepcopy(self.__dict__) โ\r\nโ โฑ 411 โ โ output[\"text_config\"] = self.text_config.to_dict() โ\r\nโ 412 โ โ output[\"vision_config\"] = self.vision_config.to_dict() โ\r\nโ 413 โ โ output[\"model_type\"] = self.__class__.model_type โ\r\nโ 414 โ โ return output โ\r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ\r\nAttributeError: 'dict' object has no attribute 'to_dict'\r\n```\r\nWe allow:\r\n```python \r\nfrom transformers import CLIPModel, CLIPTextConfig\r\ntext_config = CLIPTextConfig(num_hidden_layers = 2)\r\nCLIPModel.from_pretrained(\"openai/clip-vit-base-patch16\", text_config = text_config)\r\n```\r\n\r\nHowever for backward compatibility, #25116 will fix this for MPT"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
Creating model MPT:
```
model = MptForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
torch_dtype=torch.bfloat16,
use_cache=False,
init_device=f"cuda:{local_rank}",
attn_config=dict(attn_impl="flash", softmax_scale=None), # triton, flash
)
```
Gives the following error:
```
File "/home/anton/personal/stanford_alpaca-replit/env/lib/python3.10/site-packages/transformers/models/mpt/modeling_mpt.py", line 258, in __init__
self.attn = MptAttention(config)
File "/home/anton/personal/stanford_alpaca-replit/env/lib/python3.10/site-packages/transformers/models/mpt/modeling_mpt.py", line 137, in __init__
self.softmax_scale = config.attn_config.softmax_scale
AttributeError: 'dict' object has no attribute 'softmax_scale'
```
### Who can help?
### Information
### Tasks
### Reproduction
```
model = MptForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
torch_dtype=torch.bfloat16,
use_cache=False,
init_device=f"cuda:{local_rank}",
attn_config=dict(attn_impl="flash", softmax_scale=None), # triton, flash
)
```
### Expected behavior
It should use `MptAttentionConfig`, but a plain dict object is being used instead.
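A hedged sketch of the usage discussed in the comments, passing an `MptAttentionConfig` instead of a plain dict; the checkpoint name and dtype are placeholders, not taken from the issue.
```python
# Sketch of the usage discussed in the comments.
import torch
from transformers import MptForCausalLM
from transformers.models.mpt.configuration_mpt import MptAttentionConfig

attn_config = MptAttentionConfig(softmax_scale=None)  # config object instead of a dict
model = MptForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",        # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
    use_cache=False,
    attn_config=attn_config,
)
```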
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25114/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25113/events
|
https://github.com/huggingface/transformers/pull/25113
| 1,822,358,668 |
PR_kwDOCUB6oc5WcJLb
| 25,113 |
Fix past CI after #24334
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ydshieh thank you for fixing it ๐ ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fix dtype issue in past CI (pytorch 1.11 and 1.10) after #24334
cc @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25113/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25113",
"html_url": "https://github.com/huggingface/transformers/pull/25113",
"diff_url": "https://github.com/huggingface/transformers/pull/25113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25113.patch",
"merged_at": 1690378482000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25112/events
|
https://github.com/huggingface/transformers/issues/25112
| 1,822,275,811 |
I_kwDOCUB6oc5snbjj
| 25,112 |
`Gradient clipping` function is not compatible with upgrade
|
{
"login": "Baibaifan",
"id": 39549453,
"node_id": "MDQ6VXNlcjM5NTQ5NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/39549453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baibaifan",
"html_url": "https://github.com/Baibaifan",
"followers_url": "https://api.github.com/users/Baibaifan/followers",
"following_url": "https://api.github.com/users/Baibaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/Baibaifan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Baibaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Baibaifan/subscriptions",
"organizations_url": "https://api.github.com/users/Baibaifan/orgs",
"repos_url": "https://api.github.com/users/Baibaifan/repos",
"events_url": "https://api.github.com/users/Baibaifan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Baibaifan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The `unscale_gradients` is done in the first screenshot a couple of lines above. I'm not sure what it is you are reporting as a bug.",
"> unscale_gradients\r\n\r\nI run the same code in the same environment, the difference is the version of transformers.\r\n\r\ntransformers4.28.1 running normally\r\n\r\n\r\nThere is a problem with transformers4.31.0\r\n\r\n\r\nI found that the problem lies in the `clip_grad_norm_` function of accelerate. When performing `self.unscale_gradients()` calculation, it seems to support `amp` training. It seems that there is no support for `pure fp16` calculation, but there is no limit. It's weird here, the before and after versions are not compatible. I don't know if my understanding is correct.\r\n\r\n\r\n@sgugger \r\n",
"Hello @Baibaifan, you shouldn't explicitly call model.half() or model.to(torch.float16) when using amp. See this PyTorch forum message: https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372/14\r\n\r\nCould you please make sure to remove such lines and rerun and see if that resolves the issue?",
"> https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372/14\r\n\r\nIf I don't want to run under `amp`, I just want to run under `pure fp16`, the 4.28.1 version is fine, but 4.31.0 will report an error, I want to know why? @pacman100 ",
"@Baibaifan we don't support pure fp16 training in the `Trainer` as it doesn't converge. You can use pure `fp16` evaluation with the `--fp16_full_eval` flag.",
"> @Baibaifan we don't support pure fp16 training in the `Trainer` as it doesn't converge. You can use pure `fp16` evaluation with the `--fp16_full_eval` flag.\r\n\r\nOK, thanks. @sgugger "
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
The `Gradient clipping` function is not compatible across the upgrade.
Transformers 4.28.1:

Transformers 4.30.2:

**The old and new versions do not handle fp16 consistently.**
### Who can help?
@sgugger, @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
you can run with transformers.
```shell
torchrun \
--nproc_per_node $NUM_GPU \
--master_port $PORT_ID \
run_bloom.py \
--model_name_or_path facebook/opt-125m \
--use_fast_tokenizer False \
--train_file $TRAIN \
--validation_file $VAL \
--test_file $TEST \
--max_seq_length 512 \
--output_dir $OUTPUT \
--do_train True\
--do_eval False \
--do_predict False \
--evaluation_strategy no \
--eval_steps 1000000000 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 16 \
--learning_rate 1e-5 \
--optim adamw_torch \
--adam_beta2 0.95 \
--weight_decay 0.1 \
--num_train_epochs 1 \
--lr_scheduler_type constant_with_warmup \
--warmup_ratio 0.1 \
--logging_first_step True \
--logging_steps 10 \
--logging_nan_inf_filter False \
--save_strategy steps \
--save_steps 10000 \
--save_total_limit 3 \
--fp16 True \
--disable_tqdm False \
--log_on_each_node False \
--report_to tensorboard \
```
**important configuration**: --fp16 True
### Expected behavior
Training should run without raising an error.
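A hedged sketch of the configuration supported by the `Trainer` according to the comments (amp training plus pure-fp16 evaluation, rather than pure fp16 training); the output directory is a placeholder.
```python
# Sketch of the supported path from the comments: amp training (fp16=True) plus
# pure-fp16 evaluation (fp16_full_eval=True), instead of pure fp16 training.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",      # placeholder
    fp16=True,             # mixed-precision (amp) training, supported by Trainer
    fp16_full_eval=True,   # full fp16 only at evaluation time
)
```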
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25112/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25111/events
|
https://github.com/huggingface/transformers/issues/25111
| 1,822,255,937 |
I_kwDOCUB6oc5snWtB
| 25,111 |
ValueError: Connection error, and we cannot find the requested files in the cached path
|
{
"login": "teamwong111",
"id": 47851024,
"node_id": "MDQ6VXNlcjQ3ODUxMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/47851024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teamwong111",
"html_url": "https://github.com/teamwong111",
"followers_url": "https://api.github.com/users/teamwong111/followers",
"following_url": "https://api.github.com/users/teamwong111/following{/other_user}",
"gists_url": "https://api.github.com/users/teamwong111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teamwong111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teamwong111/subscriptions",
"organizations_url": "https://api.github.com/users/teamwong111/orgs",
"repos_url": "https://api.github.com/users/teamwong111/repos",
"events_url": "https://api.github.com/users/teamwong111/events{/privacy}",
"received_events_url": "https://api.github.com/users/teamwong111/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Pretty sure this was just a temporary failure! It is back up for me! ",
"Thanks for your reply. But the problem still exists for me. \r\nI debugged the code of Transformers 4.8.1 and found there is an variable called `vocab_files` in PreTrainedTokenizerBase.from_pretrained method.\r\n`vocab_files` contains {'vocab_file': 'https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt', 'tokenizer_file': 'https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json', 'added_tokens_file': 'https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json', 'special_tokens_map_file': 'https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json', 'tokenizer_config_file': 'https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json'}. \r\nBut the website `https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json` and `https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json` show `Entry not found`. I guess it is why the bug appears.",
"Pretty sure this has been fixed since then, you should consider using a more recent version of Transformers, 4.8.1 is more than 2 years old.",
"Hi! I encountered the same problem. When reproducing the code of this paper (https://github.com/rshaojimmy/MultiModal-DeepFake), I used the same version 4.8.1 transformers, and executed this line of code\r\n\r\n`tokenizer = BertTokenizerFast.from_pretrained(args.text_encoder)` \r\n\r\nprompts such an error: \r\n\r\n`ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.`\r\n\r\nI found it has access this link(https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt) but it returns 404\r\nAnd I upgraded the version of transformers to 4.10.1 and reported this error:\r\n\r\n`requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')), '(Request ID: 9709381c-b025-435d-aba5-aef8667c4d1a)')\r\n`\r\n\r\nI saw [this blog](https://blog.csdn.net/weixin_43301333/article/details/128080461), I guess it's a network problem",
"you can just upgrade transformers ..\r\nIt works for me.\r\npip install --upgrade tarnsformers",
"Thank you for the suggestion, indeed I finally solved this problem by downloading the model locally, doing a local load, using `bert-base-uncased` as an example, downloading the following file in the https://huggingface.co/bert-base-uncased/tree/main:\r\n\r\n- config.json\r\n- vocab.txt\r\n- pytorch_model.bin\r\n```\r\ntokenizer = BertTokenizerFast.from_pretrained(args.text_encoder)\r\n```\r\nๆนไธบ\r\n```\r\ntokenizer = BertTokenizerFastfrom_pretrained(\"./bert_localpath/\")\r\n```\r\n`./bert_localpath/` is the path where I put the above file.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> ### System Info\r\n> transformers version: 4.8.1 python version: 3.8 platform: Ubuntu 20.04.1 LTS\r\n> \r\n> ### Who can help?\r\n> @ArthurZucker, @younesbelkada\r\n> \r\n> ### Information\r\n> * [ ] The official example scripts\r\n> * [x] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [ ] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> Just run the below python script\r\n> \r\n> ```\r\n> from transformers import BertTokenizerFast\r\n> tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\n> ```\r\n> \r\n> We can get error messages\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"1.py\", line 3, in <module>\r\n> tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\n> File \"/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1672, in from_pretrained\r\n> resolved_vocab_files[file_id] = cached_path(\r\n> File \"/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py\", line 1329, in cached_path\r\n> output_path = get_from_cache(\r\n> File \"/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py\", line 1552, in get_from_cache\r\n> raise ValueError(\r\n> ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n> ```\r\n> \r\n> And I tried to download the file in the terminal using `wget https://huggingface.co/bert-base-uncased/blob/main/vocab.txt` but also failed. I do not know if there are any safety policies to prevent access.\r\n> \r\n> I checked the error code which is 104. I guess maybe some services have malfunctioned.\r\n> \r\n> I think it works well before about 2023-07-26 4:00 P.M.\r\n> \r\n> ### Expected behavior\r\n> Hope you can give me some advice or fix the bug.\r\n\r\nI encountered this problem in pycharm, but it is runnable in vscode"
] | 1,690 | 1,695 | 1,694 |
NONE
| null |
### System Info
transformers version: 4.8.1
python version: 3.8
platform: Ubuntu 20.04.1 LTS
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just run the below python script
```
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
```
We can get error messages
```
Traceback (most recent call last):
File "1.py", line 3, in <module>
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py", line 1329, in cached_path
output_path = get_from_cache(
File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py", line 1552, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
```
And I tried to download the file in the terminal using `wget https://huggingface.co/bert-base-uncased/blob/main/vocab.txt` but also failed. I do not know if there are any safety policies to prevent access.
I checked the error code which is 104. I guess maybe some services have malfunctioned.
I think it worked well until about 2023-07-26 4:00 P.M.
### Expected behavior
Hope you can give me some advice or fix the bug.
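A hedged sketch of the local-loading workaround described in the comments; `"./bert_localpath/"` is a placeholder directory holding a previously downloaded copy of the tokenizer files.
```python
# Sketch of the workaround from the comments: download config.json, vocab.txt and
# tokenizer.json for bert-base-uncased once, then load from a local directory so
# no network request is made.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("./bert_localpath/")
```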
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25111/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25110/events
|
https://github.com/huggingface/transformers/issues/25110
| 1,822,227,833 |
I_kwDOCUB6oc5snP15
| 25,110 |
Feature request: Adding einops as a Dependency
|
{
"login": "OhadRubin",
"id": 4252994,
"node_id": "MDQ6VXNlcjQyNTI5OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4252994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OhadRubin",
"html_url": "https://github.com/OhadRubin",
"followers_url": "https://api.github.com/users/OhadRubin/followers",
"following_url": "https://api.github.com/users/OhadRubin/following{/other_user}",
"gists_url": "https://api.github.com/users/OhadRubin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OhadRubin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OhadRubin/subscriptions",
"organizations_url": "https://api.github.com/users/OhadRubin/orgs",
"repos_url": "https://api.github.com/users/OhadRubin/repos",
"events_url": "https://api.github.com/users/OhadRubin/events{/privacy}",
"received_events_url": "https://api.github.com/users/OhadRubin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is actually a (recent) discussion\r\n\r\n> There have been lots of discussions on this in the past, the TLDR is that we aren't supporting it cause it causes issues with things like ONNX\r\n\r\nand the approach is more\r\n\r\n> just re-wrote that part in pure pytorch,\r\n\r\n",
"For me there is no reason to add a dependency when `einops` would actually be detrimental to all the work done to optimize inference (quantization, ONNX etc.). We can revisit this if the support end up being on par with classic PyTorch operations, but we shouldn't make the code easier to read if it's not as efficient/supported by the ecosystem.",
"einops has torch.compile support, why should it be a incompatible with onnx?\r\nhttps://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops",
"It might be the case some point in the past (I am not the one involved in this topic previously). Not sure the status at this moment. A further investigation into this to make sure the whole ecosystem will run smoothly with models using `einops` is necessary before we add it as a dependency. But the teams have their own priorities, and we would like to see how the community reacts to this feature request first. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Should we write a test to see if einops works with the ecosystem?",
"@Bachstelze please do!"
] | 1,690 | 1,705 | 1,693 |
NONE
| null |
### Feature request
I propose that the einops library be added as a dependency in the HuggingFace Transformers library. einops (Einstein Notation Operations) is a Python library that provides a more expressive language for tensor operations. This would offer highly readable and maintainable code for complex tensor reshaping and rearranging operations.
einops GitHub page: https://github.com/arogozhnikov/einops
### Motivation
The addition of einops as a dependency would greatly facilitate the integration of new models that already use it into the Transformers library. Adding einops would require less refactoring when adding these models, make the code easier to read, and decrease the amount of time spent on understanding and maintaining the code.
Given the potential benefits that the inclusion of einops could bring to the Transformers library, and considering its impressive community support highlighted by over 7,000 stars on GitHub, I think that this proposal merits serious consideration. I suggest that we put this feature request up for a community vote. This would allow all contributors to weigh in on the decision.
### Your contribution
.
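As a hedged illustration of the trade-off discussed in the comments (readability vs. staying in plain PyTorch for ONNX/quantization support), the same head reordering written both ways; the tensor shapes are arbitrary.
```python
# Illustration only: the same reshape written with einops and in plain PyTorch.
import torch
from einops import rearrange

x = torch.randn(2, 16, 8, 64)                   # (batch, seq, heads, head_dim)
y_einops = rearrange(x, "b s h d -> b h s d")
y_torch = x.permute(0, 2, 1, 3)                 # pure-PyTorch equivalent
assert torch.equal(y_einops, y_torch)
```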
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25110/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25110/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25109/events
|
https://github.com/huggingface/transformers/pull/25109
| 1,822,214,249 |
PR_kwDOCUB6oc5Wbp3e
| 25,109 |
🚨🚨🚨 Change default from `adamw_hf` to `adamw_torch` 🚨🚨🚨
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR changes the default from `adamw_hf` to `adamw_torch`, as noted in https://github.com/huggingface/transformers/issues/25006 which fixes some breaking issues. Note that https://github.com/huggingface/transformers/issues/22141 still needs to be fulfilled once torch 2.1.0 is released (sometime in the next few months I imagine, as we're on 2.0.1) and swap it to be `ADAMW_TORCH_FUSED`.
Fixes # (issue)
Solves #25006
## Maintaining old behavior
To keep the old behavior prior to this change, ensure that you pass `"adamw_hf"` as the `optim` in your `TrainingArguments`
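For example, a minimal sketch (the `output_dir` value is just a placeholder):
```python
from transformers import TrainingArguments

# explicitly request the previous default optimizer implementation
args = TrainingArguments(output_dir="out", optim="adamw_hf")
```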
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25109/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25109/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25109",
"html_url": "https://github.com/huggingface/transformers/pull/25109",
"diff_url": "https://github.com/huggingface/transformers/pull/25109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25109.patch",
"merged_at": 1690463488000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25108/events
|
https://github.com/huggingface/transformers/issues/25108
| 1,822,174,979 |
I_kwDOCUB6oc5snC8D
| 25,108 |
Wrong default values according to docstrings?
|
{
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ebezzam \r\n\r\nI am not sure I understand the question. Could you describe it more precisely? Thanks.",
"Hi @ydshieh thank you for your quick response.\r\n\r\nThe docstring of [ViTConfig](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L35) says that the default values for [`hidden_dropout_prob`](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L58) and [`attention_probs_dropout_prob`](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L60) are `0.1`, but in the code (linked above) they are set to `0.0`.",
"Thanks a lot @ebezzam , super clear now ๐ค \r\n\r\nWill take a look\r\n\r\n",
"Hi @ebezzam \r\n\r\nFrom [the original ViT codebase](https://github.com/google-research/vision_transformer/blob/ac6e056f9da686895f9f0f6ac026d3b5a464e59e/vit_jax/configs/models.py#L123), they should be `0.0` for the `ViT-B/16` model.\r\n\r\nThe value `0.1` in the docstring is likely from a copy-paste.\r\n\r\nWould you like to open a PR to help us correct those 2 values in the docstring? Thanks.",
"@ydshieh thanks for looking into that. Yes I can open a PR and link to this issue.",
"Thanks a lot ๐ค ",
"@ydshieh done!",
"Thank you again @ebezzam ๐ค "
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L100
https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L101
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25108/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25107/events
|
https://github.com/huggingface/transformers/pull/25107
| 1,822,134,342 |
PR_kwDOCUB6oc5WbYZx
| 25,107 |
add util for ram efficient loading of model when using fsdp
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@pacman100 Hi, I've tried to run Llama2 with the two PR but it seems something went wrong. Plz check, thx!\r\n\r\nWhile copying the parameter named \"model.layers.29.self_attn.v_proj.weight\", whose dimensions in the model are torch.Size([4096, 4096]) and whose dimensions in the checkpoint are torch.Size([4096, 4096]), an exception occurred : ('Cannot copy out of meta tensor; no data!\\nException raised from copy_impl at ../aten/src/ATen/native/Copy.cpp:188 (most recent call first):\\nframe #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f649c6c04d7 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so)\\nframe #1: <unknown function> + 0x11c32e4 (0x7f64ea8552e4 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #2: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 0x62 (0x7f64eb3deb32 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #3: at::_ops::copy_::redispatch(c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool) + 0x7b (0x7f64ebff07db in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #4: <unknown function> + 0x5443145 (0x7f64eead5145 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #5: at::_ops::copy_::redispatch(c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool) + 0x7b (0x7f64ebff07db in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #6: <unknown function> + 0x54454f4 (0x7f64eead74f4 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #7: at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) + 0x15f (0x7f64ec04dadf in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)\\nframe #8: <unknown function> + 0x4cdbc9 (0x7f65030ddbc9 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so)\\nframe #9: <unknown function> + 0x1453a3 (0x55e385cfb3a3 in /opt/conda/bin/python)\\nframe #10: _PyEval_EvalFrameDefault + 0x6f3 (0x55e385ce9b13 in /opt/conda/bin/python)\\nframe #11: <unknown function> + 0x1515df (0x55e385d075df in /opt/conda/bin/python)\\nframe #12: _PyEval_EvalFrameDefault + 0x2b8f (0x55e385cebfaf in /opt/conda/bin/python)\\nframe #13: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #14: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #15: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #16: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #17: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #18: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #19: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #20: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #21: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #22: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #23: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #24: _PyEval_EvalFrameDefault + 0x12ff (0x55e385cea71f in /opt/conda/bin/python)\\nframe #25: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #26: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #27: <unknown function> + 0x150d7c (0x55e385d06d7c in /opt/conda/bin/python)\\nframe #28: _PyEval_EvalFrameDefault + 
0x12ff (0x55e385cea71f in /opt/conda/bin/python)\\nframe #29: <unknown function> + 0x150d7c (0x55e385d06d7c in /opt/conda/bin/python)\\nframe #30: _PyEval_EvalFrameDefault + 0x12ff (0x55e385cea71f in /opt/conda/bin/python)\\nframe #31: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #32: PyObject_Call + 0xb8 (0x55e385d07a08 in /opt/conda/bin/python)\\nframe #33: _PyEval_EvalFrameDefault + 0x2b8f (0x55e385cebfaf in /opt/conda/bin/python)\\nframe #34: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #35: _PyEval_EvalFrameDefault + 0x12ff (0x55e385cea71f in /opt/conda/bin/python)\\nframe #36: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #37: _PyEval_EvalFrameDefault + 0x304 (0x55e385ce9724 in /opt/conda/bin/python)\\nframe #38: _PyFunction_Vectorcall + 0x6f (0x55e385cfa03f in /opt/conda/bin/python)\\nframe #39: _PyEval_EvalFrameDefault + 0x4a35 (0x55e385cede55 in /opt/conda/bin/python)\\nframe #40: <unknown function> + 0x1e64d2 (0x55e385d9c4d2 in /opt/conda/bin/python)\\nframe #41: PyEval_EvalCode + 0x87 (0x55e385d9c417 in /opt/conda/bin/python)\\nframe #42: <unknown function> + 0x219ed9 (0x55e385dcfed9 in /opt/conda/bin/python)\\nframe #43: <unknown function> + 0x2147e4 (0x55e385dca7e4 in /opt/conda/bin/python)\\nframe #44: <unknown function> + 0x98214 (0x55e385c4e214 in /opt/conda/bin/python)\\nframe #45: _PyRun_SimpleFileObject + 0x1af (0x55e385dc4b1f in /opt/conda/bin/python)\\nframe #46: _PyRun_AnyFileObject + 0x43 (0x55e385dc46e3 in /opt/conda/bin/python)\\nframe #47: Py_RunMain + 0x39f (0x55e385dc189f in /opt/conda/bin/python)\\nframe #48: Py_BytesMain + 0x39 (0x55e385d8f709 in /opt/conda/bin/python)\\nframe #49: __libc_start_main + 0xf3 (0x7f6534704083 in /usr/lib/x86_64-linux-gnu/libc.so.6)\\nframe #50: <unknown function> + 0x1d9611 (0x55e385d8f611 in /opt/conda/bin/python)\\n',).",
"Hello @lwmlyy, I'm able to run 70B Llama on 32 A100 80GB GPUs with it without any issues. Can you share the config, minimal example and launch command?",
"@pacman100 I run the code in https://github.com/facebookresearch/llama-recipes/pull/77 with the following command: \r\ntorchrun --nnodes 1 --nproc_per_node 4 llama_finetuning.py --enable_fsdp --pure_bf16 --model_name ../Llama-2-7b-hf --batch_size_training 1 --micro_batch_size 1 --dist_checkpoint_root_folder ../Llama-2-7b-hf --dist_checkpoint_folder fine-tuned\r\n\r\nCould you also share the script for running Llama-70b as you mentioned?",
"Hello @lwmlyy, follow this: https://github.com/facebookresearch/llama-recipes/pull/77#issuecomment-1674290658",
"@pacman100 As you mentioned, if the model is loaded with accelerate, no code change is needed. I wonder why the error shows up. Could you give some advice?",
"> @pacman100 I run the code in https://github.com/facebookresearch/llama-recipes/pull/77 with the following command: \n> torchrun --nnodes 1 --nproc_per_node 4 llama_finetuning.py --enable_fsdp --pure_bf16 --model_name ../Llama-2-7b-hf --batch_size_training 1 --micro_batch_size 1 --dist_checkpoint_root_folder ../Llama-2-7b-hf --dist_checkpoint_folder fine-tuned\n> \n> Could you also share the script for running Llama-70b as you mentioned?\n\nHello, you aren't launching via `accelerate launch` (you are using torchrun) and as such the env variable `ACCELERATE_USE_FSDP` isn't enabled. ",
"> > @pacman100 I run the code in [facebookresearch/llama-recipes#77](https://github.com/facebookresearch/llama-recipes/pull/77) with the following command:\r\n> > torchrun --nnodes 1 --nproc_per_node 4 llama_finetuning.py --enable_fsdp --pure_bf16 --model_name ../Llama-2-7b-hf --batch_size_training 1 --micro_batch_size 1 --dist_checkpoint_root_folder ../Llama-2-7b-hf --dist_checkpoint_folder fine-tuned\r\n> > Could you also share the script for running Llama-70b as you mentioned?\r\n> \r\n> Hello, you aren't launching via `accelerate launch` (you are using torchrun) and as such the env variable `ACCELERATE_USE_FSDP` isn't enabled.\r\n\r\n@pacman100 Hi๏ผI meet the same error when using the following command:\r\naccelerate launch llama_finetuning.py --enable_fsdp --model_name ../Llama-2-7b-hf --batch_size_training 1 --micro_batch_size 1 --dist_checkpoint_root_folder ../Llama-2-7b-hf --dist_checkpoint_folder fine-tuned\r\n\r\nIt works fine with the command:\r\nACCELERATE_USE_FSDP=True accelerate launch llama_finetuning.py --enable_fsdp --model_name ../Llama-2-7b-hf --batch_size_training 1 --micro_batch_size 1 --dist_checkpoint_root_folder ../Llama-2-7b-hf --dist_checkpoint_folder fine-tuned\r\n\r\nBut the loss is nan.",
"Hello, you aren't using the accelerate integration of FSDP and you are mixing llama recipe implementation which doesn't use Accelerate. Please refer to the Accelerate docs on the proper way to use FSDP with Accelerate. Also, please raise a separate issue.",
"Hi @pacman100 ,\r\n\r\nI am trying to Train Llama 70B Model in FSDP, I was going through your repo https://github.com/pacman100/ram_efficient_fsdp/blob/main/train.py, code is failing when trying to import this function load_pretrained_model_only_on_rank0 getting error \"_ImportError: cannot import name 'load_pretrained_model_only_on_rank0' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)_\". Tried to check this function in the Transformer Repo but couldn't find one in the main branch.\r\n\r\nCan you please help me, how I can execute your code.\r\n\r\nRegards \r\nNabarun Barua ",
"I'm currently using transformer v.4.37.2 and accelerate v.0.26.1 and am training on one machine with 2 GPU processors. I'm seeing the Mistral 7B model being loaded onto CPU RAM x2 (once for each processor). I don't understand why since this fix was released with earlier versions transformer v.4.32.0 and accelerate v.0.22.0 and should load the model onto CPU only once, independent of the number of processors. Any insight anyone has is super appreciated! \r\n\r\nThese are the settings in my fsdp config file:\r\n`compute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndistributed_type: FSDP\r\ndowncast_bf16: 'no'\r\nfsdp_config:\r\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\r\n fsdp_backward_prefetch: BACKWARD_PRE\r\n fsdp_cpu_ram_efficient_loading: true\r\n fsdp_forward_prefetch: false\r\n fsdp_offload_params: false\r\n fsdp_sharding_strategy: 1\r\n fsdp_state_dict_type: SHARDED_STATE_DICT\r\n fsdp_sync_module_states: true\r\n fsdp_use_orig_params: true\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 1\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false`",
"Hello @pkaercher,\r\n\r\nTo use this feature, you would need to use Accelerate config for FSDP along with Accelerate launcher. For more details on how to use this, please refer https://huggingface.co/docs/transformers/trainer#accelerate-and-trainer",
"Hi @pacman100,\r\nThanks for your reply. I am using Accelerate config along with FSDP (see my config file in **my post above** that I created with `accelerate config --config_file \"fsdp_config.yaml\"`. I am running my script with the command `accelerate launch --config_file fsdp_config.yaml domain_adapt.py`. Attached is my `domain_adapt.py` script. When I run, I see the CPU RAM go up to 65 GB for the 7B Mistral model, which is twice as much space as it should take up given 7Billion x 4Bytes = 28GB. Twice that (loading the model once for each of the 2 GPU processors I'm using) gives 56 GB, which, plus the space taken up by my environment packages and my dataset would be roughly 65 GB makes me think that accelerate is loading the Mistral model into my CPU RAM x2, which it shouldn't according to this fix.\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom peft import LoraConfig, get_peft_model \r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom transformers import DataCollatorForLanguageModeling\r\nfrom transformers import Trainer, TrainingArguments\r\nfrom transformers import DataCollatorForLanguageModeling\r\n\r\n# Define training parameters\r\n\r\nMODEL_CHECKPOINT = 'mistralai/Mistral-7B-Instruct-v0.2'\r\nDATASET_PATH = '/appdata/data'\r\n# SAVE_TO_PATH = '/appdata/embedding_models'\r\nMASK_WHOLE_WORDS = False\r\n\r\n\r\nARGS = {\r\n 'lora_alpha': 16,\r\n 'lora_dropout': 0.1,\r\n 'lora_r': 64,\r\n 'output_dir': '/appdata/results',\r\n 'per_device_train_batch_size': 1,\r\n 'per_device_eval_batch_size': 1,\r\n 'gradient_accumulation_steps': 16, \r\n 'optim': \"paged_adamw_32bit\",\r\n 'evaluation_strategy': 'steps', # \"epoch\", default is 'no'\r\n 'save_steps': 50, # defaults to 500\r\n 'logging_steps': 50, # defaults to 500\r\n 'num_train_epochs': 4, # default is 3\r\n 'learning_rate': 1e-4,\r\n 'max_grad_norm': 0.3, # default is 1\r\n 'max_steps': 500, # training will only run to this number of steps; overrides num_train_epochs\r\n 'warmup_ratio': 0.03,\r\n 'lr_scheduler_type': \"cosine\", # default is \"linear\"\r\n }\r\n\r\n\r\n# Define functions\r\n\r\ndef run_domain_adaptation(model_checkpoint, dataset_path, args): # training_dataset_name, mask_whole_words\r\n # Import model and tokenizer\r\n tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\n model = AutoModelForCausalLM.from_pretrained(model_checkpoint)\r\n\r\n # Import and tokenize the data\r\n data = load_dataset(dataset_path , split='train')\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = \"right\"\r\n tokenizer.mask_token = '<MASK>'\r\n\r\n\r\n def tokenize_function(examples, tokenizer=tokenizer):\r\n \"\"\"examples is a dataset object\"\"\"\r\n result = tokenizer(examples[\"text\"])\r\n return result\r\n\r\n\r\n tokenized_datasets = data.map(tokenize_function, batched=True, remove_columns=[\"text\"])\r\n collator = DataCollatorForLanguageModeling(mlm=True, mlm_probability=0.15, tokenizer=tokenizer)\r\n\r\n peft_config = LoraConfig(\r\n lora_alpha=args['lora_alpha'],\r\n lora_dropout=args['lora_dropout'],\r\n r=args['lora_r'],\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n target_modules=[\r\n \"Wqkv\",\r\n \"out_proj\",\r\n \"up_proj\",\r\n \"down_proj\",\r\n ])\r\n\r\n training_arguments = TrainingArguments(\r\n # output_dir=args['model_output_dir'],\r\n output_dir=args['output_dir'],\r\n per_device_train_batch_size=args['per_device_train_batch_size'],\r\n per_device_eval_batch_size=args['per_device_eval_batch_size'],\r\n 
gradient_accumulation_steps=args['gradient_accumulation_steps'],\r\n logging_steps=args['logging_steps'],\r\n save_strategy= 'epoch',\r\n # evaluation_strategy=args['evaluation_strategy'],\r\n num_train_epochs=args['num_train_epochs'],\r\n learning_rate=args['learning_rate'],\r\n bf16=True,\r\n # fsdp='full_shard',\r\n max_grad_norm=args['max_grad_norm'],\r\n warmup_ratio=args['warmup_ratio'],\r\n group_by_length=True,\r\n report_to='none',\r\n log_level='debug',\r\n )\r\n\r\n # Train\r\n model = get_peft_model(model, peft_config)\r\n trainer = Trainer(\r\n model=model,\r\n tokenizer=tokenizer,\r\n data_collator=collator,\r\n train_dataset=tokenized_datasets,\r\n # eval_dataset=tokenized_datasets['validation'],\r\n args=training_arguments,\r\n )\r\n\r\n trainer.train()\r\n\r\n # Save the model and tokenizer\r\n model_name = f\"{model_checkpoint.split('/')[1]}\" # _{training_dataset_name}\"\r\n trainer.save_model(fr\"../embedding_models/{model_name}\")\r\n tokenizer.save_pretrained(fr\"../embedding_models/{model_name}\")\r\n # trainer.save_model(SAVE_TO_PATH)\r\n\r\nif __name__ == '__main__':\r\n run_domain_adaptation(model_checkpoint=MODEL_CHECKPOINT,\r\n dataset_path=DATASET_PATH,\r\n # training_dataset_name=TRAINING_DATASET_NAME,\r\n args=ARGS)```",
"Hello @pkaercher,\r\n\r\nThank you for the above code, this helps. You are calling `from_pretrained` method before initializing the distributed process group. As such, `from_pretrained` has no info whether a distributed training run is in place and as such doesn't know which process is rank 0 or remaining ranks. For this to work when using Trainer, please create an instance of `TrainingArguments` before calling `from_pretrained` because `TrainingArguments` instance initializes the distributed process group.\r\n\r\nUpdating the docs here https://github.com/huggingface/accelerate/pull/2430 with this information.",
"Thank you @pacman100 ! I did as you suggested and saw the max CPU RAM usage during loading of the model drop from 65.2 GB to 47.2 GB, so it looks like it's working now."
] | 1,690 | 1,707 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Fixes an issue explained in https://github.com/pytorch/pytorch/issues/105840 when using FSDP for training very large models. Should be merged after https://github.com/huggingface/accelerate/pull/1777
Currently, when using FSDP, the model is loaded completely on CPU for each of the N processes, leading to huge CPU RAM usage. When training models like Falcon-40B with FSDP on a DGX node with 8 GPUs, this exhausts CPU RAM because each process loads 160GB (40B x 4 bytes (FP32)) into CPU RAM, for a total requirement of 160*8=1280GB, which results in the script getting killed for running out of CPU RAM.
To combat this, we load the model only on rank 0 and keep it on the meta device when rank!=0. We then use a no-op `param_init_fn` along with `sync_module_states=True` so that FSDP properly initializes the weights on the other ranks and broadcasts the params from rank 0 to them.
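For illustration, here is a rough sketch of the idea (not the exact code in this PR; it assumes the distributed process group is already initialized, and the helper name is made up):
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoConfig, AutoModelForCausalLM

def load_for_fsdp(model_name: str):
    # hypothetical helper: only rank 0 materializes the pretrained weights
    if dist.get_rank() == 0:
        model = AutoModelForCausalLM.from_pretrained(model_name)
    else:
        # other ranks build the model on the meta device (no CPU RAM used)
        config = AutoConfig.from_pretrained(model_name)
        with torch.device("meta"):
            model = AutoModelForCausalLM.from_config(config)
    return FSDP(
        model,
        device_id=torch.cuda.current_device(),
        sync_module_states=True,  # broadcast rank-0 params to the other ranks
        # no-op style init on non-zero ranks: only allocate empty storage for
        # the meta params; the real values arrive via the broadcast above
        param_init_fn=None if dist.get_rank() == 0
        else (lambda module: module.to_empty(device=torch.device("cuda"))),
    )
```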
Usage:
**No user-facing changes**:
Post this PR:
```
accelerator.process_index=0 GPU Memory before entering the loading : 0
accelerator.process_index=0 GPU Memory consumed at the end of the loading (end-begin): 0
accelerator.process_index=0 GPU Peak Memory consumed during the loading (max-begin): 0
accelerator.process_index=0 GPU Total Peak Memory consumed during the loading (max): 0
accelerator.process_index=0 CPU Memory before entering the loading : 926
accelerator.process_index=0 CPU Memory consumed at the end of the loading (end-begin): 26415
accelerator.process_index=0 CPU Peak Memory consumed during the loading (max-begin): 31818
accelerator.process_index=0 CPU Total Peak Memory consumed during the loading (max): 32744
accelerator.process_index=0 model.lm_head.weight=Parameter containing:
tensor([[-0.0179, 0.0201, -0.0273, ..., -0.0275, -0.0396, -0.0131],
[-0.0510, -0.0079, -0.0383, ..., -0.0481, 0.0581, 0.0282],
[-0.0217, -0.0216, -0.0064, ..., -0.0508, 0.0554, -0.0013],
...,
[ 0.0425, 0.0452, -0.0131, ..., 0.0019, 0.0476, 0.0342],
[-0.0170, -0.0085, 0.0449, ..., -0.0074, 0.0178, 0.0043],
[-0.0439, -0.0859, -0.0820, ..., 0.0130, 0.0669, 0.0884]],
requires_grad=True)
accelerator.process_index=1 GPU Memory before entering the loading : 0
accelerator.process_index=1 GPU Memory consumed at the end of the loading (end-begin): 0
accelerator.process_index=1 GPU Peak Memory consumed during the loading (max-begin): 0
accelerator.process_index=1 GPU Total Peak Memory consumed during the loading (max): 0
accelerator.process_index=1 CPU Memory before entering the loading : 933
accelerator.process_index=1 CPU Memory consumed at the end of the loading (end-begin): 10
accelerator.process_index=1 CPU Peak Memory consumed during the loading (max-begin): 573
accelerator.process_index=1 CPU Total Peak Memory consumed during the loading (max): 1506
accelerator.process_index=1 model.lm_head.weight=Parameter containing:
tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], requires_grad=True)
accelerator.process_index=0 GPU Memory before entering the prepare : 0
accelerator.process_index=0 GPU Memory consumed at the end of the prepare (end-begin): 13202
accelerator.process_index=0 GPU Peak Memory consumed during the prepare (max-begin): 15458
accelerator.process_index=0 GPU Total Peak Memory consumed during the prepare (max): 15458
accelerator.process_index=0 CPU Memory before entering the prepare : 27345
accelerator.process_index=0 CPU Memory consumed at the end of the prepare (end-begin): -26394
accelerator.process_index=0 CPU Peak Memory consumed during the prepare (max-begin): 0
accelerator.process_index=0 CPU Total Peak Memory consumed during the prepare (max): 27345
FullyShardedDataParallel(
(_fsdp_wrapped_module): RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x FullyShardedDataParallel(
(_fsdp_wrapped_module): DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
)
accelerator.process_index=1 GPU Memory before entering the prepare : 0
accelerator.process_index=1 GPU Memory consumed at the end of the prepare (end-begin): 13202
accelerator.process_index=1 GPU Peak Memory consumed during the prepare (max-begin): 15458
accelerator.process_index=1 GPU Total Peak Memory consumed during the prepare (max): 15458
accelerator.process_index=1 CPU Memory before entering the prepare : 945
accelerator.process_index=1 CPU Memory consumed at the end of the prepare (end-begin): 4
accelerator.process_index=1 CPU Peak Memory consumed during the prepare (max-begin): 4
accelerator.process_index=1 CPU Total Peak Memory consumed during the prepare (max): 949
accelerator.process_index=1 model.lm_head.weight=Parameter containing:
tensor([[-0.0179, 0.0201, -0.0273, ..., -0.0275, -0.0396, -0.0131],
[-0.0510, -0.0079, -0.0383, ..., -0.0481, 0.0581, 0.0282],
[-0.0217, -0.0216, -0.0064, ..., -0.0508, 0.0554, -0.0013],
...,
[ 0.0425, 0.0452, -0.0131, ..., 0.0019, 0.0476, 0.0342],
[-0.0170, -0.0085, 0.0449, ..., -0.0074, 0.0178, 0.0043],
[-0.0439, -0.0859, -0.0820, ..., 0.0130, 0.0669, 0.0884]],
device='cuda:1', requires_grad=True)
accelerator.process_index=0 model.lm_head.weight=Parameter containing:
tensor([[-0.0179, 0.0201, -0.0273, ..., -0.0275, -0.0396, -0.0131],
[-0.0510, -0.0079, -0.0383, ..., -0.0481, 0.0581, 0.0282],
[-0.0217, -0.0216, -0.0064, ..., -0.0508, 0.0554, -0.0013],
...,
[ 0.0425, 0.0452, -0.0131, ..., 0.0019, 0.0476, 0.0342],
[-0.0170, -0.0085, 0.0449, ..., -0.0074, 0.0178, 0.0043],
[-0.0439, -0.0859, -0.0820, ..., 0.0130, 0.0669, 0.0884]],
device='cuda:0', requires_grad=True)
```
**So you can see that during loading, rank 1 doesn't take up any additional CPU RAM, and the performance of both setups matches.**
To Do:
- [ ] Add docs in the FSDP section
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25107/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25107",
"html_url": "https://github.com/huggingface/transformers/pull/25107",
"diff_url": "https://github.com/huggingface/transformers/pull/25107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25107.patch",
"merged_at": 1692289415000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25106/events
|
https://github.com/huggingface/transformers/pull/25106
| 1,822,061,831 |
PR_kwDOCUB6oc5WbIrt
| 25,106 |
Fix `PvtModelIntegrationTest::test_inference_fp16`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Failing
```bash
FAILED tests/models/pvt/test_modeling_pvt.py::PvtModelIntegrationTest::test_inference_fp16 - ValueError: PvtForImageClassification does not support `device_map='auto'`. To implement support, the modelclass needs to implement the `_no_split_modules` attribute.
```
Fixing this test will also fix the other 2 failures shown in the report (which are due to a bad GPU state or something similar).
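For reference, a hypothetical sketch of what declaring the attribute looks like (the class name and value here are illustrative only; the actual change is in the diff):
```python
from transformers import PreTrainedModel

class MyPvtLikePreTrainedModel(PreTrainedModel):
    # modules listed here are kept on a single device when device_map="auto"
    # is used; an empty list means the model may be split anywhere
    _no_split_modules = []
```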
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25106/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25106",
"html_url": "https://github.com/huggingface/transformers/pull/25106",
"diff_url": "https://github.com/huggingface/transformers/pull/25106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25106.patch",
"merged_at": 1690376266000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25105/events
|
https://github.com/huggingface/transformers/pull/25105
| 1,821,757,803 |
PR_kwDOCUB6oc5WaHEv
| 25,105 |
fix get_keys_to_not_convert() to return correct modules for full precision inference
|
{
"login": "ranchlai",
"id": 5043767,
"node_id": "MDQ6VXNlcjUwNDM3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5043767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranchlai",
"html_url": "https://github.com/ranchlai",
"followers_url": "https://api.github.com/users/ranchlai/followers",
"following_url": "https://api.github.com/users/ranchlai/following{/other_user}",
"gists_url": "https://api.github.com/users/ranchlai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranchlai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranchlai/subscriptions",
"organizations_url": "https://api.github.com/users/ranchlai/orgs",
"repos_url": "https://api.github.com/users/ranchlai/repos",
"events_url": "https://api.github.com/users/ranchlai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranchlai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @younesbelkada for a first look",
"@younesbelkada Looks like that you have fixed this in #25047 two days ago. Great work. My bad for not using latest code. \r\n",
"Alright. I am diving deeper into the code.\r\n\r\nCould you try this code: \r\n```python\r\nfrom transformers import AutoConfig, MptForCausalLM\r\nfrom transformers import AutoModelForCausalLM # many people use this for convenience\r\n\r\nfrom transformers.utils.bitsandbytes import get_keys_to_not_convert\r\n\r\nfrom accelerate import init_empty_weights\r\n\r\nconfig = AutoConfig.from_pretrained(\"mosaicml/mpt-7b\", trust_remote_code=True)\r\n\r\nwith init_empty_weights():\r\n model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)\r\n\r\n\r\nprint(get_keys_to_not_convert(model))\r\n\r\n>>> [] # this is because there is problem checking \"if (not has_tied_params) and is_base_model\" for remote code. \r\n\r\n\r\n\r\nwith init_empty_weights():\r\n model = MptForCausalLM(config) # \r\n>>> AttributeError: 'MPTConfig' object has no attribute 'hidden_size'\r\n \r\nprint(get_keys_to_not_convert(model))\r\n```\r\nSee my [another issue for checking tied parameters](https://github.com/huggingface/accelerate/issues/1761)\r\n ",
"if you comment out the following lines:\r\n```python\r\n if (not has_tied_params) and is_base_model:\r\n return []\r\n```\r\n\r\nyou will get the following : \r\n```\r\n>> ['transformer.norm_f']\r\n```\r\nBut the new fix still get: \r\n```\r\n>> ['transformer.wte']\r\n```\r\n\r\nSo maybe it is a good practice to utilize get_output_embeddings() in both find tied_parameters and checking lm_head? \r\n@sgugger @younesbelkada any comments? See again [here](https://github.com/huggingface/accelerate/issues/1761)\r\n\r\n",
"Sure, I am happy to add the test. \r\nActually I have done tests similar to yours above, for the following models:\r\n```python\r\nmodel_names = [\r\n \"mosaicml/mpt-7b\",\r\n \"mosaicml/mpt-7b-storywriter\",\r\n \"mosaicml/mpt-7b-8k-chat\",\r\n \"THUDM/chatglm2-6b\",\r\n \"THUDM/chatglm-6b\",\r\n \"lmsys/longchat-7b-16k\",\r\n \"lmsys/fastchat-t5-3b-v1.0\",\r\n \"TheBloke/koala-13B-HF\",\r\n \"WizardLM/WizardLM-13B-V1.0\",\r\n \"OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5\",\r\n \"nomic-ai/gpt4all-13b-snoozy\",\r\n \"WizardLM/WizardLM-13B-V1.0\",\r\n \"stabilityai/FreeWilly2\",\r\n \"OpenAssistant/llama2-13b-orca-8k-3319\",\r\n \"JosephusCheung/Guanaco\",\r\n \"lmsys/vicuna-7b-v1.3\",\r\n \"baichuan-inc/Baichuan-7B\",\r\n \"openlm-research/open_llama_3b\",\r\n \"EleutherAI/gpt-j-6b\",\r\n \"facebook/opt-1.3b\",\r\n \"tiiuae/falcon-7b\",\r\n \"tiiuae/falcon-40b-instruct\",\r\n ```\r\nMaybe too many. I will stick to your choice of models. ",
"> \r\nAdded test. Please would you kindly review. @younesbelkada \r\n",
"> \r\n\r\nThanks a lot. any problems please let me know ",
"Ok thanks, I will keep you updated",
"Indeed there is not `output_embeddings` for `AutoModelForSequenceClassification`. \r\nWe can pass the test the model by adding\r\n ```python\r\nif output_embeddings is None: \r\n output_embeddings = list(model.modules())[-1]\r\n```\r\nDo you think we we can add this? \r\n\r\nOne more thing, I am thinking, is it important to keep classification head (not originally `lm_head`) in full precision, as the matrix is not that large(thus quantization error might be not that large too)? Or, is it useful to quantize classification models to 8bits in the first place, as they are usually not that large. \r\n\r\n\r\n",
"Make sense. Minimum change is much safer. Will update the code.",
"Thanks very much @ranchlai !!",
"@younesbelkada You are very welcome. I have tried to keep minimum change and fix the mpt problem. Please check if that's what you mean. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25105). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes `get_keys_to_not_convert()` so that it returns the correct modules, i.e. those that should not be quantized for numerical stability reasons.
For GPT2 and bloom, both the current version (1st output) and the new version (2nd output) work well.
GPT2
```
>>> filtered_module_names_old is ['transformer.wte', 'lm_head', 'lm_head'] # both are correct, duplicated though
>>> filtered_module_names is ['lm_head', 'transformer.wte'] # both are correct
```
bloom
```
>>> filtered_module_names_old is ['lm_head', 'transformer.word_embeddings', 'lm_head'] # both are correct, duplicated though
>>> filtered_module_names is ['transformer.word_embeddings', 'lm_head'] # both are correct
```
But for ChatGLM and MPT (and possibly others), this PR finds the correct modules while the current code doesn't.
MPT
```
>>> filtered_module_names_old is ['transformer.wte', 'transformer.output_wte', 'transformer'] # this is wrong
>>> filtered_module_names is ['transformer.output_wte', 'transformer.wte'] # this is correct
```
ChatGLM2
```
>>> filtered_module_names_old is ['transformer'] # this is wrong
>>> filtered_module_names is ['transformer.output_layer'] # this is correct
```
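For reference, a minimal sketch of how the lists above can be reproduced (shown for bloom; the checkpoint name is only an example):
```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.utils.bitsandbytes import get_keys_to_not_convert

config = AutoConfig.from_pretrained("bigscience/bloom-560m")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# with this PR the expected result is ['transformer.word_embeddings', 'lm_head']
print(get_keys_to_not_convert(model))
```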
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25105/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25105",
"html_url": "https://github.com/huggingface/transformers/pull/25105",
"diff_url": "https://github.com/huggingface/transformers/pull/25105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25105.patch",
"merged_at": 1690964512000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25104/events
|
https://github.com/huggingface/transformers/issues/25104
| 1,821,740,143 |
I_kwDOCUB6oc5slYxv
| 25,104 |
Inconsistent Rotation Base for Dynamic NTK Scaling RoPE
|
{
"login": "NormXU",
"id": 33339685,
"node_id": "MDQ6VXNlcjMzMzM5Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/33339685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NormXU",
"html_url": "https://github.com/NormXU",
"followers_url": "https://api.github.com/users/NormXU/followers",
"following_url": "https://api.github.com/users/NormXU/following{/other_user}",
"gists_url": "https://api.github.com/users/NormXU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NormXU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NormXU/subscriptions",
"organizations_url": "https://api.github.com/users/NormXU/orgs",
"repos_url": "https://api.github.com/users/NormXU/repos",
"events_url": "https://api.github.com/users/NormXU/events{/privacy}",
"received_events_url": "https://api.github.com/users/NormXU/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker who works more on this (kind of) model(s).\r\n\r\nHowever, @NormXU, could you provide a short but self-contained code snippet to demonstrate the `inconsistency` you mentioned? Thanks.",
"Sure\r\n\r\nThe inconsistency happens here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a5cc30d72ae2dc19af534e4b35c986cc28db1275/src/transformers/models/llama/modeling_llama.py#L314-L330\r\n\r\nWhile LLM generates token by token beyond its maximum trained length at the inference stage, the key_states are first applied RoPE based on cos and sin w.r.t. `kv_seq_len`, then rotated key_states are cached. \r\n\r\nThen when we come to the next token, key_states are applied to RoPE based on cos and sin w.r.t. `kv_seq_len + 1`. Since DynamicNTKScale has a rotation base w.r.t seq_len:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a5cc30d72ae2dc19af534e4b35c986cc28db1275/src/transformers/models/llama/modeling_llama.py#L156-L163\r\n\r\nTherefore, we have an inconsistency between cached `keys_states` and between `keys_states` and query_states\r\n\r\n",
"Actually @gante is the king of dynamic ROPE so will let him handle this! ๐ค ",
"Hey @NormXU ๐ \r\n\r\nI agree with your consistency issue you pointed out. However, our users' biggest concern (and ours, by extension) is empirical results, regardless of correctness. \r\n\r\nIf you can share with us a few benchmarks where we see the change has positive benefits (and little to no downsides), I'll be more than happy to include it!",
"@gante Of course. \r\n\r\nHow about the perplexity experiments I did in my repo [link](https://github.com/NormXU/Consistent-DynamicNTKRoPE), \r\n\r\nThe way how we currently compute perplexity is more like we keep the rotation base consistent. Therefore, to bridge such a gap in rotation base between perplexity evaluation and inference with DynamicNTKScale, I modified the codes about how to apply the rotary embedding on keys and queries and do simple experiments on LLama1-7B.\r\n\r\nAfter modification, the perplexity is computed in this way:\r\n\r\n\r\n\r\n$K(\\alpha(x))$ means, key_states is rotated by a rotation matrix whose base $\\alpha$ is a function of sequence length.\r\n\r\nThen, I compare the perplexity and the results are shown as below\r\n\r\nThis is about perplexity value on Llama1-7B, an 2k max sequence length model, values above 12.0 are cut off for concise; \r\n**Vanilla:** RoPE w/o any interpolation;\r\n**NTK:** DynamicNTK when scale=1; \r\n**Consistent DynamicNTK:** keep rotation base between keys consistent, this is how we currently calculate perplexity \r\n **Inconsistent DynamicNTK**: keep rotation base between keys inconsistent w.r.t context length; \r\n\r\nCan this experiment convincingly demonstrate that a consistent DynamicNTK can achieve better perplexity in long context than an inconsistent DynamicNTK?\r\n\r\nBesides, could you please give me any advice on what benchmark I need to test this on? I have access to 8 x A100, enabling me to conduct many experiments quickly.\r\n\r\n",
"@NormXU I see, that makes sense! \r\n\r\nA final question, before I make a decision: have you measured throughput vs the original dynamic scaling? Since we need to reapply RoPE to the cached values, it should introduce slowdowns. The decision to include the technique in `transformers` depends on the extent of the slowdown :) ",
"cc @gante #25104 is a duplicate, opened #25308 ",
"@gante The main difference between [my implementations](https://github.com/NormXU/Consistent-DynamicNTKRoPE/blob/main/scale_rope/consistent_rope_for_llama_patch.py#L53-L64) and huggingface's is as follows: \r\n\r\nIn the former approach, all keys are cached before RoPE is applied to a length-increasing key_states list. The latter one applies RoPE only to a single key_state. Therefore, we just need to confirm whether applying RoPE on a length-increasing key_states list will take more time than applying it to a single key_state.\r\n\r\n\r\nHere is the exec time of `apply_rotary_pos_emb` in consistent DynamicNTKScale RoPE on LLaMA-7B (32 layers)\r\n\r\n| seq_length | exec time (ms) | seq_length | exec time (ms) |\r\n|------------|---------------|------------|---------------|\r\n| 16 | 56.32 | 528 | 206.08 |\r\n| 32 | 44.48 | 544 | 194.88 |\r\n| 48 | 39.68 | 560 | 197.44 |\r\n| 64 | 30.72 | 576 | 215.36 |\r\n| 80 | 43.84 | 592 | 207.04 |\r\n| 96 | 25.28 | 608 | 211.52 |\r\n| 112 | 26.24 | 624 | 220.16 |\r\n| 128 | 24.32 | 640 | 227.84 |\r\n| 144 | 35.2 | 656 | 245.76 |\r\n| 160 | 26.88 | 672 | 238.4 |\r\n| 176 | 71.68 | 688 | 248.64 |\r\n| 192 | 65.6 | 704 | 246.72 |\r\n| 208 | 95.04 | 720 | 270.08 |\r\n| 432 | 161.28 | 944 | 356.48 |\r\n| 448 | 164.16 | 960 | 367.36 |\r\n| 464 | 172.8 | 976 | 354.56 |\r\n| 480 | 177.92 | 992 | 365.12 |\r\n| 496 | 178.88 | 1008 | 407.68 |\r\n\r\nYou can find the exec time eval script [here](https://github.com/NormXU/Consistent-DynamicNTKRoPE/blob/main/eval_exec_time.py): \r\nAccording to the table above, the answer is๏ผ The throughput of consistent is impaired compared to that of dynamic's.",
"@NormXU I see -- if I understand correctly, the execution time of `apply_rotary_pos_emb` with the modification grows quickly with the sequence length, whereas in the original inconsistent DynamicNTK it doesn't grow (assuming caching is used).\r\n\r\nSince DynamicNTK will be used in the large sequence length regime, this means that we would be incurring a high execution speed penalty, which is highly undesirable. Unless we find a way to work around this speed issue, I am against adding this modification -- execution speed is paramount in LLMs nowadays :)\r\n\r\nFollow-up question: wouldn't these proposed modifications be the same as running DynamicNTK with `use_cache=False`? No caching = no inconsistency, correct?",
"@gante Indeed, No caching = no inconsistency. In fact, I haven't found any practical downstream tasks where the consistent RoPE can bring significant performance boost. The only advantage convinces me to replace it is its potential to achieve better perplexity scores when dealing with very long contexts.. Therefore, it looks, it is not necessary to correct this inconsistency in the RoPE. Speed does matter more than correctness :)",
"Thank you for discussing and iterating with us, @NormXU ๐ช I'll close the issue for now.\r\n\r\n_______________________________________________\r\n\r\nTL;DR of the discussion: the inconsistency in DynamicNTK RoPE scaling can be fixed with `use_cache=False`, at the cost of speed.",
"I write a [blog](https://normxu.github.io/A-Potential-Rotation-Inconsistency-of-Dynamic-Scaled-RoPE/) about this problem based on our discussion. Hope this can be helpful if you happen to find this issue and want to learn more details."
] | 1,690 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.27
- Python version: 3.8.0
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Inconsistent problem
There is a subtle rotation inconsistency in the base factor of the Dynamic NTK RoPE implemented in [transformers 4.31.0](https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/models/llama/modeling_llama.py#L147)
Suppose we have a decoder model, like LLaMA-1, that uses Dynamic NTK RoPE for interpolation, and we want to evaluate it using perplexity. In any layer of this decoder model, after the key_states and query_states are computed from the hidden features, they are rotated based on a fixed seq_len, which is the context length.
However, while generating token by token beyond its maximum trained length at the inference stage, the LLM usually reuses previously **cached keys**, which were rotated based on factors associated with the previous seq_len. As the sequence length keeps increasing, each cached key is rotated with respect to a different base, and consequently an inconsistency arises among the cached keys and between keys and queries.
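For illustration, a tiny numerical sketch (not the `transformers` implementation; the formula below only mirrors the length-dependent base in the linked code, and the constants are illustrative) of why cached keys end up with different bases:
```python
def dynamic_base(seq_len, base=10000.0, dim=128, max_pos=2048, scaling_factor=1.0):
    # length-dependent rotation base used beyond the trained context length
    if seq_len <= max_pos:
        return base
    return base * ((scaling_factor * seq_len / max_pos) - (scaling_factor - 1)) ** (dim / (dim - 2))

# the key cached at step t was rotated with dynamic_base(t), while the query
# at step t + 1 is rotated with dynamic_base(t + 1)
for t in (2048, 2049, 2050):
    print(t, dynamic_base(t))
# once t exceeds max_pos the printed bases differ, so cached keys and the
# current query no longer share a single rotation base
```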
### Expected behavior
I have conducted some experiments on the inconsistency and edited the code that applies rotation to the keys and queries to keep the rotation base consistent [here](https://github.com/NormXU/Consistent-DynamicNTKRoPE/blob/main/scale_rope/consistent_rope_for_llama_patch.py). Please check the [repo](https://github.com/NormXU/Consistent-DynamicNTKRoPE) for further details.
While I haven't tested if a consistent rotation will benefit perplexity or downstream tasks in any dataset or language model, I believe that, from a mathematical perspective, keeping consistency in the rotation base could potentially enhance the language model's ability to learn relative positions more effectively. My intuition suggests that this consistency might offer advantages in capturing relative position information.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25104/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25103/events
|
https://github.com/huggingface/transformers/issues/25103
| 1,821,646,332 |
I_kwDOCUB6oc5slB38
| 25,103 |
Beam search genereation with len(eos_token_id) > 1 throws exceptions
|
{
"login": "yonigottesman",
"id": 4004127,
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigottesman",
"html_url": "https://github.com/yonigottesman",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think maybe the point of following line was to be sure there are enough tokens that are not eos:\r\n[`generation/utils.py:2976`](https://github.com/huggingface/transformers/blame/main/src/transformers/generation/utils.py#L3072)\r\n~~~\r\n# Sample 2 next tokens for each beam (so we have some spare tokens and match output of beam search)\r\nnext_token_scores, next_tokens = torch.topk(\r\n next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True\r\n)\r\n~~~\r\nMaybe instead of sampling only 2 it should sample `1+len(eos_token_id)` from each beam\r\n\r\nIf this is the case, Ill be happy to open a pr",
"cc our generation expert @gante ",
"@yonigottesman exactly, we need to get the `topk` for `max(2, 1+len(eos_token_id)) * num_beams`, such that we guarantee that we have at least 1 non-eos token per beam. \r\n\r\n> If this is the case, Ill be happy to open a pr\r\n\r\nI'll gladly take your offer ๐ ",
"awesome! Ill open a pr.\r\nwhy `max(2, 1+len(eos_token_id))` and not just `1+len(eos_token_id)`? I mean, if `len(eos_token_id)==0` isn't it fine to select just 1 token per beam?",
"@yonigottesman it probably is fine, but I'd rather err on the safe side in case there are subtle changes when `len(eos_token_id)==0` -- `max(2, 1+len(eos_token_id))` ensures the logic for that case sees no change. \r\n\r\nThese regressions are hard to track and consume a lot of time, playing defensively helps :)",
"@gante I agree,\r\nopened #25115 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"(closing as it was sorted in #25115 )"
] | 1,690 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
- Python version: 3.10.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: true
- Using distributed or parallel set-up in script?: false
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
~~~
import transformers
from transformers import GenerationConfig
import torch
name = "gpt2"
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
model = transformers.AutoModelForCausalLM.from_pretrained(name)
gc = GenerationConfig(
max_new_tokens=40,
eos_token_id=tokenizer.encode(" black white green red brown blue yellow purple pink orange"),
pad_token_id=tokenizer.eos_token_id,
num_beams=3,
)
input_ids = tokenizer.encode("Hello, I have 3 cats, one of them is colored", return_tensors="pt")
output = model.generate(input_ids, generation_config=gc)
tokenizer.decode(output[0])
~~~
### Expected behavior
This simple beam search example should work but is throwing this exception:
~~~
File [/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:2985](https://vscode-remote+attached-002dcontainer-002b7b22636f6e7461696e65724e616d65223a222f6c756e675f63616e636572222c2273657474696e6773223a7b22686f7374223a227373683a2f2f796f6e69676f5f6770227d7d.vscode-resource.vscode-cdn.net/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:2985), in GenerationMixin.beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2982 next_tokens = next_tokens % vocab_size
2984 # stateless
-> 2985 beam_outputs = beam_scorer.process(
2986 input_ids,
2987 next_token_scores,
2988 next_tokens,
2989 next_indices,
2990 pad_token_id=pad_token_id,
2991 eos_token_id=eos_token_id,
2992 beam_indices=beam_indices,
2993 )
2995 beam_scores = beam_outputs["next_beam_scores"]
2996 beam_next_tokens = beam_outputs["next_beam_tokens"]
File /usr/local/lib/python3.10/site-packages/transformers/generation/beam_search.py:297, in BeamSearchScorer.process(self, input_ids, next_scores, next_tokens, next_indices, pad_token_id, eos_token_id, beam_indices, group_index)
294 break
296 if beam_idx < self.group_size:
--> 297 raise ValueError(
298 f"At most {self.group_size} tokens in {next_tokens[batch_idx]} can be equal to `eos_token_id:"
299 f" {eos_token_id}`. Make sure {next_tokens[batch_idx]} are corrected."
300 )
302 # Check if we are done so that we can save a pad step if all(done)
303 self._done[batch_group_idx] = self._done[batch_group_idx] or self._beam_hyps[batch_group_idx].is_done(
304 next_scores[batch_idx].max().item(), cur_len
305 )
ValueError: At most 3 tokens in tensor([ 2266, 11398, 4171, 4077, 11, 10912]) can be equal to `eos_token_id: [2042, 2330, 4077, 2266, 7586, 4171, 7872, 14032, 11398, 10912]`. Make sure tensor([ 2266, 11398, 4171, 4077, 11, 10912]) are corrected.
~~~
I think there is a bug in the check `if beam_idx < self.group_size`: it doesn't take into account that there can be more than one `eos_token_id`, so each beam may select more than one EOS token after the top-k.
I will be happy to work on this.
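For reference, a minimal sketch of the fix idea (illustrative only, not the exact patch; it assumes the flattened scores of shape `(batch_size, num_beams * vocab_size)` used inside `beam_search`):
~~~
import torch

def select_candidates(next_token_scores, num_beams, eos_token_id):
    # Normalize eos_token_id, which may be a single int or a list of ids.
    eos_ids = [eos_token_id] if isinstance(eos_token_id, int) else list(eos_token_id)
    # Keeping only 2 * num_beams candidates is safe for a single EOS id; with N EOS ids,
    # keep max(2, 1 + N) * num_beams so every beam still has a non-EOS continuation.
    n_tokens_to_keep = max(2, 1 + len(eos_ids)) * num_beams
    return torch.topk(next_token_scores, n_tokens_to_keep, dim=1, largest=True, sorted=True)

# Toy example: batch_size=1, num_beams=3, vocab_size=10, three EOS ids.
scores = torch.randn(1, 3 * 10)
top_scores, top_tokens = select_candidates(scores, num_beams=3, eos_token_id=[7, 8, 9])
print(top_tokens.shape)  # torch.Size([1, 12])
~~~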
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25103/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25102/events
|
https://github.com/huggingface/transformers/pull/25102
| 1,821,498,750 |
PR_kwDOCUB6oc5WZRbc
| 25,102 |
documentation for llama2 models
|
{
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"changes incorporated!"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds some documentation for the Llama 2 'f' (fine-tuned) models.
Fixes #25090
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@ArthurZucker
** I'm not sure if this is even the right way to include the documentation, but let me know if there's something more to add.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25102/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25102",
"html_url": "https://github.com/huggingface/transformers/pull/25102",
"diff_url": "https://github.com/huggingface/transformers/pull/25102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25102.patch",
"merged_at": 1690374634000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25101/events
|
https://github.com/huggingface/transformers/pull/25101
| 1,821,153,681 |
PR_kwDOCUB6oc5WYIwH
| 25,101 |
fix tied_params for meta tensor
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
MEMBER
| null |
# What does this PR do?
This PR fixes the retrieval of `tied_params` when we load a model with accelerate. The issue was that we were not able to retrieve the tied parameters by using the `id` function to check equality between meta tensors. Instead, we use `find_tied_parameters` from the accelerate library.
Solves [#1772](https://github.com/huggingface/accelerate/issues/1772)
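For illustration, a minimal sketch of the new approach (the model/config choice here is arbitrary and not part of the PR):
```
from accelerate import init_empty_weights
from accelerate.utils import find_tied_parameters
from transformers import AutoModelForCausalLM, GPT2Config

# Instantiate a model whose weights live on the meta device.
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(GPT2Config())
model.tie_weights()  # re-tie the input/output embeddings after meta initialization

# find_tied_parameters reports tied weights as groups of parameter names,
# replacing the `id`-based equality check that broke down for meta tensors.
print(find_tied_parameters(model))
# e.g. [['lm_head.weight', 'transformer.wte.weight']] (exact format depends on the accelerate version)
```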
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25101/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25101",
"html_url": "https://github.com/huggingface/transformers/pull/25101",
"diff_url": "https://github.com/huggingface/transformers/pull/25101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25101.patch",
"merged_at": 1690322926000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25100/events
|
https://github.com/huggingface/transformers/issues/25100
| 1,821,124,224 |
I_kwDOCUB6oc5sjCaA
| 25,100 |
Trainer/accelerate crashes when loading checkpoint using FSDP: sync_module_states ValueError
|
{
"login": "chenchenygu",
"id": 63085075,
"node_id": "MDQ6VXNlcjYzMDg1MDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/63085075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenchenygu",
"html_url": "https://github.com/chenchenygu",
"followers_url": "https://api.github.com/users/chenchenygu/followers",
"following_url": "https://api.github.com/users/chenchenygu/following{/other_user}",
"gists_url": "https://api.github.com/users/chenchenygu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenchenygu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenchenygu/subscriptions",
"organizations_url": "https://api.github.com/users/chenchenygu/orgs",
"repos_url": "https://api.github.com/users/chenchenygu/repos",
"events_url": "https://api.github.com/users/chenchenygu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenchenygu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, as the error mentions, `sync_module_states` needs to be True. For this please use the accelerate launcher with the related accelerate config as specified in the docs here: https://huggingface.co/docs/transformers/main_classes/trainer#using-accelerate-launcher-with-trainer",
"Thank you, I will try that. Is it no longer supported to use the Trainer without accelerate launch, such as with torchrun? Loading from checkpoints worked fine for me before I updated to the latest versions.",
"@pacman100 I tried using accelerate launch and setting `sync_module_states` to True, but I still cannot load checkpoints, but with a different error this time:\r\n```\r\n[INFO|trainer.py:2020] 2023-07-25 16:53:23,504 >> Loading model from weights/llama-2/debug/checkpoint-10/.\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \".../run_clm.py\", line 638, in <module>\r\n File \".../run_clm.py\", line 638, in <module>\r\nTraceback (most recent call last):\r\n File \".../run_clm.py\", line 638, in <module>\r\n main()main()\r\n\r\n main()\r\n File \".../run_clm.py\", line 584, in main\r\n File \".../run_clm.py\", line 584, in main\r\n File \".../run_clm.py\", line 584, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint) \r\n ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^ ^^ ^^ ^^ ^^ ^^ ^^ ^ ^^ ^^ ^^ ^^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^^^^^^^^^^^^^^^^^^^\r\n^^^^ File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 1528, in train\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n^^^ File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 1528, in train\r\n^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 1528, in train\r\n self._load_from_checkpoint(resume_from_checkpoint)self._load_from_checkpoint(resume_from_checkpoint)\r\n\r\n self._load_from_checkpoint(resume_from_checkpoint)\r\n File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 2055, in _load_from_checkpoint\r\n File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 2055, in _load_from_checkpoint\r\n File \"miniconda3.../lib/python3.11/site-packages/transformers/trainer.py\", line 2055, in _load_from_checkpoint\r\n load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)\r\n\r\n load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py\", line 73, in load_fsdp_model\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py\", line 73, in load_fsdp_model\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py\", line 73, in load_fsdp_model\r\n with FSDP.state_dict_type(\r\n File \"miniconda3.../lib/python3.11/contextlib.py\", line 144, in __exit__\r\n with FSDP.state_dict_type(\r\n File \"miniconda3.../lib/python3.11/contextlib.py\", line 144, in __exit__\r\n with FSDP.state_dict_type(\r\n File \"miniconda3.../lib/python3.11/contextlib.py\", line 144, in __exit__\r\n next(self.gen)\r\n next(self.gen)\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 720, in state_dict_type\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 720, in state_dict_type\r\n next(self.gen)\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 720, in state_dict_type\r\n FullyShardedDataParallel.set_state_dict_type(\r\n File 
\"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n FullyShardedDataParallel.set_state_dict_type(\r\n FullyShardedDataParallel.set_state_dict_type(\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type] \r\n ~~~~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ^ ^ ^ ^ ^ ^ ^~^~^~^~^~^~^~^~^~^~^~\r\n~~~~~~KeyError~: ~None~\r\n~~~~~~^^^^^^^^^^^^^^^^^\r\nKeyError: None\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\r\nKeyError: None\r\nWARNING:torch.distributed.elastic.multiprocessing.api:Sending process 109772 closing signal SIGTERM\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 109773) of binary: miniconda3.../bin/python\r\nTraceback (most recent call last):\r\n File \"miniconda3.../bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n ^^^^^^\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n args.func(args)\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/commands/launch.py\", line 966, in launch_command\r\n multi_gpu_launcher(args)\r\n File \"miniconda3.../lib/python3.11/site-packages/accelerate/commands/launch.py\", line 646, in multi_gpu_launcher\r\n distrib_run.run(args)\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/run.py\", line 785, in run\r\n elastic_launch(\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"miniconda3.../lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 250, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n```\r\n\r\nHere is the accelerate config I am using:\r\n```yaml\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: FSDP\r\ndowncast_bf16: 'no'\r\nfsdp_config:\r\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\r\n fsdp_backward_prefetch_policy: BACKWARD_PRE\r\n fsdp_forward_prefetch: true\r\n fsdp_offload_params: false\r\n fsdp_sharding_strategy: 1\r\n fsdp_state_dict_type: FULL_STATE_DICT\r\n fsdp_sync_module_states: true\r\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\r\n fsdp_use_orig_params: false\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 1\r\nnum_processes: 4\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```\r\nI'm really not sure what is going on, any help would be greatly appreciated!",
"Update: it seems that using `fsdp_state_dict_type=SHARDED_STATE_DICT` fixes this issue. Not sure if it is expected behavior to not work with `FULL_STATE_DICT`.",
"Hello, can you share the outputs of the saved checkpoint? This seems to be an issue with the PyTorch. ",
"I experienced a similar problem, where using `SHARDED_STATE_DICT` instead of `FULL_STATE_DICT` allowed me to load the model when resuming from checkpoint. ",
"Hi, @chenchenygu @pacman100 i am do the same things like you, could share your training srcipt of Acceleate;\r\nHere is my training script:\r\n```\r\naccelerate launch --config_file training_scripts/accelerate_config.json train.py \\\r\n --model_name_or_path ./pre-trained-model/huggyllama/llama-7b \\\r\n --train_file datasets/yesno_task/datatsets/train_pruning.json \\\r\n --validation_file datasets/yesno_task/datatsets/valid.json \\\r\n --bf16 True \\\r\n --output_dir model/test \\\r\n --num_train_epochs 3 \\\r\n --per_device_train_batch_size 2 \\\r\n --per_device_eval_batch_size 2 \\\r\n --gradient_accumulation_steps 8 \\\r\n --evaluation_strategy \"no\" \\\r\n --save_strategy \"epoch\" \\\r\n --save_total_limit 5 \\\r\n --learning_rate 1e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.03 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"full_shard auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \\\r\n --tf32 True \\\r\n --max_length 512 \\\r\n --gradient_checkpointing True \r\n```\r\n\r\nI meet the follow warning:\r\n```\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead \r\n warnings.warn(\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead \r\n warnings.warn(\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead \r\n warnings.warn(\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead \r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n{'loss': 2.2814, 'learning_rate': 1e-05, 'epoch': 0.62}\r\n2023-08-02 11:29:12,413 _dedup_tensors.py:44 INFO p:MainProcess t:MainThread: Duplicate keys to remove: {}\r\n2023-08-02 11:29:52,569 _dedup_tensors.py:44 INFO p:MainProcess t:MainThread: Duplicate keys to remove: {}\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n if tensor.storage().size() != tensor.numel():\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n if tensor.storage().size() != tensor.numel():\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n if tensor.storage().size() != tensor.numel():\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n```\r\n\r\nHowever, my training script seems work, but i want solve the above wanring.\r\nThank you for your help !!",
"Updata hit OOM in training process !\r\n```\r\n/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py:312: UserWarning: Failed to clone() tensor with name _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight on rank 0. This may mean that this state_dict entry could point to invalid memory regions after returning from state_dict() call if this parameter is managed by FSDP. Please check clone implementation of _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight. Error: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 47.54 GiB total capacity; 45.40 GiB already allocated; 25.62 MiB free; 46.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n\r\nThe major error seem to be **Please check clone implementation of _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight. Error: CUDA out of memory. **",
"@Xuekai-Zhu i also ran into that issue, the workaround that worked for me is https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3\r\nbut this is a hacky fix, i think the ultimate fix that is needed is https://github.com/pytorch/pytorch/issues/98823#issuecomment-1504812144\r\nbut this requires changes to the accelerate/trainer code",
"> @Xuekai-Zhu i also ran into that issue, the workaround that worked for me is https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 but this is a hacky fix, i think the ultimate fix that is needed is [pytorch/pytorch#98823 (comment)](https://github.com/pytorch/pytorch/issues/98823#issuecomment-1504812144) but this requires changes to the accelerate/trainer code\r\n\r\n@chenchenygu Huge thank to you !!! https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 this is also wrok for me",
"Hello, with PR https://github.com/huggingface/transformers/pull/24926, this should be resolved, i.e., by default it will use `FULL_STATE_DICT` with cpu_offload on rank_0 only.",
"> > @Xuekai-Zhu i also ran into that issue, the workaround that worked for me is https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 but this is a hacky fix, i think the ultimate fix that is needed is [pytorch/pytorch#98823 (comment)](https://github.com/pytorch/pytorch/issues/98823#issuecomment-1504812144) but this requires changes to the accelerate/trainer code\r\n> \r\n> @chenchenygu Huge thank to you !!! https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 this is also wrok for me\r\n\r\nUpdata, when you use this hacky fix, the trainer.save_model wouldn't save model.bin file, so you can't directly use model.from_pretrain( ) to load your model; the better way to fix above problem is updata your transformers.",
"@pacman100 I'm on transformers HEAD (within last 3 days) and when I try to resume from a checkpoint with FULL_STATE_DICT, I still get the above error:\r\n```\r\n load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)\r\n File \"/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/utils/fsdp_utils.py\", line 79, in load_fsdp_model\r\n raise ValueError( \r\nValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object \r\n```\r\n\r\nedit: I even tried running w/ `FSDP_SYNC_MODULE_STATES=true accelerate launch --fsdp_sync_module_states true ...` with no luck\r\n\r\nedit2: running with `FSDP_SYNC_MODULE_STATES=true FSDP_STATE_DICT_TYPE=FULL_STATE_DICT accelerate launch โฆ`\r\ngives the following new error\r\n```\r\n load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)\r\n File \"/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/utils/fsdp_utils.py\", line 74, in load_fsdp_model\r\n with FSDP.state_dict_type(\r\n File \"/root/miniconda3/envs/py3.10/lib/python3.10/contextlib.py\", line 142, in __exit__\r\n next(self.gen)\r\n File \"/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 720, in state_dict_type\r\n FullyShardedDataParallel.set_state_dict_type(\r\n File \"/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\nKeyError: None\r\n```\r\nprinting out the fsdp_plugin var is:\r\n```\r\nFullyShardedDataParallelPlugin(sharding_strategy=<ShardingStrategy.FULL_SHARD: 1>, backward_prefetch=None, mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.bfloat16, buffer_dtype=torch.bfloat16, keep_low_precision_grads=False, cast_f\r\norward_inputs=False, cast_root_forward_inputs=True), auto_wrap_policy=None, cpu_offload=CPUOffload(offload_params=False), ignored_modules=None, ignored_parameters=None, state_dict_type=<StateDictType.FULL_STATE_DICT: 1>, state_dict_config=FullStateDictConfig(offload_to\r\n_cpu=True, rank0_only=True), optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True), limit_all_gathers=False, use_orig_params=False, param_init_fn=None, sync_module_states=True, forward_prefetch=False)\r\n```\r\n",
"so in the FSDP.state_dict_type() context manager, when loading from a checkpoint, there is no previous fsdp state_dict_type to revert to when the context manager closes, so it fails to set the state_dict_type with a NoneType `state_dict_type`. ",
"Hi all, I also encountered this error. But when I downgrade my torch version to 2.0.0+cu117 and transformers to 4.28.1, it works fine.",
"Also experience the same problem. @winglian any tips on how to fix/patch this issue? ",
"the only thing that works is to monkeypatch torch/distributed/fsdp/fully_sharded_data_parallel.py::state_dict_type to simply skip the call back to setting it to the previous setting if the previous settings `prev_state_dict_settings.state_dict_type` is None",
"@tt6746690 I basically had to monkeypatch it in the meantime https://github.com/OpenAccess-AI-Collective/axolotl/pull/400/files#diff-0b142e48f0c0b4bdf2677ce86ee6352c3a5e5a3a9ddf22020a2920f496f74d2eR29",
"@winglian Thanks for the link to the fix! ",
"Hello, can you please check if the main branch of Transformers fixes the issue?",
"i meet the similar issue when using the accelerate function \"load_fsdp_model\" in trainer.py, anyone has any comment on this problem? \r\nFullyShardedDataParallel.set_state_dict_type(\r\n File \"/usr/local/lib/python3.9/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 653, in set_state_dict_type\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\nKeyError: None",
"> Hello, can you please check if the main branch of Transformers fixes the issue?\n\nIs there a particular fix on main you believe fixes this issue? Thanks!",
"Hello, please try out the latest main branch as there have been fixes wrt FSDP saving and loading states. Please let us know if that fixes the issue.",
"> Hello, please try out the latest main branch as there have been fixes wrt FSDP saving and loading states. Please let us know if that fixes the issue.\r\n\r\n@pacman100 I just try latest main branch, unfortunately, it still can not work... The error is๏ผ\r\n File \"/usr/local/conda/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 652, in set_state_dict_type\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\nNone KeyError",
"@pacman100 I'm also running into a similar error with the latest main branch:\r\n```\r\nFile \"/home/hyen/.conda/envs/cross/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\nKeyError: None\r\n```",
"> @pacman100 I'm also running into a similar error with the latest main branch:\r\n> \r\n> ```\r\n> File \"/home/hyen/.conda/envs/cross/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 608, in set_state_dict_type\r\n> state_dict_config_type = _state_dict_type_to_config[state_dict_type]\r\n> KeyError: None\r\n> ```\r\n\r\nFound the bug and a quick fix in this issue https://github.com/huggingface/transformers/issues/26159",
"Fixed in PR #26180 "
] | 1,690 | 1,695 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use `run_clm.py` (https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) to train a large model with the Hugging Face Trainer, using FSDP and saving checkpoints. For example:
```
torchrun --nproc_per_node=4 --master_port=XXXXX experiments/run_clm.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset_name openwebtext \
--streaming \
--per_device_train_batch_size 16 \
--gradient_accumulation_steps 1 \
--do_train \
--max_steps 1000 \
--output_dir output_dir/ \
--block_size 512 \
--save_steps 10 \
--save_total_limit 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap "LlamaDecoderLayer" \
--tf32 True \
--bf16 True \
--gradient_checkpointing \
```
2. Kill training after a checkpoint has been saved. Then, resume training from the checkpoint with the `resume_from_checkpoint` training argument.
3. Observed behavior: training crashes when loading the checkpointed model:
```
Traceback (most recent call last):
File ".../run_clm.py", line 638, in <module>
main()
File ".../run_clm.py", line 584, in main
main()
File ".../run_clm.py", line 584, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
main()
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File ".../run_clm.py", line 584, in main
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 45997 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 45998) of binary: miniconda3.../bin/python
```
### Expected behavior
Expected behavior: training can be resumed from a checkpoint when using FSDP.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25100/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25099/events
|
https://github.com/huggingface/transformers/issues/25099
| 1,821,118,294 |
I_kwDOCUB6oc5sjA9W
| 25,099 |
Missing '1' token for the Donut Processor checkpoints
|
{
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Not sure this is something wrong in Transformers, sounds like it's the model repos that are missing something. cc @younesbelkada ",
"> Note that this behavior is not present for the tokenizer mentioned as a pretrained base on the [donut github repo](https://github.com/clovaai/donut/blob/master/donut/model.py#L159C9-L161C10)\r\n\r\nI tried the original one and still get the same result, see below.\r\n\r\nCould you provide us a code snippet that could reproduce the original behavior? Thanks\r\n\r\n```python\r\nfrom transformers import XLMRobertaTokenizer\r\nt2 = XLMRobertaTokenizer.from_pretrained(\"hyunwoongko/asian-bart-ecjk\")\r\nprint(t2.decode(t2('A1')['input_ids']))\r\n# A<unk></s>\r\n```",
"Oups, I thought I did try that. Then I'll open an issue on the Clova repo and they'll hopefully update the checkpoints in the future",
"Actually, I tried using AutoTokenizer which returns a BartTokenizer and not a Roberta Tokenizer. Using a BartTokenizer seemingly fixes the issue.\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"hyunwoongko/asian-bart-ecjk\")\r\nโ\r\nprint(tokenizer.decode(tokenizer('A1')['input_ids']))\r\n# โA 1 </s> en_XX\r\nprint(tokenizer.decode(tokenizer('A 1')['input_ids']))\r\n# โA โ1 </s> en_XX\r\n```\r\n\r\nThis is still a problem on their end since they are indeed using a RobertaTokenizer, but I thought it was worth mentioning here too.",
"@arnaudstiegler @sgugger @ydshieh Any Resolution or WorkAround? Does Using this work Ok for Fine Tuning Etc.. And For Inference? Or Should I add A New Special Token for 1?\r\n\r\n> Actually, I tried using AutoTokenizer which returns a BartTokenizer and not a Roberta Tokenizer. Using a BartTokenizer seemingly fixes the issue.\r\n> \r\n> ```\r\n> from transformers import AutoTokenizer\r\n> tokenizer = AutoTokenizer.from_pretrained(\"hyunwoongko/asian-bart-ecjk\")\r\n> โ\r\n> print(tokenizer.decode(tokenizer('A1')['input_ids']))\r\n> # โA 1 </s> en_XX\r\n> print(tokenizer.decode(tokenizer('A 1')['input_ids']))\r\n> # โA โ1 </s> en_XX\r\n> ```\r\n> \r\n> This is still a problem on their end since they are indeed using a RobertaTokenizer, but I thought it was worth mentioning here too.\r\n\r\n",
"@DoctorSlimm Use the tokenzer provided by the model author.\r\n\r\n`\"naver-clova-ix/donut-base\"` use `XLMRobertaTokenizer` and `\"hyunwoongko/asian-bart-ecjk\"` use `MBartTokenizer(Fast)`, and you don't have to change anything. If you have any doubt, you can open a issue on the corresponding Hub repository for clarification.\r\n\r\nThis is not an issue on `transformers` side, I am going to close this issue.",
"@ydshieh Thank You Sir! :))))))"
] | 1,690 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
`transformers==4.31.0`
`python=3.9`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Apparently, all Donut tokenizer checkpoints are missing the `'1'` token and only have `'▁1'` available. As a result, the Donut tokenizer is unable to tokenize any `1` that is preceded by a character.
Reproduction:
```
from transformers import DonutProcessor
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
print(processor.decode(processor.tokenizer('A1')['input_ids']))
<s> A<unk></s>
print(processor.decode(processor.tokenizer('A 1')['input_ids']))
<s> A 1</s>
print(processor.decode(processor.tokenizer('A2')['input_ids']))
<s> A2</s>
```
For Document AI, having digits and characters without separation is a pretty common pattern for fields that are IDs
### Expected behavior
For each digit in `[0] + [2, ..., 9]` (i.e. every digit except `1`), the Donut tokenizer has both the digit as a standalone token and the digit with a preceding blank (`▁`).
For instance:
```
print(processor.tokenizer.get_vocab().get('2'))
35934
print(processor.tokenizer.get_vocab().get('▁2'))
3822
```
However, this is not true for the digit `1` which only has the token with a preceding blank:
```
print(processor.tokenizer.get_vocab().get('1'))
None
print(processor.tokenizer.get_vocab().get('▁1'))
1314
```
Note that this behavior is not present for the tokenizer mentioned as a pretrained base on the [donut github repo](https://github.com/clovaai/donut/blob/master/donut/model.py#L159C9-L161C10):
```
self.tokenizer = XLMRobertaTokenizer.from_pretrained(
"hyunwoongko/asian-bart-ecjk" if not name_or_path else name_or_path
)
```
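Until the checkpoints are updated, a possible workaround (an untested sketch, not an official fix) is to register `1` as an extra token and resize the decoder embeddings; note that the new embedding is randomly initialized, so it only becomes useful after fine-tuning:
```
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# Add the missing standalone "1" token and grow the decoder embedding matrix accordingly.
num_added = processor.tokenizer.add_tokens(["1"])
if num_added > 0:
    model.decoder.resize_token_embeddings(len(processor.tokenizer))

print(processor.decode(processor.tokenizer('A1')['input_ids']))
```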
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25099/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25098/events
|
https://github.com/huggingface/transformers/pull/25098
| 1,821,117,869 |
PR_kwDOCUB6oc5WYAzn
| 25,098 |
Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25098/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25098",
"html_url": "https://github.com/huggingface/transformers/pull/25098",
"diff_url": "https://github.com/huggingface/transformers/pull/25098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25098.patch",
"merged_at": 1690320305000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25097/events
|
https://github.com/huggingface/transformers/pull/25097
| 1,821,117,519 |
PR_kwDOCUB6oc5WYAuo
| 25,097 |
Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
[//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25097/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25097",
"html_url": "https://github.com/huggingface/transformers/pull/25097",
"diff_url": "https://github.com/huggingface/transformers/pull/25097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25097.patch",
"merged_at": 1690320314000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25096/events
|
https://github.com/huggingface/transformers/pull/25096
| 1,821,116,843 |
PR_kwDOCUB6oc5WYAlF
| 25,096 |
Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25096/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25096",
"html_url": "https://github.com/huggingface/transformers/pull/25096",
"diff_url": "https://github.com/huggingface/transformers/pull/25096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25096.patch",
"merged_at": 1690320290000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25095/events
|
https://github.com/huggingface/transformers/pull/25095
| 1,820,990,741 |
PR_kwDOCUB6oc5WXk-e
| 25,095 |
Migrate Trainer from `Repository` to `upload_folder`
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Note that the Hub tests are failing because `upload_folder` seems very slow on moon-staging (with 503 errors and multiple retries for each push). Might be worth putting everything in blocking mode for those tests (though we won't test the fact that jobs properly run in the background then)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This PR migrates the internals of the push-to-Hub strategy inside the Trainer to the upload methods instead of the old git approach. The goal is to benefit from the improvements in terms of speed (for instance `upload_folder` is faster than `Repository` on Colab) and avoid some weird sync errors we have seen happen in the past.
This migration breaks a few things. I'm listing them for the sake of completeness but I don't think those are important breaking changes.
1. the return of `Trainer.push_to_hub` will change: in the blocking case it's the URL of the repo instead of the URL of the commit. In the non-blocking case, it's the Future object returned by the `huggingface_hub` library (instead of a tuple of commit address and job in progress)
2. before the migration, it was instant to cancel the push in progress at the end of training and then push the final version to the Hub (but we also had some weird bugs). After the migration we will have to wait for the last push to be completed before pushing the final version of the model.
3. the `output_dir` of the `Trainer` won't be a git repository which is an exact clone of the model repo anymore.
Note that I chose to push checkpoint jobs in two `upload_folder` calls: one for the model and one for the optimizer state (roughly). This way I can cancel the second one if it hasn't started by the end of training.
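For reference, a minimal sketch of the `huggingface_hub` upload API this migration builds on (the repo id, folder path and commit messages below are placeholders, not the Trainer's actual values):
```python
from huggingface_hub import upload_folder

# Blocking upload: returns once the commit is available on the Hub.
upload_folder(
    repo_id="username/my-model",
    folder_path="./output_dir",
    commit_message="End of training",
)

# Non-blocking upload: returns a Future immediately, which mirrors the
# non-blocking behaviour described in point 1 above.
future = upload_folder(
    repo_id="username/my-model",
    folder_path="./output_dir",
    commit_message="Training in progress, step 500",
    run_as_future=True,
)
future.result()  # wait for the background push to finish when needed
```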
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25095/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25095",
"html_url": "https://github.com/huggingface/transformers/pull/25095",
"diff_url": "https://github.com/huggingface/transformers/pull/25095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25095.patch",
"merged_at": 1691423243000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25094/events
|
https://github.com/huggingface/transformers/pull/25094
| 1,820,922,089 |
PR_kwDOCUB6oc5WXV-E
| 25,094 |
Add bloom flax
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the clinical review! Merging ๐"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the Flax BLOOM model, superseding #18022 (where force pushing to Younes' branch after rebase closed the PR branch and created a new one)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25094/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25094",
"html_url": "https://github.com/huggingface/transformers/pull/25094",
"diff_url": "https://github.com/huggingface/transformers/pull/25094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25094.patch",
"merged_at": 1690478697000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25093/events
|
https://github.com/huggingface/transformers/pull/25093
| 1,820,899,775 |
PR_kwDOCUB6oc5WXQ_I
| 25,093 |
Add mask2former fp16 support
|
{
"login": "pedrohml",
"id": 2963875,
"node_id": "MDQ6VXNlcjI5NjM4NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2963875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pedrohml",
"html_url": "https://github.com/pedrohml",
"followers_url": "https://api.github.com/users/pedrohml/followers",
"following_url": "https://api.github.com/users/pedrohml/following{/other_user}",
"gists_url": "https://api.github.com/users/pedrohml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pedrohml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pedrohml/subscriptions",
"organizations_url": "https://api.github.com/users/pedrohml/orgs",
"repos_url": "https://api.github.com/users/pedrohml/repos",
"events_url": "https://api.github.com/users/pedrohml/events{/privacy}",
"received_events_url": "https://api.github.com/users/pedrohml/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @pedrohml, thanks for opening this PR! \r\n\r\nCould you add some tests for these models e.g. [like this one for ViT](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320)",
"@amyeroberts thanks for review. do you mind to double check new changes ?",
"@pedrohml Thanks for iterating and adding some tests. The tests added should match the pattern of the ones [I linked to](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320), rather than call integration tests that check logit values. There are two reasons for this: \r\n* An independent test enables us to select as small as possible checkpoint / config to keep the CI runs as fast as possible\r\n* It might not be robust, as we can expect some differences in the output logits. Out of interest, did you run the slow tests to confirm these pass? \r\n",
"> @pedrohml Thanks for iterating and adding some tests. The tests added should match the pattern of the ones [I linked to](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320), rather than call integration tests that check logit values. There are two reasons for this:\r\n> \r\n> * An independent test enables us to select as small as possible checkpoint / config to keep the CI runs as fast as possible\r\n> * It might not be robust, as we can expect some differences in the output logits. Out of interest, did you run the slow tests to confirm these pass?\r\n\r\n@amyeroberts \r\nThanks for the heads up.. I managed to simplify the fp16 tests. Also, I was able to run tests locally using gpu to confirm all tests go green. This helped me to fix a mistake for oneformer workaround.\r\nI appreciate if you can review again. Feel free to add something else.",
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25093). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Add float16 support to mask2former module and derivatives
Some operations and conversions were fixed in order to propagate the user's chosen _dtype_ (e.g. float32, float16) to inputs and embeddings. In this way, we can pass in fp16 tensors for inference.
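A minimal sketch of fp16 inference from the user side (the checkpoint name and image path are illustrative assumptions):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

checkpoint = "facebook/mask2former-swin-tiny-coco-instance"  # illustrative checkpoint
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")  # placeholder image path
# Cast the pixel values to fp16 so they match the model's dtype
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

with torch.no_grad():
    outputs = model(**inputs)
```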
## Who can review?
Similar PRs were reviewed by: @alaradirik, @amyeroberts, @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25093/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25093",
"html_url": "https://github.com/huggingface/transformers/pull/25093",
"diff_url": "https://github.com/huggingface/transformers/pull/25093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25093.patch",
"merged_at": 1691435249000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25092/events
|
https://github.com/huggingface/transformers/pull/25092
| 1,820,880,302 |
PR_kwDOCUB6oc5WXMv0
| 25,092 |
fix idefics vision config
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25092). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
As discussed offline, this should refactor the vision config args.
We need to address similar changes as https://huggingface.co/HuggingFaceM4/tiny-random-idefics/discussions/2 to make it work with other checkpoints
cc @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25092/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25092",
"html_url": "https://github.com/huggingface/transformers/pull/25092",
"diff_url": "https://github.com/huggingface/transformers/pull/25092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25092.patch",
"merged_at": 1690388525000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25091/events
|
https://github.com/huggingface/transformers/pull/25091
| 1,820,857,068 |
PR_kwDOCUB6oc5WXHuC
| 25,091 |
Hotfix for failing `MusicgenForConditionalGeneration` tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
An exceptional case (for `MusicgenForConditionalGeneration`) is not detected by the CI triggered in #24927.
Test is currently failing on `main`, see
https://app.circleci.com/pipelines/github/huggingface/transformers/68971/workflows/503866ea-e425-4817-9849-52c6e7fd2865/jobs/863722
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25091/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25091",
"html_url": "https://github.com/huggingface/transformers/pull/25091",
"diff_url": "https://github.com/huggingface/transformers/pull/25091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25091.patch",
"merged_at": 1690309560000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25090/events
|
https://github.com/huggingface/transformers/issues/25090
| 1,820,837,546 |
I_kwDOCUB6oc5sh8aq
| 25,090 |
Need documentation for understanding the difference between `7B` and `7Bf`
|
{
"login": "scottfleming",
"id": 11773823,
"node_id": "MDQ6VXNlcjExNzczODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11773823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scottfleming",
"html_url": "https://github.com/scottfleming",
"followers_url": "https://api.github.com/users/scottfleming/followers",
"following_url": "https://api.github.com/users/scottfleming/following{/other_user}",
"gists_url": "https://api.github.com/users/scottfleming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scottfleming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scottfleming/subscriptions",
"organizations_url": "https://api.github.com/users/scottfleming/orgs",
"repos_url": "https://api.github.com/users/scottfleming/repos",
"events_url": "https://api.github.com/users/scottfleming/events{/privacy}",
"received_events_url": "https://api.github.com/users/scottfleming/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Sure, we can add a bit of documentation, mentioning that this is specific to llama2 official release ๐ "
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
Unclear what the tradeoffs are between using e.g. `7B` and `7Bf` as an argument for `model_size` in the `convert_llama_weights_to_hf.py` script:
https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/llama/convert_llama_weights_to_hf.py#L65
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25090/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25089/events
|
https://github.com/huggingface/transformers/pull/25089
| 1,820,787,716 |
PR_kwDOCUB6oc5WW4on
| 25,089 |
Move common image processing methods to BaseImageProcessor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Moves out the `rescale` and `normalize` methods of the image processors to the BaseImageProcessor class, which all the image processors inherit from.
Reason for moving `rescale` and `normalize`:
* Standard image processing steps
* Used by (almost) all image processors
* Few cases when the logic is model specific e.g. [ViVit](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/vivit/image_processing_vivit.py#L196).
Reason for not moving other methods:
* Many require model specific preparation before calling the transforms function. For example, `resize` has different logic across many models e.g. [1](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/beit/image_processing_beit.py#L142), [2](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/detr/image_processing_detr.py#L865), [3](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/vilt/image_processing_vilt.py#L196)
* Some methods aren't universal to all image processors and don't make sense to add to all image processors e.g. [reduce_label](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/beit/image_processing_beit.py#L235)
* Some will be moved in future, e.g. `center_crop`, but this requires a bit more work outside the scope of this PR
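To make the intent concrete, a rough sketch of the pattern (illustrative only; the class and method bodies are simplified assumptions, not the actual `transformers` implementation):
```python
import numpy as np


class BaseImageProcessorSketch:
    # Shared, model-agnostic transforms live once on the base class.
    def rescale(self, image: np.ndarray, scale: float) -> np.ndarray:
        # e.g. scale=1/255 to map uint8 pixel values into [0, 1]
        return image.astype(np.float32) * scale

    def normalize(self, image: np.ndarray, mean, std) -> np.ndarray:
        # channel-wise standardisation, assuming a channels-last image
        mean = np.array(mean, dtype=np.float32)
        std = np.array(std, dtype=np.float32)
        return (image - mean) / std


class SomeModelImageProcessor(BaseImageProcessorSketch):
    # Model-specific steps such as resize() stay in the subclass,
    # since their logic differs between models.
    pass
```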
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25089/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25089",
"html_url": "https://github.com/huggingface/transformers/pull/25089",
"diff_url": "https://github.com/huggingface/transformers/pull/25089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25089.patch",
"merged_at": 1690380558000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25088/events
|
https://github.com/huggingface/transformers/pull/25088
| 1,820,768,107 |
PR_kwDOCUB6oc5WW0d6
| 25,088 |
[`resize_embedding`] Introduce `pad_to_multiple_of` and guidance
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,693 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #22312.
After internal discussions, it appears that adding the possibility to pad with `-1` to `tokenizers` is not really feasible (nor is it desirable).
However, what we can do is by default resize the embedding layer to the nearest size that is optimal for the dtype of the model [following this](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc).
Motivations:
- the `_get_resized_embeddings` is not exposed, and thus making this automatic can be a big silent win.
- if properly documented, should not really have issues.
Cons:
- it is not backward compatible, so some kind of `config.optimise_resize` might be needed?
- it is hidden and people might not really get why tokenizer.vocab_size will be different than the model's embedding dimension.
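A minimal usage sketch of the proposed behaviour (the argument name is taken from this PR's title and should be treated as an assumption about the final API):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["<new_token>"])
# Resize to the vocabulary size, then pad up to the next multiple of 64,
# which is the shape Tensor Cores prefer.
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)

# The embedding matrix is now larger than len(tokenizer), hence the point
# above about documenting the mismatch users might notice.
print(model.get_input_embeddings().weight.shape[0])
```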
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25088/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25088",
"html_url": "https://github.com/huggingface/transformers/pull/25088",
"diff_url": "https://github.com/huggingface/transformers/pull/25088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25088.patch",
"merged_at": 1692284433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25087/events
|
https://github.com/huggingface/transformers/pull/25087
| 1,820,671,628 |
PR_kwDOCUB6oc5WWfvG
| 25,087 |
Edit err message and comment in `test_model_is_small`
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25087). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Tiny error message and comment update to bring into alignment with 1M param max mentioned in https://github.com/huggingface/transformers/pull/24824#issue-1804886584
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25087/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25087",
"html_url": "https://github.com/huggingface/transformers/pull/25087",
"diff_url": "https://github.com/huggingface/transformers/pull/25087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25087.patch",
"merged_at": 1690302276000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25086/events
|
https://github.com/huggingface/transformers/pull/25086
| 1,820,598,445 |
PR_kwDOCUB6oc5WWPwb
| 25,086 |
Generate: return `past_key_values`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is a killer feature :+1: ",
"@gante Hi! Thanks for PR!\r\nDid you test feeding output past_key_values into .generate() method? Like take first 250 tokens input, run .generate(), get output past_key_values, take another 50 tokens input and run .generate() with previous 250 past_key_values? With beam search it seems to be kinda tricky. I'm trying to resolve multiple dimension mismatch problems.",
"(merging and leaving the conversion to a skip as a TODO)",
"I dont see a version number when will this be out?",
"Next release :) (v4.36)",
"@nevakrien 4.36v is now out :) "
] | 1,690 | 1,702 | 1,698 |
MEMBER
| null |
# What does this PR do?
Enables returning `past_key_values` from `generate`, if `return_dict_in_generate=True` (otherwise only the generated `input_ids` are returned) and `use_cache=True` (otherwise there is no cache to return ;) ).
In more abstract terms, this enables features like:
1. continuing a given generation without re-running the more expensive prefill step -- like in multi-turn conversations
2. exploring the KV cache values without having to place a breakpoint in `generate`
The added code for the feature is minimal, so most of the PR is docs and tests.
Fixes #24841
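A minimal usage sketch (GPT-2 is used purely as an illustrative checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    return_dict_in_generate=True,
    use_cache=True,
)

print(outputs.sequences.shape)       # generated ids, as before
print(len(outputs.past_key_values))  # one (key, value) pair per layer
```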
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25086/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25086/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25086",
"html_url": "https://github.com/huggingface/transformers/pull/25086",
"diff_url": "https://github.com/huggingface/transformers/pull/25086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25086.patch",
"merged_at": 1698939562000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25085/events
|
https://github.com/huggingface/transformers/pull/25085
| 1,820,522,237 |
PR_kwDOCUB6oc5WV_BY
| 25,085 |
[`TF`] Also apply patch to support left padding
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25085). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fixes a red GPT-J test equivalence failure on main. Follows #24979
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25085/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25085",
"html_url": "https://github.com/huggingface/transformers/pull/25085",
"diff_url": "https://github.com/huggingface/transformers/pull/25085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25085.patch",
"merged_at": 1690298589000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25084/events
|
https://github.com/huggingface/transformers/issues/25084
| 1,820,522,199 |
I_kwDOCUB6oc5sgvbX
| 25,084 |
AttributeError: 'GenerationConfig' object has no attribute 'task_to_id'
|
{
"login": "AmgadHasan",
"id": 109704569,
"node_id": "U_kgDOBon1eQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109704569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmgadHasan",
"html_url": "https://github.com/AmgadHasan",
"followers_url": "https://api.github.com/users/AmgadHasan/followers",
"following_url": "https://api.github.com/users/AmgadHasan/following{/other_user}",
"gists_url": "https://api.github.com/users/AmgadHasan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmgadHasan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmgadHasan/subscriptions",
"organizations_url": "https://api.github.com/users/AmgadHasan/orgs",
"repos_url": "https://api.github.com/users/AmgadHasan/repos",
"events_url": "https://api.github.com/users/AmgadHasan/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmgadHasan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Also cc @sanchit-gandhi since it comes from the audio course.",
"Can you link the full stacktrace if possible ? This might help us narrow it down faster.",
"+1 on the full stack-trace. It might require an update to your generation config since this is a fine-tuned checkpoint and the API was updated to take the `task`/`language` as arguments rather than as from the config's `forced_decoder_ids` (see https://github.com/huggingface/transformers/issues/21878#issuecomment-1451902363 for details)",
"@sanchit-gandhi \r\n@Narsil \r\n\r\nHere's a colab notebook to reproduce the error\r\n\r\nhttps://colab.research.google.com/drive/1kLjKWZSKmvPwBqnaN-NJxy6Hv4gG5oDJ?usp=sharing",
"Thanks for the notebook @AmgadHasan! The generation config for this model is indeed missing, meaning it is created automatically from the config in the call to `.generate`, and is only populated with some basic information:\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"automatic-speech-recognition\", model='arbml/whisper-largev2-ar')\r\nprint(asr.model.generation_config)\r\n```\r\n**Print Output:**\r\n```\r\nGenerationConfig {\r\n \"_from_model_config\": true,\r\n \"begin_suppress_tokens\": [\r\n 220,\r\n 50257\r\n ],\r\n \"bos_token_id\": 50257,\r\n \"decoder_start_token_id\": 50258,\r\n \"eos_token_id\": 50257,\r\n \"max_length\": 448,\r\n \"pad_token_id\": 50257,\r\n \"transformers_version\": \"4.32.0.dev0\",\r\n \"use_cache\": false\r\n}\r\n```\r\n\r\nIf we compare this to the most recent generation config, i.e the one for [Whisper large-v2](https://huggingface.co/openai/whisper-large-v2/blob/1f66457e6e36eeb6d89078882a39003e55c330b8/generation_config.json#L216-L219), we see that the generation config is missing many both the language and task token id mappings:\r\n```\r\nGenerationConfig {\r\n \"begin_suppress_tokens\": [\r\n 220,\r\n 50257\r\n ],\r\n ...\r\n \"task_to_id\": {\r\n \"transcribe\": 50359,\r\n \"translate\": 50358\r\n },\r\n ...\r\n}\r\n```\r\n\r\nThese language/task token mappings are used in the call to `.generate` to get the correct language/task token ids respectively:\r\nhttps://github.com/huggingface/transformers/blob/a1c4954d25ca030c85319dd78395a4eff816e852/src/transformers/models/whisper/modeling_whisper.py#L1691\r\n\r\nSince using the language/task arguments as input to the `.generate` method was added with the update to the generation config, these are new features that only work with the updated generation config.\r\n\r\nProbably what we can do here @ArthurZucker is throw an error when the user tries to call `.generate` and passes the language/task arguments but the generation config is missing the language/task token ids mapping? Happy to open a PR to fix this\r\n\r\nA quick fix for this issue @AmgadHasan is updating the generation config for the model checkpoint (as per my previous comment)",
"> Thanks for the notebook @AmgadHasan! The generation config for this model is indeed missing, meaning it is created automatically from the config in the call to `.generate`, and is only populated with some basic information:\r\n> \r\n> ```python\r\n> from transformers import pipeline\r\n> \r\n> pipe = pipeline(\"automatic-speech-recognition\", model='arbml/whisper-largev2-ar')\r\n> print(asr.model.generation_config)\r\n> ```\r\n> \r\n> **Print Output:**\r\n> \r\n> ```\r\n> GenerationConfig {\r\n> \"_from_model_config\": true,\r\n> \"begin_suppress_tokens\": [\r\n> 220,\r\n> 50257\r\n> ],\r\n> \"bos_token_id\": 50257,\r\n> \"decoder_start_token_id\": 50258,\r\n> \"eos_token_id\": 50257,\r\n> \"max_length\": 448,\r\n> \"pad_token_id\": 50257,\r\n> \"transformers_version\": \"4.32.0.dev0\",\r\n> \"use_cache\": false\r\n> }\r\n> ```\r\n> \r\n> If we compare this to the most recent generation config, i.e the one for [Whisper large-v2](https://huggingface.co/openai/whisper-large-v2/blob/1f66457e6e36eeb6d89078882a39003e55c330b8/generation_config.json#L216-L219), we see that the generation config is missing many both the language and task token id mappings:\r\n> \r\n> ```\r\n> GenerationConfig {\r\n> \"begin_suppress_tokens\": [\r\n> 220,\r\n> 50257\r\n> ],\r\n> ...\r\n> \"task_to_id\": {\r\n> \"transcribe\": 50359,\r\n> \"translate\": 50358\r\n> },\r\n> ...\r\n> }\r\n> ```\r\n> \r\n> These language/task token mappings are used in the call to `.generate` to get the correct language/task token ids respectively:\r\n> \r\n> https://github.com/huggingface/transformers/blob/a1c4954d25ca030c85319dd78395a4eff816e852/src/transformers/models/whisper/modeling_whisper.py#L1691\r\n> \r\n> Since using the language/task arguments as input to the `.generate` method was added with the update to the generation config, these are new features that only work with the updated generation config.\r\n> \r\n> Probably what we can do here @ArthurZucker is throw an error when the user tries to call `.generate` and passes the language/task arguments but the generation config is missing the language/task token ids mapping? Happy to open a PR to fix this\r\n> \r\n> A quick fix for this issue @AmgadHasan is updating the generation config for the model checkpoint (as per my previous comment)\r\n\r\nThanks @sanchit-gandhi ! This solved the issue.",
"The simplest way of updating the generation config is as follows:\r\n```python\r\nfrom transformers import GenerationConfig\r\n\r\nMODEL_ID = \"arbml/whisper-largev2-ar\" #ย set to your model id on the Hub\r\nMULTILINGUAL = True #ย set True for multilingual models, False for English-only\r\n\r\nif MULTILINGUAL:\r\n generation_config = GenerationConfig.from_pretrained(\"openai/whisper-large-v2\")\r\nelse:\r\n generation_config = GenerationConfig.from_pretrained(\"openai/whisper-medium.en\")\r\n\r\ngeneration_config.push_to_hub(model_id)\r\n```"
] | 1,690 | 1,691 | 1,690 |
NONE
| null |
### System Info
I am following the [Audio course](https://huggingface.co/learn/audio-course/chapter5/asr_models#longform-transcription-and-timestamps) and tried to perform translation using the automatic speech recognition pipeline but got a weird error.
Code:
```
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model='arbml/whisper-largev2-ar', device=0)
res = asr(
audio_file_path,
max_new_tokens=256,
generate_kwargs={"task": "translate"},
chunk_length_s=30,
batch_size=8,
)
```
Error:
`AttributeError: 'GenerationConfig' object has no attribute 'task_to_id'`
This was using Colab free tier on T4
transformers version:
```
import transformers
transformers.__version__
>>> '4.31.0'
```
This error arises when using `generate_kwargs={"task": "translate"}` or `generate_kwargs={"task": "transcribe"}`
Tagging @Narsil to help with pipeline issues.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model='arbml/whisper-largev2-ar', device=0)
res = asr(
audio_file_path,
max_new_tokens=256,
generate_kwargs={"task": "translate"},
chunk_length_s=30,
batch_size=8,
)
### Expected behavior
Should return a python `dict` with key named `text` that holds the English text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25084/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25083/events
|
https://github.com/huggingface/transformers/pull/25083
| 1,820,478,951 |
PR_kwDOCUB6oc5WV1jH
| 25,083 |
update `use_auth_token` -> `token`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"In some training example script, (for example, `examples/tensorflow/contrastive-image-text/run_clip.py`) there are \r\n\r\n```python\r\n use_auth_token: bool = field(\r\n default=False,\r\n metadata={\r\n \"help\": (\r\n \"Will use the token generated when running `huggingface-cli login` (necessary to use this script \"\r\n \"with private models).\"\r\n )\r\n },\r\n )\r\n```\r\nDo you have any comment on how we should do with them?\r\n\r\nChange everything to use `token` but still need to accept `use_auth_token`?\r\n\r\n"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fix #25008
@sgugger This should be ready for a review. There are still a few places that use `use_auth_token`, but only during the calls to relevant methods.
I will change those calls to use `token` before merge.
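For illustration, the rename from the caller's side (the repo id and token value are placeholders):
```python
from transformers import AutoModel

# New spelling
model = AutoModel.from_pretrained("some-org/private-model", token="hf_xxx")

# Old spelling, still accepted during the deprecation period
model = AutoModel.from_pretrained("some-org/private-model", use_auth_token="hf_xxx")
```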
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25083/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25083",
"html_url": "https://github.com/huggingface/transformers/pull/25083",
"diff_url": "https://github.com/huggingface/transformers/pull/25083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25083.patch",
"merged_at": 1690377000000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25082/events
|
https://github.com/huggingface/transformers/issues/25082
| 1,820,469,086 |
I_kwDOCUB6oc5sgide
| 25,082 |
LLaMA Tokenizer does not compute word_ids
|
{
"login": "ikergarcia1996",
"id": 18737249,
"node_id": "MDQ6VXNlcjE4NzM3MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikergarcia1996",
"html_url": "https://github.com/ikergarcia1996",
"followers_url": "https://api.github.com/users/ikergarcia1996/followers",
"following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}",
"gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions",
"organizations_url": "https://api.github.com/users/ikergarcia1996/orgs",
"repos_url": "https://api.github.com/users/ikergarcia1996/repos",
"events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikergarcia1996/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for reporting this. This seems to be related to the backend tokenizer, so most probably the `tokenizers` library. cc @Narsil if you have a quick idea, otherwise, I'll investigate",
"Oh it's simple.\r\n\r\nIt's just that what you call \"word\" doesn't exist for this tokenizer...\r\nA lot of tokenizers use whitespace splitting before processing text, but llama does not.\r\n\r\nSo the other tokenizers listed here see \"This test is\" as `\"This\"+ \"test\" + \"is\"` while llama will see exactly \"This test is\".\r\nSo llama thinks its one \"word\". So outputting `[0, 0, 0]` is technically correct, even if not too terribly useful.\r\n\r\nThe best recommendation I can make is using `offsets` instead which will give you the offsets from which each tokens comes from in the original sentence.\r\nThis is the only general way that will work on any tokenizer to recover where a given token is from. And you can compute words as you'd like by some other means if you really want to use words (I usually advise to stay away from words, it creates a lot of unecessary issues when looked under enough scrutiny)\r\n\r\n```python\r\nfrom transformers import pipeline, AutoTokenizer\r\n\r\nllama_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\n\r\nsentence = \"This is a test\"\r\n\r\nmodel_inputs = llama_tokenizer(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n\r\nfor (input_id, (start, stop)) in zip(model_inputs[\"input_ids\"], model_inputs[\"offset_mapping\"]):\r\n print(f\"{input_id} {start}-{stop} {sentence[start:stop]}\")\r\n\r\n```\r\n",
"Thank you @Narsil for the great explanation! "
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.1.0.dev20230515+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The LLaMA tokenizer does not compute word_ids; all the tokens have the id `0`. This function is useful for custom decoding strategies as it allows the user to know to which word in the sentence a token belongs.
```python
from transformers import AutoTokenizer
sentence = "This is a test"
gpt_neox_tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
llama_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")
t5_tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(gpt_neox_tokenizer([sentence], add_special_tokens=False).word_ids())
print(llama_tokenizer([sentence], add_special_tokens=False).word_ids())
print(gpt2_tokenizer([sentence], add_special_tokens=False).word_ids())
print(t5_tokenizer([sentence], add_special_tokens=False).word_ids())
print(bert_tokenizer([sentence], add_special_tokens=False).word_ids())
```
Output
```python
[0, 1, 2, 3]
[0, 0, 0, 0]
[0, 1, 2, 3]
[0, 1, 2, 2, 3]
[0, 1, 2, 3]
```
### Expected behavior
```python
print(llama_tokenizer([sentence], add_special_tokens=False).word_ids())
```
The expected output is
```python
[0, 1, 2, 3]
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25081/events
|
https://github.com/huggingface/transformers/pull/25081
| 1,820,418,224 |
PR_kwDOCUB6oc5WVoZu
| 25,081 |
[`split_special_tokens`] Add support for `split_special_tokens` argument to encode
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Argument name is totally debatable. Will also require a pull request in `tokenizers`.
The goal is to be able to simply activate and de-activate the special-token splitting. The feature was requested in #22490, and is required for some production-type use cases where users pass raw inputs and we don't want them to be able to hack the prompt by injecting special tokens.
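As a rough illustration of the intended behavior (the argument name follows the PR title; the exact call signature and backend support are still assumptions at this point):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Default: the literal "[CLS]" in the text is recognized as one special token id
print(tokenizer("[CLS] hello", add_special_tokens=False)["input_ids"])

# With splitting activated, "[CLS]" is tokenized like ordinary text
print(tokenizer("[CLS] hello", add_special_tokens=False, split_special_tokens=True)["input_ids"])
```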
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25081/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25081",
"html_url": "https://github.com/huggingface/transformers/pull/25081",
"diff_url": "https://github.com/huggingface/transformers/pull/25081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25081.patch",
"merged_at": 1692357987000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25080/events
|
https://github.com/huggingface/transformers/pull/25080
| 1,820,296,958 |
PR_kwDOCUB6oc5WVN_i
| 25,080 |
fix bug : add global_step when call self.lr_scheduler.step
|
{
"login": "SuperCB",
"id": 82354186,
"node_id": "MDQ6VXNlcjgyMzU0MTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/82354186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuperCB",
"html_url": "https://github.com/SuperCB",
"followers_url": "https://api.github.com/users/SuperCB/followers",
"following_url": "https://api.github.com/users/SuperCB/following{/other_user}",
"gists_url": "https://api.github.com/users/SuperCB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuperCB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuperCB/subscriptions",
"organizations_url": "https://api.github.com/users/SuperCB/orgs",
"repos_url": "https://api.github.com/users/SuperCB/repos",
"events_url": "https://api.github.com/users/SuperCB/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuperCB/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes [Forget to put the global-step in lr scheduler in train.py #25079](https://github.com/huggingface/transformers/issues/25079)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25080/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25080",
"html_url": "https://github.com/huggingface/transformers/pull/25080",
"diff_url": "https://github.com/huggingface/transformers/pull/25080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25080.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25079/events
|
https://github.com/huggingface/transformers/issues/25079
| 1,820,294,585 |
I_kwDOCUB6oc5sf325
| 25,079 |
Forget to put the global-step in lr scheduler in train.py
|
{
"login": "SuperCB",
"id": 82354186,
"node_id": "MDQ6VXNlcjgyMzU0MTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/82354186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuperCB",
"html_url": "https://github.com/SuperCB",
"followers_url": "https://api.github.com/users/SuperCB/followers",
"following_url": "https://api.github.com/users/SuperCB/following{/other_user}",
"gists_url": "https://api.github.com/users/SuperCB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuperCB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuperCB/subscriptions",
"organizations_url": "https://api.github.com/users/SuperCB/orgs",
"repos_url": "https://api.github.com/users/SuperCB/repos",
"events_url": "https://api.github.com/users/SuperCB/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuperCB/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"Hello, a minimal reproducer for this please. The checkpointing test loads back the correct lr: https://github.com/huggingface/accelerate/actions/runs/5651936442/job/15310803049#step:5:2259",
"> Hello, a minimal reproducer for this please. The checkpointing test loads back the correct lr: https://github.com/huggingface/accelerate/actions/runs/5651936442/job/15310803049#step:5:2259\r\n\r\nHello, the difference between your job and ours lies in the **different lr_scheduler type**. We use the cosine lr_scheculer (with `--lr_scheduler_type cosine`) instead of set `\"scheduler\"` in `deepspeed_config.json`.\r\n\r\nWhat we see in wandb is, the `train/learning_rate` always restart from zero with warmups when we resume training from checkpoints.\r\n\r\n\r\n\r\nAs we go through the code(correct us if we were wrong), the reason is that DeepSpeed doesn't support consine lr_scheduler, so Transformers use [pytorch native lr_scheduler](https://github.com/pytorch/pytorch/blob/e18d53e2df44bccd7231cdf3dad6ea1255221bd4/torch/optim/lr_scheduler.py#L124), which maintains a self-incrementing variable `self.last_epoch` if we don't pass `epoch` when calling `step()`.\r\n\r\nWe will provide a minimal reproducer later.\r\n",
"We used [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) to reproduce this bug and here is our startup script and DeepSpeed configuration.\r\n**`startup script`**\r\n```\r\ntorchrun --nproc_per_node=8 --master_port=29600 train.py \\\r\n\t--model_name_or_path '/nvme/models/llama-13b/' \\\r\n\t--data_path ./alpaca_data.json \\\r\n\t--bf16 True \\\r\n\t--output_dir './chpt' \\\r\n\t--num_train_epochs 3 \\\r\n\t--per_device_train_batch_size 4 \\\r\n\t--per_device_eval_batch_size 4 \\\r\n\t--gradient_accumulation_steps 8 \\\r\n\t--evaluation_strategy \"no\" \\\r\n\t--save_strategy \"steps\" \\\r\n\t--save_steps 2000 \\\r\n\t--save_total_limit 1 \\\r\n\t--learning_rate 2e-5 \\\r\n\t--weight_decay 0. \\\r\n\t--warmup_ratio 0.03 \\\r\n\t--logging_steps 1 \\\r\n\t--learning_rate 2e-4 \\\r\n\t--warmup_ratio 0.1 \\\r\n\t--tf32 True \\\r\n\t--lr_scheduler_type 'cosine' \\\r\n\t--adam_beta2 0.95 \\\r\n\t--deepspeed 'deepspeed.zero3.json' \\\r\n\t--save_strategy 'steps' \\\r\n\t--save_steps 10\r\n```\r\n`deepspeed config`\r\n```\r\n{\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 5,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false,\r\n \"flops_profiler\": {\r\n \"enabled\": true,\r\n \"profile_step\": 5,\r\n \"module_depth\": -1,\r\n \"top_modules\": 1,\r\n \"detailed\": true,\r\n \"output_file\": \"logs/deepspeed_flops.log\"\r\n }\r\n}\r\n```\r\n**before restart**\r\n\r\n\r\n\r\n**after restart**\r\n\r\n\r\n**wandb**\r\n\r\n\r\n\r\nWe save a checkpoint every ten steps, and when we restart the training from the previous checkpoint, we observe that the learning rate after the restart is different from the learning rate before the restart. This indicates that the LR (learning rate) scheduler's state is not reset during the restart.\r\n",
"@pacman100 Hello, I apologize for bothering you. Could you please let us know if there have been any updates or if there might be any corrections needed in our description? Much thanks.",
"Hello @Dounm, please see https://github.com/huggingface/transformers/issues/25865#issuecomment-1700722492 and https://github.com/huggingface/transformers/issues/25865#issuecomment-1702239905. It should be resolving this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
We fine-tuned the llama model using trainer.py and accelerated the training process using DeepSpeed. We chose the cosine function as the type for the lr scheduler.
### Expected behavior
In the process of training an LLM using `train()` in trainer.py, we observed that after reloading the model from a checkpoint with DeepSpeed ZeRO-3, the learning rate scheduler did not resume from the learning rate reached in the previous run; instead, there were gaps in the learning rate progression. We used an lr scheduler of the cosine type, which is not provided by DeepSpeed, so we observed that the trainer used the [lr_scheduler](https://github.com/pytorch/pytorch/blob/main/torch/optim/lr_scheduler.py#L124) defined in torch.
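A minimal sketch of the PyTorch behavior we believe is at play (the scheduler below is only illustrative; the Trainer actually builds a warmup + cosine `LambdaLR`): calling `step()` without an explicit epoch/global step just increments the scheduler's internal `last_epoch`, so a scheduler rebuilt at resume time without its saved state restarts from 0.
```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]          # toy parameter
optimizer = torch.optim.SGD(params, lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for _ in range(10):
    optimizer.step()
    scheduler.step()            # no epoch passed -> last_epoch += 1 internally

print(scheduler.last_epoch)     # 10

# A freshly built scheduler with no state_dict restored starts over:
resumed = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
print(resumed.last_epoch)       # 0 -> the schedule restarts from the beginning
```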
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25079/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25078/events
|
https://github.com/huggingface/transformers/pull/25078
| 1,820,172,996 |
PR_kwDOCUB6oc5WUyoA
| 25,078 |
replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size` in readme of multiple-choice task
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25078). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,694 | 1,690 |
CONTRIBUTOR
| null |
### What does this PR do?
Replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size` in the README of the multiple-choice task, as the `per_gpu_*` training arguments are deprecated.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25078/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25078",
"html_url": "https://github.com/huggingface/transformers/pull/25078",
"diff_url": "https://github.com/huggingface/transformers/pull/25078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25078.patch",
"merged_at": 1690287117000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25077/events
|
https://github.com/huggingface/transformers/pull/25077
| 1,820,099,126 |
PR_kwDOCUB6oc5WUitT
| 25,077 |
[`PEFT`] Peft integration alternative design
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Mmmm, but know we can't do `model = AutoModelForCausalLM.from_pretrained(\"ybelkada/opt-350m-lora\")` despite the checkpoint having all the correct info to load the model (compared to the alternative PR). Can we still add the necessary code in `from_pretrained` to have the load take one line instead of two?\r\n\r\nApart from that, the design looks great to me!",
"At the moment it looks like loading multiple adapters is supported, could we maybe add a couple of examples when multiple adapters are loaded?",
"@patrickvonplaten \r\n\r\n> At the moment it looks like loading multiple adapters is supported, could we maybe add a couple of examples when multiple adapters are loaded?\r\n\r\nModified the PR description to expose all the possible things that users can do currently",
"I think that now all core components are available, before adding the tests and the documentation, I would love to have a round of review @patrickvonplaten @sgugger @pacman100 @BenjaminBossan \r\nI have updated the PR description to detail all possible things users can perform with this integration. Note that there is also a Trainer support and is fully BC with PeftModels\r\n\r\nEDIT: I don't know why the CI is currently failing :/ ",
"Ah, and also this PR should close https://github.com/huggingface/transformers/pull/24750 as pipeline should work out of the box now if you inject adapters or pass a path to an adapter file",
"Thank you @younesbelkada for the impressive work on adding PEFT as a utility library in Transformers ๐ฅ๐โจ",
"> # What does this PR do?\r\n> From the offline discussion + the comments from @patrickvonplaten in [#24827 (comment)](https://github.com/huggingface/transformers/pull/24827#issuecomment-1641750464) I propose a new design for tightly integrating PEFT into transformers. This integration enables loading any PEFT adapter that is saved locally or on the Hub directly into PEFT without dispatching the entire model creation process to PEFT as introduced in #24827.\r\n> \r\n> This would also enable an easier pipeline integration (a one-liner to load adapter weights) | EDIT: pipeline should work out of the box\r\n> \r\n> Let's constraint this integration to few PEFT methods only, for simplicity and redirect users to use PEFT for advanced features (e.g. merge and unload) and advanced PEFT methods (adaptation prompt, prompt learning).\r\n> \r\n> Current API:\r\n> \r\n> ## Load a model with an adapter locally or from the Hub:\r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForCausalLM, OPTForCausalLM\r\n> \r\n> model_id = \"facebook/opt-350m\"\r\n> adapter_model_id = \"ybelkada/opt-350m-lora\"\r\n> \r\n> # directly on from_pretrained\r\n> model = AutoModelForCausalLM.from_pretrained(adapter_model_id)\r\n> print(model)\r\n> \r\n> # directly on from_pretrained\r\n> model = OPTForCausalLM.from_pretrained(adapter_model_id)\r\n> print(model)\r\n> ```\r\n> \r\n> ## Load and attach adapter to an existing model\r\n> ```python\r\n> from transformers import AutoModelForCausalLM\r\n> \r\n> # with load_adapter\r\n> model = AutoModelForCausalLM.from_pretrained(model_id)\r\n> model.load_adapter(adapter_model_id)\r\n> \r\n> print(model)\r\n> \r\n> # 8-bit + multiGPU compatiblity\r\n> model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map=\"balanced\")\r\n> model.load_adapter(adapter_model_id)\r\n> \r\n> print(model)\r\n> print(set(model.hf_device_map.values()))\r\n> \r\n> _ = model(torch.LongTensor([[0, 1, 2, 3]]).to(0))\r\n> ```\r\n> \r\n> ## Attach an adapter, iteratively enable / disable adapters\r\n> ```python\r\n> from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer\r\n> from peft import PeftConfig\r\n> \r\n> model_id = \"facebook/opt-350m\"\r\n> adapter_model_id = \"ybelkada/opt-350m-lora\"\r\n> tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n> text = \"Hello\"\r\n> inputs = tokenizer(text, return_tensors=\"pt\")\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(model_id)\r\n> peft_config = PeftConfig.from_pretrained(adapter_model_id)\r\n> \r\n> # To get random weights\r\n> peft_config.init_lora_weights = False\r\n> \r\n> model.add_adapter(peft_config)\r\n> print(model)\r\n> \r\n> model.disable_adapters()\r\n> output_disabled = model.generate(**inputs)\r\n> print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))\r\n> >>> Hello, I'm a newbie to this sub. 
I'm looking for a good place to\r\n> \r\n> model.enable_adapters()\r\n> output_enabled = model.generate(**inputs)\r\n> print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))\r\n> >>> Hello, MMMMMMMM\r\n> ```\r\n> \r\n> ## Add multiple adapters\r\n> ```python\r\n> from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer\r\n> from peft import PeftConfig, LoraConfig\r\n> \r\n> model_id = \"facebook/opt-350m\"\r\n> \r\n> # directly on from_pretrained\r\n> model = AutoModelForCausalLM.from_pretrained(model_id)\r\n> \r\n> lora_config = LoraConfig(\r\n> target_modules=[\"q_proj\", \"k_proj\"],\r\n> init_lora_weights=False\r\n> )\r\n> \r\n> model.add_adapter(lora_config, adapter_name=\"adapter_1\")\r\n> \r\n> # attach new adapter with same config\r\n> model.add_adapter(lora_config, adapter_name=\"adapter_2\")\r\n> \r\n> model.set_adapter(\"adapter_1\")\r\n> output_disabled = model.generate(**inputs)\r\n> print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))\r\n> >>> Hello, I'm a newbie to this sub. I'm looking for a good place to\r\n> \r\n> model.set_adapter(\"adapter_2\")\r\n> output_enabled = model.generate(**inputs)\r\n> print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))\r\n> >>> Hello, I'm a newbie to the game. I'm looking for a good way to\r\n> ```\r\n> \r\n> ## Save adapters\r\n> ```python\r\n> from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer\r\n> from peft import PeftConfig, LoraConfig\r\n> \r\n> model_id = \"facebook/opt-350m\"\r\n> \r\n> # directly on from_pretrained\r\n> model = AutoModelForCausalLM.from_pretrained(model_id)\r\n> \r\n> lora_config = LoraConfig(\r\n> target_modules=[\"q_proj\", \"k_proj\"],\r\n> )\r\n> \r\n> model.add_adapter(lora_config)\r\n> \r\n> ... # train here\r\n> \r\n> model.save_pretrained(save_dir) \r\n> \r\n> # you can either load it back with transformers or PEFT\r\n> \r\n> from peft import AutoPeftModelForCausalLM\r\n> \r\n> model = AutoPeftModelForCausalLM.from_pretrained(save_dir)\r\n> \r\n> # or\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(save_dir)\r\n> ```\r\n> \r\n> ## Train adapters using Trainer\r\n> Check this gist: https://gist.github.com/younesbelkada/cdda6e4abcb09e58f6324d75e0d88862\r\n> \r\n> This PR is on par with: [huggingface/peft#749](https://github.com/huggingface/peft/pull/749)\r\n> \r\n> Features to support:\r\n> \r\n> * [x] loading PEFT adapters\r\n> * [x] using multiple adapters\r\n> * [x] deal with models loaded with accelerate\r\n> * [x] Loading directly from `from_pretrained`\r\n> * [ ] Merging adapter weights - to not support\r\n> * [ ] Unload adapter weights - to not support\r\n> * [x] Training with BC with expected PEFT checkpoints format (do we really want to support training? Shall we just redirect users to load a classic `PeftModel` if they want to train a model?)\r\n> * [x] What about `save_pretrained` ?\r\n> \r\n> Features to **not** support:\r\n> \r\n> * [x] disabling adapters\r\n> * [x] prompt tuning / prompt learning methods\r\n> \r\n> TODOs:\r\n> \r\n> * [ ]ย docs\r\n> * [x] tests\r\n> \r\n> cc @sgugger @patrickvonplaten @BenjaminBossan @pacman100\r\n\r\nWhat packages to install do i need to run this code?",
"> What packages to install do i need to run this code?\r\n\r\nJust install the latest `transformers` & `peft`\r\n\r\n```bash\r\npip install -U peft transformers\r\n```"
] | 1,690 | 1,704 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
From the offline discussion + the comments from @patrickvonplaten in https://github.com/huggingface/transformers/pull/24827#issuecomment-1641750464 I propose a new design for tightly integrating PEFT into transformers.
This integration enables loading any PEFT adapter that is saved locally or on the Hub directly into transformers without dispatching the entire model creation process to PEFT as introduced in #24827.
This would also enable an easier pipeline integration (a one-liner to load adapter weights) | EDIT: pipeline should work out of the box
Let's constrain this integration to a few PEFT methods only, for simplicity, and redirect users to PEFT for advanced features (e.g. merge and unload) and advanced PEFT methods (adaptation prompt, prompt learning).
Current API:
## Load a model with an adapter locally or from the Hub:
```python
import torch
from transformers import AutoModelForCausalLM, OPTForCausalLM
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(adapter_model_id)
print(model)
# directly on from_pretrained
model = OPTForCausalLM.from_pretrained(adapter_model_id)
print(model)
```
## Load and attach adapter to an existing model
```python
from transformers import AutoModelForCausalLM
# with load_adapter
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(adapter_model_id)
print(model)
# 8-bit + multiGPU compatiblity
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="balanced")
model.load_adapter(adapter_model_id)
print(model)
print(set(model.hf_device_map.values()))
_ = model(torch.LongTensor([[0, 1, 2, 3]]).to(0))
```
## Attach an adapter, iteratively enable / disable adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)
# To get random weights
peft_config.init_lora_weights = False
model.add_adapter(peft_config)
print(model)
model.disable_adapters()
output_disabled = model.generate(**inputs)
print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to this sub. I'm looking for a good place to
model.enable_adapters()
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
>>> Hello, MMMMMMMM
```
## Add multiple adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig, LoraConfig
model_id = "facebook/opt-350m"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
init_lora_weights=False
)
model.add_adapter(lora_config, adapter_name="adapter_1")
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
model.set_adapter("adapter_1")
output_disabled = model.generate(**inputs)
print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to this sub. I'm looking for a good place to
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to the game. I'm looking for a good way to
```
## Save adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig, LoraConfig
model_id = "facebook/opt-350m"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
)
model.add_adapter(lora_config)
... # train here
model.save_pretrained(save_dir)
# you can either load it back with transformers or PEFT
from peft import AutoPeftModelForCausalLM
model = AutoPeftModelForCausalLM.from_pretrained(save_dir)
# or
model = AutoModelForCausalLM.from_pretrained(save_dir)
```
## Train adapters using Trainer
Check this gist: https://gist.github.com/younesbelkada/cdda6e4abcb09e58f6324d75e0d88862
This PR is on par with: https://github.com/huggingface/peft/pull/749
Features to support:
- [x] loading PEFT adapters
- [x] using multiple adapters
- [x] deal with models loaded with accelerate
- [x] Loading directly from `from_pretrained`
- [ ] Merging adapter weights - to not support
- [ ] Unload adapter weights - to not support
- [x] Training with BC with expected PEFT checkpoints format (do we really want to support training? Shall we just redirect users to load a classic `PeftModel` if they want to train a model?)
- [x] What about `save_pretrained` ?
Features to **not** support:
- [x] disabling adapters
- [x] prompt tuning / prompt learning methods
TODOs:
- [ ] docs
- [x] tests
cc @sgugger @patrickvonplaten @BenjaminBossan @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25077/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25077/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25077",
"html_url": "https://github.com/huggingface/transformers/pull/25077",
"diff_url": "https://github.com/huggingface/transformers/pull/25077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25077.patch",
"merged_at": 1692378484000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25076/events
|
https://github.com/huggingface/transformers/issues/25076
| 1,820,091,573 |
I_kwDOCUB6oc5sfGS1
| 25,076 |
Evaluation resulting in "RuntimeError: Tensors must be contiguous".
|
{
"login": "notrichardren",
"id": 34405553,
"node_id": "MDQ6VXNlcjM0NDA1NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/34405553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notrichardren",
"html_url": "https://github.com/notrichardren",
"followers_url": "https://api.github.com/users/notrichardren/followers",
"following_url": "https://api.github.com/users/notrichardren/following{/other_user}",
"gists_url": "https://api.github.com/users/notrichardren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notrichardren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notrichardren/subscriptions",
"organizations_url": "https://api.github.com/users/notrichardren/orgs",
"repos_url": "https://api.github.com/users/notrichardren/repos",
"events_url": "https://api.github.com/users/notrichardren/events{/privacy}",
"received_events_url": "https://api.github.com/users/notrichardren/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"You should debug the code to find which tensor PyTorch complains about and make sure you add the `contiguous` as requested.\r\n\r\nBut we could also make sure to add it in `gather` directly @muellerzr ",
"@notrichardren try running again, installing accelerate via `pip install git+https://github.com/huggingface/accelerate` please, it should work with the proposed solution :) ",
"Thanks a ton -- it seems to work now!\r\n\r\nI had to make a few changes to my codebase that were bugs on my side (DeepSpeed zero stage 3 and modifications to compute_metrics) but otherwise, it works and the original error doesn't pop up."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: fp16
- use_cpu: False
- num_processes: 7
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help?
@sgugger
Apologies in advance if this is not a bug and instead a fault of my code. Either way, I think there's likely room to either fix a bug or provide significant improvements to the documentation.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provide a minimum viable example:
```
import torch
from torch import nn
from transformers import AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset
from transformers import TrainingArguments, Trainer
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, input_ids, attention_mask, labels):
"""
Given inputs to a language model, returns a reward at the last sequence position (no normalization).
Input is the output of a tokenizer.
Output is float for size batch_size.
"""
outputs = self.language_model(input_ids = input_ids, attention_mask = attention_mask)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
loss = torch.nn.functional.cross_entropy(reward, labels.half())
return {"output": reward, "loss": loss}
# Model
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
# Tokenizer
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
# Dataset:
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # Load a datasetdict with ['train', 'test'] with features input_ids, attention_mask, labels
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Metric
def compute_metrics(eval_preds):
rewards, labels = eval_preds
return nn.MSELoss(rewards, labels)
# Training -- training a probe on last unembed layer
for param in reward_model.parameters():
param.requires_grad = False
for param in reward_model.fc.parameters():
param.requires_grad = True
args = TrainingArguments("test-trainer",
evaluation_strategy="steps",
eval_steps = 50,
num_train_epochs = 3,
per_device_train_batch_size = 4,
per_device_eval_batch_size = 4,
remove_unused_columns = False,
logging_strategy = "steps",
logging_steps = 3,
fp16=True
)
trainer = Trainer( # probably using cross entropy loss
reward_model,
args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
data_collator=data_collator,
tokenizer=tokenizer,
# compute_metrics=compute_metrics,
)
trainer.train()
```
```
Traceback (most recent call last):
File "reward-model-v3-trainer.py", line 166, in <module>
trainer.train()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1901, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2226, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2934, in evaluate
output = eval_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 3147, in evaluation_loop
logits = self.accelerator.gather_for_metrics((logits))
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/accelerator.py", line 2012, in gather_for_metrics
tensor = self.gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/accelerator.py", line 1985, in gather
return gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 289, in gather
return _gpu_gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 269, in _gpu_gather
return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 128, in recursively_apply
return func(data, *args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 266, in _gpu_gather_one
torch.distributed.all_gather(output_tensors, tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2448, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```
### Expected behavior
I would expect the evaluate function to work much like the example provided in the [documentation](https://huggingface.co/learn/nlp-course/chapter3/3?fw=pt#evaluation).
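For anyone hitting this before a fix: a minimal sketch of why the error appears and of the workaround pointed to in the comments above (the tensor here is hypothetical; in the reproduction it is the `reward[:, -1]` slice). The slice is a non-contiguous view, and `torch.distributed.all_gather` requires contiguous tensors, so calling `.contiguous()` before gathering avoids the error; per the comments, this was subsequently handled inside `accelerate`'s `gather` directly.
```python
import torch

x = torch.arange(12).reshape(3, 4)
logits = x[:, -1]              # a strided view, like `reward[:, -1]` above
print(logits.is_contiguous())  # False -> all_gather rejects tensors like this

logits = logits.contiguous()   # copy into contiguous memory before gathering
print(logits.is_contiguous())  # True
```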
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25076/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25075/events
|
https://github.com/huggingface/transformers/pull/25075
| 1,820,088,185 |
PR_kwDOCUB6oc5WUgVL
| 25,075 |
Set `TF32` flag for PyTorch cuDNN backend
|
{
"login": "XuehaiPan",
"id": 16078332,
"node_id": "MDQ6VXNlcjE2MDc4MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuehaiPan",
"html_url": "https://github.com/XuehaiPan",
"followers_url": "https://api.github.com/users/XuehaiPan/followers",
"following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}",
"gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions",
"organizations_url": "https://api.github.com/users/XuehaiPan/orgs",
"repos_url": "https://api.github.com/users/XuehaiPan/repos",
"events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuehaiPan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Set the `TF32` flag for both PyTorch's CUDA and cuDNN backends.
Currently, the `TrainingArguments` parser only sets the `TF32` flag for the CUDA backend. The user can manually pass `--tf32 False` on the command line, but `torch.backends.cudnn.allow_tf32` would remain `True` during training. There are also some test cases in which we manually set `torch.backends.cuda.matmul.allow_tf32 = False`.
NOTE: The default value of `torch.backends.cudnn.allow_tf32` for the cuDNN backend is `True` (it was added 3 years ago).
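For reference, a minimal sketch of the two flags involved (the values shown simply mirror what `--tf32` is meant to control on both backends):
```python
import torch

# what `--tf32 True` / `--tf32 False` should toggle consistently:
torch.backends.cuda.matmul.allow_tf32 = True   # CUDA matmul backend
torch.backends.cudnn.allow_tf32 = True         # cuDNN backend (defaults to True)
```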
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25075/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25075",
"html_url": "https://github.com/huggingface/transformers/pull/25075",
"diff_url": "https://github.com/huggingface/transformers/pull/25075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25075.patch",
"merged_at": 1690286689000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25074/events
|
https://github.com/huggingface/transformers/pull/25074
| 1,820,073,677 |
PR_kwDOCUB6oc5WUdN3
| 25,074 |
Fix: repeat per sample for SAM image embeddings
|
{
"login": "xk-huang",
"id": 33593707,
"node_id": "MDQ6VXNlcjMzNTkzNzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33593707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xk-huang",
"html_url": "https://github.com/xk-huang",
"followers_url": "https://api.github.com/users/xk-huang/followers",
"following_url": "https://api.github.com/users/xk-huang/following{/other_user}",
"gists_url": "https://api.github.com/users/xk-huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xk-huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xk-huang/subscriptions",
"organizations_url": "https://api.github.com/users/xk-huang/orgs",
"repos_url": "https://api.github.com/users/xk-huang/repos",
"events_url": "https://api.github.com/users/xk-huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/xk-huang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada since you added the model.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25074). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Use `repeat_interleave`, rather than `repeat`, to avoid misaligning the batch dimension: each image embedding is repeated `point_batch_size` times (see the small sketch below the references).
Note that the semantics of `torch.Tensor.repeat` differ from those of `tf.repeat` and `np.repeat`.
ref:
- https://github.com/facebookresearch/segment-anything/blob/6fdee8f2727f4506cfbbe553e23b895e27956588/segment_anything/modeling/mask_decoder.py#L126
- https://www.tensorflow.org/api_docs/python/tf/repeat
- https://numpy.org/doc/stable/reference/generated/numpy.repeat.html
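As a minimal illustration (plain PyTorch, independent of the SAM code) of the difference between the two ops:
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])  # stand-in for two image embeddings

# Tensor.repeat tiles the whole tensor: [img0, img1, img0, img1]
print(x.repeat(2, 1))                 # tensor([[1, 2], [3, 4], [1, 2], [3, 4]])

# repeat_interleave repeats each row consecutively: [img0, img0, img1, img1],
# which matches the semantics of tf.repeat / np.repeat along an axis
print(x.repeat_interleave(2, dim=0))  # tensor([[1, 2], [1, 2], [3, 4], [3, 4]])
```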
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25074/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25074",
"html_url": "https://github.com/huggingface/transformers/pull/25074",
"diff_url": "https://github.com/huggingface/transformers/pull/25074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25074.patch",
"merged_at": 1690288214000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25073/events
|
https://github.com/huggingface/transformers/issues/25073
| 1,819,986,324 |
I_kwDOCUB6oc5sesmU
| 25,073 |
Slow Tokenizer adds whitespace after special token
|
{
"login": "g588928812",
"id": 128976718,
"node_id": "U_kgDOB7AHTg",
"avatar_url": "https://avatars.githubusercontent.com/u/128976718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g588928812",
"html_url": "https://github.com/g588928812",
"followers_url": "https://api.github.com/users/g588928812/followers",
"following_url": "https://api.github.com/users/g588928812/following{/other_user}",
"gists_url": "https://api.github.com/users/g588928812/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g588928812/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g588928812/subscriptions",
"organizations_url": "https://api.github.com/users/g588928812/orgs",
"repos_url": "https://api.github.com/users/g588928812/repos",
"events_url": "https://api.github.com/users/g588928812/events{/privacy}",
"received_events_url": "https://api.github.com/users/g588928812/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! My first suggestion would be to not use the legacy behaviour by setting `legacy = False` when you initialize the tokenizer. \r\nSecond, the `txt_encoded == txt_encoded_decoded` assumption is not always true for all tokenizers. In this case, the decoding adds an extra space, maybe because it is based on the previous legacy behaviour. Will investigate",
"> My first suggestion would be to not use the legacy behaviour by setting `legacy = False` when you initialize the tokenizer.\r\n\r\nthanks! I tried that though and it did not change the output",
"Ok, the same issue exists with the fast version, but the problem is with the encoding that adds extra spaces between the special tokens.... It's a mess haha",
"@ArthurZucker\r\nSorry I can't understand when and why we need to set `legacy=False` , Could you exlpain๏ผ\r\nI run the code as follows:\r\n```python\r\n txt = \"one more thing\" + \"<s>\" + \"traditionally\" + \"<s>\"\r\n tokenizer1 = LlamaTokenizer.from_pretrained(\r\n \"./resources/models/llama-2-7b-hf\", legacy=True, use_fast=False\r\n )\r\n tokenizer2 = LlamaTokenizer.from_pretrained(\r\n \"./resources/models/llama-2-7b-hf\", legacy=False, use_fast=False\r\n )\r\n\r\n t1 = tokenizer1.tokenize(txt)\r\n t2 = tokenizer2.tokenize(txt)\r\n\r\n```\r\nThen I got:\r\n```\r\nt1:['โone', 'โmore', 'โthing', '<s>', 'โtradition', 'ally', '<s>']\r\nt2:['โone', 'โmore', 'โthing', '<s>', 'tradition', 'ally', '<s>']\r\n```\r\nThe word starting with a `โ` usually means the start of a new word (as when comparing `โ`more and `ally`).\r\nEven though we don't add a space before \"traditionally\", it is still considered a new word.\r\nSo, seems `tokenizer2` is meaningful?\r\n ",
"No, words starting with `_` means that these word have a space before them, and thus the token is `_tradition`. While `tradition` is a different token. If you read the documentation that points to the PR #24565, there is a similar example. \r\nWhat's important to understand is the concept of `added tokens`. \r\n\r\nMost often, sentencepiece tokenizers have a vocabulary, but some tokens are added afterwards. This happens with t5 for example. In `transformers`, we do not modify the underlying sentencepiece object. But we still support adding tokens. \r\n\r\nNow imagine if `thin` is part of the sentencpiece vocab, but not `_thin`. If `thin` appears next to a work like `thinking`, is will be tokenized as [`_`, `thin`, `king`], not [`_`, `thin`, `_king`]. The same applies for any tokens that are originally part of the sentencepiece model. \r\n\r\nIn `transformers` all `special tokens` are kind of added to the vocabulary, so we want to reproduce the behaviour and not add extra space. \r\n\r\nPS: please refrain from asking something pretty much unrelated. If you have a question (not a bug) feel free to post it on [the discussion forum](https://discuss.huggingface.co/)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ArthurZucker this should be reopened, right? As stated in your previous response:\r\n> In transformers all special tokens are kind of added to the vocabulary, so we want to reproduce the behaviour **and not add extra space.**\r\n\r\nSo, basically, there **should not be added space after special tokens**... However, I'm getting the opposite results to [this](https://github.com/huggingface/transformers/issues/25073#issuecomment-1655271420), with `legacy=False` being incorrect.\r\n```py\r\n\r\nfrom transformers import AutoTokenizer\r\ntext = \"hello world\"\r\n\r\n# 1. Legacy tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"hf-internal-testing/llama-tokenizer\", use_fast=False, legacy=True)\r\ntoken_ids = tokenizer.encode(text, add_special_tokens=True)\r\nprint(f'{token_ids=}') # [1, 22172, 3186] (correct)\r\nprint(f'{tokenizer.decode(token_ids)=}') # '<s>hello world' (correct)\r\n\r\n# 2. Non-Legacy tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"hf-internal-testing/llama-tokenizer\", use_fast=False, legacy=False)\r\ntoken_ids = tokenizer.encode(text, add_special_tokens=True)\r\nprint(f'{token_ids=}') # [1, 22172, 3186] (correct)\r\nprint(f'{tokenizer.decode(token_ids)=}') # '<s> hello world' (incorrect)\r\n```\r\n\r\n(this is also different to the other related issues, since those deals with encoding and not decoding)",
"Yes, until #26678 is merged ",
"Just wait a bit!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closed by https://github.com/huggingface/transformers/pull/26678 ๐ "
] | 1,690 | 1,707 | 1,707 |
NONE
| null |
### System Info
Python 3.10.6
Transformers 4.31.0
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
import transformers
tokenizer = AutoTokenizer.from_pretrained(
"../models/llama-2-7b",
use_fast=False,
)
txt="this is one sentence." + tokenizer.eos_token + "this is another sentence." + tokenizer.eos_token + "this is the third sentence." + tokenizer.eos_token
txt_encoded = tokenizer.encode(txt, add_special_tokens=False)
txt_encoded_decoded = tokenizer.decode(txt_encoded)
txt_encoded_decoded_spaces_false = tokenizer.decode(txt_encoded, spaces_between_special_tokens=False)
print(transformers.__version__)
print(tokenizer.__class__)
print(f"INPUT:\n{txt}\n")
print(f"ROUNDTRIP:\n{txt_encoded_decoded}\n")
print(f"ROUNDTRIP w/ spaces_between_special_tokens=F:\n{txt_encoded_decoded}\n")
```
**Output**:
```
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
4.31.0
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>
INPUT:
this is one sentence.</s>this is another sentence.</s>this is the third sentence.</s>
ROUNDTRIP:
this is one sentence.</s> this is another sentence.</s> this is the third sentence.</s>
ROUNDTRIP w/ spaces_between_special_tokens=F:
this is one sentence.</s> this is another sentence.</s> this is the third sentence.</s>
```
### Expected behavior
`txt == txt_encoded_decoded`
I expect `txt` to be the same as `decode(encode(txt))`; however, a whitespace is added after each special token (`</s>`). From what I saw in previous issues, `spaces_between_special_tokens=False` should change that, but it does not: the whitespaces are still there.
What am I missing?
Thank you for your help, and apologies in advance: this issue seems to come up quite often, and I spent quite some time going through issues in this repo, but nothing solved it for me.
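As a small debugging aid (not a fix), the snippet below can be appended to the reproduction above to check whether the space exists at the token level or is only introduced when tokens are joined back into text; it only uses the standard tokenizer API and reuses `tokenizer` and `txt_encoded` from above:
```python
# Inspect the raw tokens to see whether the extra space exists as a token
# or is only introduced when the pieces are joined back into a string.
tokens = tokenizer.convert_ids_to_tokens(txt_encoded)
print(tokens)                                      # raw sentencepiece tokens
print(tokenizer.convert_tokens_to_string(tokens))  # shows where the space appears
```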
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25073/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25072/events
|
https://github.com/huggingface/transformers/pull/25072
| 1,819,918,244 |
PR_kwDOCUB6oc5WT648
| 25,072 |
[Docs] fix rope_scaling doc string
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
The `rope_scaling` argument supports *two* scaling strategies, not three.
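For reference, a minimal sketch of selecting each of the two strategies; at the time of this PR, `type` accepts `"linear"` or `"dynamic"` and `factor` must be a float greater than 1:
```python
from transformers import LlamaConfig

# Linear position interpolation
linear_cfg = LlamaConfig(rope_scaling={"type": "linear", "factor": 2.0})

# Dynamic NTK scaling (the second supported strategy)
dynamic_cfg = LlamaConfig(rope_scaling={"type": "dynamic", "factor": 2.0})
```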
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25072/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25072",
"html_url": "https://github.com/huggingface/transformers/pull/25072",
"diff_url": "https://github.com/huggingface/transformers/pull/25072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25072.patch",
"merged_at": 1690284851000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25071/events
|
https://github.com/huggingface/transformers/issues/25071
| 1,819,894,446 |
I_kwDOCUB6oc5seWKu
| 25,071 |
evaluation_strategy and eval_steps does not work
|
{
"login": "nkjulia",
"id": 15606158,
"node_id": "MDQ6VXNlcjE1NjA2MTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/15606158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nkjulia",
"html_url": "https://github.com/nkjulia",
"followers_url": "https://api.github.com/users/nkjulia/followers",
"following_url": "https://api.github.com/users/nkjulia/following{/other_user}",
"gists_url": "https://api.github.com/users/nkjulia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nkjulia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nkjulia/subscriptions",
"organizations_url": "https://api.github.com/users/nkjulia/orgs",
"repos_url": "https://api.github.com/users/nkjulia/repos",
"events_url": "https://api.github.com/users/nkjulia/events{/privacy}",
"received_events_url": "https://api.github.com/users/nkjulia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please provide a reproducer we can execute. We do not have your `train_file` and `validation_file`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have the same problem after I update transformers to 4.33.2 and resuming interrupted training. No matter how I change it, eval_steps and save_steps is always 500.\r\n",
"same comment, sorry but if we don't have a reproducer we cannot help you ๐ข "
] | 1,690 | 1,695 | 1,693 |
NONE
| null |
### System Info
- `transformers` version: 4.27.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I train an LM using run_clm.py with the parameters below:
--model_name_or_path gpt2 \
--output_dir ${out_model_dir} \
--train_file ${train_file} \
--validation_file ${validation_file} \
--validation_split_percentage 5 \
--block_size 512 \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--do_train --do_eval --line_by_line \
--evaluation_strategy steps \
--eval_steps 100
but no eval results are printed during training.
### Expected behavior
Eval results should be printed every `eval_steps` steps.
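For reference, the same settings expressed directly with `TrainingArguments` (the output directory below is a placeholder); `eval_steps` only takes effect together with `evaluation_strategy="steps"` and `do_eval=True`:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",               # placeholder
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",    # evaluate every `eval_steps` optimizer steps
    eval_steps=100,
    per_device_train_batch_size=16,
    num_train_epochs=2,
)
print(training_args.evaluation_strategy, training_args.eval_steps)
```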
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25071/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25070/events
|
https://github.com/huggingface/transformers/issues/25070
| 1,819,786,262 |
I_kwDOCUB6oc5sd7wW
| 25,070 |
Param grad None despite model training with requires_grad=True
|
{
"login": "Remorax",
"id": 26062692,
"node_id": "MDQ6VXNlcjI2MDYyNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/26062692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Remorax",
"html_url": "https://github.com/Remorax",
"followers_url": "https://api.github.com/users/Remorax/followers",
"following_url": "https://api.github.com/users/Remorax/following{/other_user}",
"gists_url": "https://api.github.com/users/Remorax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Remorax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Remorax/subscriptions",
"organizations_url": "https://api.github.com/users/Remorax/orgs",
"repos_url": "https://api.github.com/users/Remorax/repos",
"events_url": "https://api.github.com/users/Remorax/events{/privacy}",
"received_events_url": "https://api.github.com/users/Remorax/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi since this is an audio model.",
"Hey @Remorax - the print statements you've added are in the forward pass of the model, where the gradients will always be zero. The gradients are only computed in the back prop, which is triggered once the forward pass is completed. Once the back propagation is complete, the gradients are used to compute the parameter updates (using the optimiser), and the parameter updates applied to the parameters. The gradients and parameters updates are then set to zero, and we go again:\r\nhttps://github.com/huggingface/transformers/blob/2fac3422389d5b4284482f036409222b3beba822/src/transformers/trainer.py#L1892C38-L1892C38\r\n\r\nThis is why the gradients are always zero for the print statements you've added (they're reset after each parameter update step). If the gradients were truly zero for every training step, then we'd never make any parameter updates, and the train loss would stay constant. The fact that your loss is decreasing normally means that the gradients are being computed and parameter updates applied to the params. Hope that explains it.",
"Thanks so much, that helps! Yes I suspected it was training anyway but didn't know that .grad was reset at every backprop step - thought it would be visible in the forward pass as well.\r\n\r\nThanks for clarifying!"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to call the HubertForCTC class and fine-tune the Hubert-base model on LibriSpeech 100h using CTC loss. I notice that during training, the .grad values of a) the model parameters (i.e. self.hubert.parameters) as well as b) the output layer parameters (self.lm_head.parameters) are always None (even after several backprop updates), even though requires_grad is True for all of these parameters. More confusingly, the loss is decreasing normally and the WER is improving. Could someone explain why? Unless I am missing something, the .grad value should be set after backpropagation, should it not?
FYI, I have followed the Hugging Face blog on fine-tuning Wav2Vec2 and adapted it for Hubert. I provide my [train.py](https://gist.github.com/Remorax/9a68143c56f2457969a3ab6a4b360d90) and my [config file](https://gist.github.com/Remorax/2cde160f4fd87166ada46796746b9c6f) here.
Steps to reproduce:
1. Replace lines 1234-1245 of [modelling_hubert.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/hubert/modeling_hubert.py) with this snippet (adds some print statements):
```
outputs = self.hubert(
input_values,
attention_mask=attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
print (f"No. of params: {len([p for p in list(self.hubert.parameters())])}")
print (f"No. of params with grad updated: {len([p for p in list(self.hubert.parameters()) if p.grad])}")
print (f"No. of params with requires grad updated: {len([p for p in list(self.hubert.parameters()) if p.requires_grad])}")
hidden_states = outputs[0]
hidden_states = self.dropout(hidden_states)
logits = self.lm_head(hidden_states)
print (f"No. of params with grad updated in LM Head: {len([p for p in list(self.lm_head.parameters()) if p.grad])}")
print (f"No. of params with requires grad updated in LM Head: {len([p for p in list(self.lm_head.parameters()) if p.requires_grad])}")
```
2. Download [train.py](https://gist.github.com/Remorax/9a68143c56f2457969a3ab6a4b360d90) and [config.json](https://gist.github.com/Remorax/2cde160f4fd87166ada46796746b9c6f), and call train.py as follows:
```
model_name="facebook/hubert-base-ls960"
prefix="results/hubert_debug"
config_path="<path_to_config>"
rm -rf ${DIR}/${prefix}
python3 train.py \
--model_name $model_name --save_prefix ${prefix} \
--num_workers 24 --language "en" \
--trainer_config $config_path
```
The output I always get is:
No. of params: 211
No. of params with grad updated: 0
No. of params with requires grad updated: 211
No. of params with grad updated in LM Head: 0
No. of params with requires grad updated in LM Head: 2
### Expected behavior
Since I am fine-tuning all parameters, ideally, I should get:
No. of params: 211
No. of params with grad updated: 211
No. of params with requires grad updated: 211
No. of params with grad updated in LM Head: 2
No. of params with requires grad updated in LM Head: 2
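For context (see also the maintainer reply in the comments), `.grad` is only populated by the backward pass and is cleared after every optimizer step, so it is expected to be `None` when inspected inside `forward`. A minimal plain-PyTorch sketch of that lifecycle:
```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

print(all(p.grad is None for p in model.parameters()))      # True: no backward yet

loss = model(torch.randn(3, 4)).sum()
loss.backward()
print(all(p.grad is not None for p in model.parameters()))  # True: grads populated

optimizer.step()
optimizer.zero_grad()  # resets grads; set to None by default in recent PyTorch (otherwise zeroed)
print(all(p.grad is None for p in model.parameters()))      # True again under set_to_none=True
```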
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25069/events
|
https://github.com/huggingface/transformers/issues/25069
| 1,819,640,842 |
I_kwDOCUB6oc5sdYQK
| 25,069 |
Fast and normal tokenizer generate different output when handling consecutive spaces.
|
{
"login": "torshie",
"id": 1214465,
"node_id": "MDQ6VXNlcjEyMTQ0NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1214465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/torshie",
"html_url": "https://github.com/torshie",
"followers_url": "https://api.github.com/users/torshie/followers",
"following_url": "https://api.github.com/users/torshie/following{/other_user}",
"gists_url": "https://api.github.com/users/torshie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/torshie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/torshie/subscriptions",
"organizations_url": "https://api.github.com/users/torshie/orgs",
"repos_url": "https://api.github.com/users/torshie/repos",
"events_url": "https://api.github.com/users/torshie/events{/privacy}",
"received_events_url": "https://api.github.com/users/torshie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"This seems to be pretty close to #24918. \r\nI don't know how they converted the fast tokenizer but it seems wrong. I suggest you open an issue at https://hf.co/openlm-research/open_llama_7b/discussions, as if you use the `huggyllama/llama-7b` model, you will not have this issue: \r\n\r\n```python \r\nimport transformers\r\n\r\n>>> fast = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=True, cache_dir='hf_cache')\r\n>>> normal = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False, cache_dir='hf_cache')\r\n\r\n>>> print(fast.__class__, fast('a b'))\r\n<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> {'input_ids': [1, 263, 1678, 289], 'attention_mask': [1, 1, 1, 1]}\r\n>>> print(normal.__class__, normal('a b'))\r\n<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> {'input_ids': [1, 263, 1678, 289], 'attention_mask': [1, 1, 1, 1]}\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
### System Info
Transformer version: 4.29.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```python
import transformers
fast = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=True, cache_dir='hf_cache')
normal = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False, cache_dir='hf_cache')
print(fast.__class__, fast('a b'))
print(normal.__class__, normal('a b'))
```
output:
```text
<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> {'input_ids': [1, 260, 31822, 31822, 31822, 284], 'token_type_ids': [0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1]}
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> {'input_ids': [1, 260, 284], 'attention_mask': [1, 1, 1]}
```
### Expected behavior
Both tokenizers should produce the same output.
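As a small debugging addition (reusing the `fast` and `normal` tokenizers from the script above), printing the token strings makes the extra space tokens visible:
```python
# Compare the token strings produced by the two tokenizers for the same input.
ids_fast = fast('a b')['input_ids']
ids_slow = normal('a b')['input_ids']
print(fast.convert_ids_to_tokens(ids_fast))
print(normal.convert_ids_to_tokens(ids_slow))
```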
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25069/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25068/events
|
https://github.com/huggingface/transformers/pull/25068
| 1,819,637,916 |
PR_kwDOCUB6oc5WS-S2
| 25,068 |
change vocab_size in deberta-v2 default config
|
{
"login": "codingchild2424",
"id": 45235027,
"node_id": "MDQ6VXNlcjQ1MjM1MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/45235027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codingchild2424",
"html_url": "https://github.com/codingchild2424",
"followers_url": "https://api.github.com/users/codingchild2424/followers",
"following_url": "https://api.github.com/users/codingchild2424/following{/other_user}",
"gists_url": "https://api.github.com/users/codingchild2424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codingchild2424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingchild2424/subscriptions",
"organizations_url": "https://api.github.com/users/codingchild2424/orgs",
"repos_url": "https://api.github.com/users/codingchild2424/repos",
"events_url": "https://api.github.com/users/codingchild2424/events{/privacy}",
"received_events_url": "https://api.github.com/users/codingchild2424/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25068). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This is a tiny fix.
The default vocab size in the DeBERTa-v2 config does not match the DeBERTa-v2 models on the Hugging Face Hub.
I changed 128100 to 128001.
Thanks.
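For reference, a quick way to compare the configured value against the actual tokenizer size for a given checkpoint (the checkpoint name below is only an example):
```python
from transformers import AutoConfig, AutoTokenizer

checkpoint = "microsoft/deberta-v2-xlarge"  # example checkpoint
config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
print(config.vocab_size, len(tokenizer))
```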
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25068/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25068",
"html_url": "https://github.com/huggingface/transformers/pull/25068",
"diff_url": "https://github.com/huggingface/transformers/pull/25068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25068.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25067/events
|
https://github.com/huggingface/transformers/pull/25067
| 1,819,493,115 |
PR_kwDOCUB6oc5WSfFn
| 25,067 |
Fix broken link in README_hd.md
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In line 457, the link was pointing to https://huggingface.co/ instead of https://huggingface.co/docs/transformers/index#supported. I have fixed that in this PR.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25067/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25067",
"html_url": "https://github.com/huggingface/transformers/pull/25067",
"diff_url": "https://github.com/huggingface/transformers/pull/25067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25067.patch",
"merged_at": 1690286942000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25066/events
|
https://github.com/huggingface/transformers/pull/25066
| 1,819,467,505 |
PR_kwDOCUB6oc5WSZo_
| 25,066 |
fix: add TOC anchor link
|
{
"login": "eenzeenee",
"id": 71638597,
"node_id": "MDQ6VXNlcjcxNjM4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eenzeenee",
"html_url": "https://github.com/eenzeenee",
"followers_url": "https://api.github.com/users/eenzeenee/followers",
"following_url": "https://api.github.com/users/eenzeenee/following{/other_user}",
"gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions",
"organizations_url": "https://api.github.com/users/eenzeenee/orgs",
"repos_url": "https://api.github.com/users/eenzeenee/repos",
"events_url": "https://api.github.com/users/eenzeenee/events{/privacy}",
"received_events_url": "https://api.github.com/users/eenzeenee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
There are duplicate [Requirements] headings in perf_infer_gpu_one.md (lines 51 and 117), which causes an error when navigating from the table of contents. This PR adds an explicit anchor link for each heading.
Fixes #25028
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25066/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25066/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25066",
"html_url": "https://github.com/huggingface/transformers/pull/25066",
"diff_url": "https://github.com/huggingface/transformers/pull/25066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25066.patch",
"merged_at": 1690286554000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25065/events
|
https://github.com/huggingface/transformers/issues/25065
| 1,819,397,539 |
I_kwDOCUB6oc5scc2j
| 25,065 |
llama2 training has nan
|
{
"login": "LZY-the-boys",
"id": 72137647,
"node_id": "MDQ6VXNlcjcyMTM3NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZY-the-boys",
"html_url": "https://github.com/LZY-the-boys",
"followers_url": "https://api.github.com/users/LZY-the-boys/followers",
"following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}",
"gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions",
"organizations_url": "https://api.github.com/users/LZY-the-boys/orgs",
"repos_url": "https://api.github.com/users/LZY-the-boys/repos",
"events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZY-the-boys/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"What are you setting as your `pad_token`? I was getting `nan` while running batched inference (see post [here](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)) and am wondering if this might be related to your issue?",
"I set tokenizer.pad_token_id = 0, which is the same as llama1 (:",
"I have the same question!",
"Me too๏ผ",
"Hey! Thanks all for reporting this. I would suggest using the following: \r\n```python \r\n if attention_mask is not None:\r\n if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):\r\n raise ValueError(\r\n f\"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}\"\r\n )\r\n attn_weights = attn_weights + attention_mask\r\n dtype_min = torch.tensor(\r\n torch.finfo(attn_weights.dtype).min, device=attn_weights.device, dtype=attn_weights.dtype\r\n )\r\n attn_weights = torch.max(attn_weights, dtype_min)\r\n``` \r\nThis was removed from:\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L343-L348 \r\n\r\nbecause it is not used in the original model. This should help. Otherwise adding \r\n```python \r\n # clamp inf values to enable fp16 training\r\n if hidden_states.dtype == torch.float16:\r\n max_dtype = torch.finfo(hidden_states.dtype).max\r\n clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)\r\n hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\r\n```\r\nwill also help mitigate this \r\n",
"Another fix is to train with bfloat16",
"> Another fix is to train with bfloat16\r\n\r\n+1, this actually worked for me with accelerate FSDP training",
"Can you share what config you used for training @arazd I still see nan with bf16\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> I set tokenizer.pad_token_id = 0, which is the same as llama1 (:\r\n\r\n@LZY-the-boys, by the way, could you please tell how do you exclude padding tokens from loss computation then? As model expects padding tokens in the `labels` to be `-100`, what `CrossEntropyLoss` expects:\r\n\r\nhttps://github.com/huggingface/transformers/blob/000e52aec8850d3fe2f360adc6fd256e5b47fe4c/src/transformers/models/llama/modeling_llama.py#L792-L796",
"Something like this \r\n`labels = torch.where(inputs.input_ids == tokenizer.pad_token_id, labels, -100)` \r\nwould do the trick. ",
"> Hey! Thanks all for reporting this. I would suggest using the following:\r\n> \r\n> ```python\r\n> if attention_mask is not None:\r\n> if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):\r\n> raise ValueError(\r\n> f\"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}\"\r\n> )\r\n> attn_weights = attn_weights + attention_mask\r\n> dtype_min = torch.tensor(\r\n> torch.finfo(attn_weights.dtype).min, device=attn_weights.device, dtype=attn_weights.dtype\r\n> )\r\n> attn_weights = torch.max(attn_weights, dtype_min)\r\n> ```\r\n> \r\n> This was removed from:\r\n> \r\n> https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L343-L348\r\n> \r\n> because it is not used in the original model. This should help. Otherwise adding\r\n> \r\n> ```python\r\n> # clamp inf values to enable fp16 training\r\n> if hidden_states.dtype == torch.float16:\r\n> max_dtype = torch.finfo(hidden_states.dtype).max\r\n> clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)\r\n> hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\r\n> ```\r\n> \r\n> will also help mitigate this\r\n\r\nHi @ArthurZucker,\r\n\r\nI was wondering if it would be possible to integrate the provided code snippet(the 2nd approach) into the transformer's repository.\r\n\r\nI have concerns about using bf16, as it may result in some loss of precision during inference. Additionally, older GPUs like V100 do not support bf16. Currently, in order to bypass the issue of `nan` values in padded input data, I have had to manually copy a significant amount of code from the modeling_llama.py file and make the aforementioned modifications.\r\n\r\nFurthermore, to ensure the safety of these modifications, would it be possible to apply the clamp only to the padded token section? According to theory, `nan` values can occur at any token's position, not just limited to padded tokens. However, based on the investigation conducted in pull request #25284, it appears that `nan` values are most likely to occur at padded tokens. Therefore, I believe it would be sufficient to modify the logits solely for padded tokens. If, by any chance, the same error occurs at unpadded tokens, it might be preferable to consider changing the precision to fp32/bf16, as altering the logits of unpadded tokens could degrade the model's performance.\r\n\r\nThank you for your assistance.",
"Hey! I'll deep dive a bit on this issue this week, making sure we can fix this without any issues but remember that the goal of `transformers` is to make it easy to develop on top of it! \r\nTo me the changes required in a codebase look pretty minimal, clamping just before softmax. Remember that clamping has a cost in terms of speed. ",
"Hi Arthur, thank you for your suggestion! I appreciate your willingness to address the issue. Yeah, I agree clamping should do the work. And more precisely, we should clamp before layernorm. Given the chain of error:\r\n\r\n30.layernorm -> 30.attention -> 30.mlp[-inf appears] -> 31.layernorm[nan appears] -> 31.attention[more nans appear]\r\n\r\nto avoid nan, we'd better clamp somewhere before layernorm. Hence I prefer the second approach, which rescue the logits after layer 30's mlp.\r\n\r\nHowever, I am uncertain if this solution addresses the root cause of the problem. As highlighted in pr #25284, there seems to be an issue with the attention mask for left-padded tokens. Some of the padded tokens have masks like [-65534, -Inf, -Inf, -65534, -65534], resulting in attention weights of [0.33, 0, 0, 0.33, 0.33]. Consequently, the padded token at position 1 is attending to tokens after it (position 4 and 5). This behavior is unexpected in a causal language model and is not something the model should have learned during pretraining. I wonder if correcting the attention behavior would eliminate the generation of -inf values or if there might be another underlying issue.\r\n\r\nFor now, I will adhere to the solution of clamping for the sake of simplicity and ease of use.\r\n\r\nThank you again for your attention and assistance.",
"Sure! Correcting the padded causal mask is also important! I remember these issues appeared when @younesbelkada was porting the LlamaFlashAttention, I'll definitely have a look! ",
"I am observing the exact same behavior as @VeryLazyBoy on the 30th and 31st decoder layer of llama-2. I'm using llama-2-chat-7b. Is there any update on this issue or should I still use the workaround?\r\n\r\nUpdate: FYI, did some investigation and found that the nan values come from\r\n\r\n- The 30.MLP gives inf values in ``hidden_states``, same as what @VeryLazyBoy said\r\n- In 31.input_layernorm, there is ``variance = hidden_states.pow(2).mean(-1, keepdim=True)``. Variance now has a few inf values from ``hidden_states``.\r\n- Then, the next line in 31.input_layernorm is ``hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)``. ``torch.rsqrt(variance + self.variance_epsilon)`` turns those inf values to 0 values. Then, you multiply the inf values in ``hidden_states`` with those 0 values and produce the few nan values.\r\n- Lastly, the nan values spread out in the attention calculations and result in a lot of nan values in the last hidden states.\r\n\r\nMy workaround is to add the clamp before self.input_layernorm is called in LlamaDecoderLayer's forward. Let me know if there is a better solution.",
"Mmm the issue mentioned by @VeryLazyBoy regarding the attention mask was fixed by #27114 and should not longer affect the forward. \r\nIf the training is done in fp16 instead of bf16 I think it's pretty much expected that there will be some overflow. \r\nClamping can significantly slow down the training, but might be the best if you have to use float16 ",
"[This comment](https://github.com/huggingface/transformers/pull/27114#issuecomment-1848235762) might be helpful",
"> I am observing the exact same behavior as @VeryLazyBoy on the 30th and 31st decoder layer of llama-2. I'm using llama-2-chat-7b. Is there any update on this issue or should I still use the workaround?\r\n> \r\n> Update: FYI, did some investigation and found that the nan values come from\r\n> \r\n> * The 30.MLP gives inf values in `hidden_states`, same as what @VeryLazyBoy said\r\n> * In 31.input_layernorm, there is `variance = hidden_states.pow(2).mean(-1, keepdim=True)`. Variance now has a few inf values from `hidden_states`.\r\n> * Then, the next line in 31.input_layernorm is `hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)`. `torch.rsqrt(variance + self.variance_epsilon)` turns those inf values to 0 values. Then, you multiply the inf values in `hidden_states` with those 0 values and produce the few nan values.\r\n> * Lastly, the nan values spread out in the attention calculations and result in a lot of nan values in the last hidden states.\r\n> \r\n> My workaround is to add the clamp before self.input_layernorm is called in LlamaDecoderLayer's forward. Let me know if there is a better solution.\r\n\r\n@BaleChen Hi, I'd like to ask how did you add the clamp, I added the clamp and still get nan"
] | 1,690 | 1,702 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am training llama2-7b with the Hugging Face Trainer. I find that NaN values occur in the **forward** pass when training llama2 with batch size > 1; however, with batch size = 1 there is no error.
I dug into it and found that the **nan** appears in layers.31.input_layernorm and is caused by **inf** in the layers.30.mlp forward after the post-attention layer norm; this **inf** likely comes from huge values in the hidden states. However, this doesn't explain why llama1, and llama2 with batch size = 1, work fine, since they also have huge outliers in the hidden states.
The code I use is like this:
The dataset format is: `write a xxx. ###sentences: xxxx`
Both the `meta-llama/Llama-2-7b-hf` checkpoint and the original Meta checkpoint converted with the transformers conversion code were tried.
```
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=args.micro_batch,
gradient_accumulation_steps=args.gradient_accumulation_steps,
warmup_ratio=args.warmup_ratio,
num_train_epochs=args.num_epoch,
learning_rate=3e-4,
fp16=True,
logging_steps=args.log_steps,
logging_first_step=True, # convenient
evaluation_strategy="no",
save_strategy=args.save_strategy,
eval_steps=None,
save_steps=args.save_steps,
output_dir=args.output_path,
load_best_model_at_end= False,
ddp_find_unused_parameters=False if ddp else None,
report_to="wandb" if args.wandb else [],
ignore_data_skip=args.ignore_data_skip,
),
data_collator=PROMPT.data_collator()
)
model.config.use_cache = False
if list(pathlib.Path(args.output_path).glob("checkpoint-*")):
trainer.train(resume_from_checkpoint=True)
else:
trainer.train()
trainer.save_state()
model.save_pretrained(args.output_path)
```
### Expected behavior
The training should have no NaN values in the forward pass, so the loss will be normal in the backward pass.
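For reference, a minimal sketch of one way to apply the fp16 clamping workaround discussed in the comments above without forking `modeling_llama.py`, using PyTorch forward hooks. This is illustrative only: it clamps the MLP outputs rather than the full residual stream, and it assumes the `model.model.layers[i].mlp` module path from the Llama modeling code.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)

def clamp_fp16(module, inputs, output):
    # Returning a value from a forward hook overrides the module's output.
    if output.dtype == torch.float16:
        clamp_value = torch.finfo(output.dtype).max - 1000
        return torch.clamp(output, min=-clamp_value, max=clamp_value)
    return output

# Register the hook on every decoder layer's MLP (where the inf first appears).
for layer in model.model.layers:
    layer.mlp.register_forward_hook(clamp_fp16)
```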
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25065/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25064/events
|
https://github.com/huggingface/transformers/pull/25064
| 1,819,366,664 |
PR_kwDOCUB6oc5WSDhR
| 25,064 |
Feature/forward
|
{
"login": "t46",
"id": 19530191,
"node_id": "MDQ6VXNlcjE5NTMwMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/19530191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t46",
"html_url": "https://github.com/t46",
"followers_url": "https://api.github.com/users/t46/followers",
"following_url": "https://api.github.com/users/t46/following{/other_user}",
"gists_url": "https://api.github.com/users/t46/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t46/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t46/subscriptions",
"organizations_url": "https://api.github.com/users/t46/orgs",
"repos_url": "https://api.github.com/users/t46/repos",
"events_url": "https://api.github.com/users/t46/events{/privacy}",
"received_events_url": "https://api.github.com/users/t46/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,690 | 1,690 | 1,690 |
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25064/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25064",
"html_url": "https://github.com/huggingface/transformers/pull/25064",
"diff_url": "https://github.com/huggingface/transformers/pull/25064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25064.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25063/events
|
https://github.com/huggingface/transformers/issues/25063
| 1,819,325,953 |
I_kwDOCUB6oc5scLYB
| 25,063 |
model.generate does not work when using a AlbertModel
|
{
"login": "anujsahani01",
"id": 83875986,
"node_id": "MDQ6VXNlcjgzODc1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/83875986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anujsahani01",
"html_url": "https://github.com/anujsahani01",
"followers_url": "https://api.github.com/users/anujsahani01/followers",
"following_url": "https://api.github.com/users/anujsahani01/following{/other_user}",
"gists_url": "https://api.github.com/users/anujsahani01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anujsahani01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anujsahani01/subscriptions",
"organizations_url": "https://api.github.com/users/anujsahani01/orgs",
"repos_url": "https://api.github.com/users/anujsahani01/repos",
"events_url": "https://api.github.com/users/anujsahani01/events{/privacy}",
"received_events_url": "https://api.github.com/users/anujsahani01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I don't know what more you want me to say. As the error clearly states, this is not a model that supports `.generate`. ALBERT was trained on a masked language modeling objective so none of its variants are compatible with `.generate()`."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
Apologies in advance if this is not a bug and instead a fault of my code.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## Step 1: Loading AI4Bharat/indic-bert model and tokenizer
```
# Load model directly
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert", src_lang="en_XX",tgt_lang="mr_IN")
model = AlbertModel.from_pretrained("ai4bharat/indic-bert", max_length = context_length)
```
## Step 2: Loading the dataset using datasets
```
from datasets import load_dataset
dataset = load_dataset("anujsahani01/English-Marathi", split = 'test[:20]')
```
## Step 3: Tokenizing the input data
```
# tokenize the source and target text (Step 2 loads the split into `dataset`)
model_inputs = tokenizer(dataset['english'], text_target=dataset['marathi'], max_length=max_length, padding='max_length',
                         truncation=True, return_tensors='pt').to(device)
labels = model_inputs['labels']
source = model_inputs['input_ids']
```
## Step 4: Using .generate to make predictions
```
preds = model.generate(**model_inputs, max_new_tokens = max_length)
```
### Expected behavior
Hey folks,
I was comparing different multilingual language models on the basis of different evaluation metrics, and I was not able to generate outputs using indic-bert.
## Model
```ai4bharat/indic-bert```
## Error Message
```
TypeError: The current model class (AlbertModel) is not compatible with `.generate()`, as it doesn't have a language model head.
```
Any of your inputs will be highly appreciated.
Thank You !
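For readers with the same goal, a hedged sketch of the kind of checkpoint that does work with `.generate()` for English-to-Marathi translation: a seq2seq model with a language-model head. The mBART-50 checkpoint below is only an illustrative choice, not something recommended in this thread.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative checkpoint: a many-to-many translation model with an LM head.
model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("How are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["mr_IN"],  # target language: Marathi
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```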
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25063/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25062/events
|
https://github.com/huggingface/transformers/pull/25062
| 1,819,258,811 |
PR_kwDOCUB6oc5WRsBY
| 25,062 |
GPTQ integration
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> 2- Would it make sense to slowly \"deprecate\" the args load_in_4bit and load_in_8bit as it might lead to confusion for users (because technically you can do 4bit / 8bit with GPTQ as well) - not sure about the right approach here\r\n\r\nYes, it would make sense to deprecate these args as we will add more quantization methods in the future. It will confuse the user to have load_4_bit and load_8_bit only for bitsandbytes quantization. \r\n\r\n> 3- Note that I am not sure GPTQ works out of the box for vision / multimodal models, and would be better / safer if we just support text models for now (it would break at the tokenizer init call). How can we have a safety checker that effectively checks if the model is a text model?\r\n\r\nWe should definitely limit it to text model for now. For now, it works for decoder or encoder model but not for more complex architecture with multiple transformers block like decoder-encoder model. I will need to check if it can be easily extended. ",
"> Just to confirm, does all the bnb slow tests pass with this PR? ๐ \r\n\r\nYes ! Both gptq and bnb slow tests passed\r\n\r\n> Left few clarification and nits on the documentation. I think it is worth it to add a check to raise an error or warning if the model is not a pure text model. I think the easiest would be to check if self.main_input_name contains input_ids.\r\n\r\nAdded\r\n\r\n> What do you think of adding an extra dependency list, [similarly as agents](https://github.com/huggingface/transformers/blob/main/setup.py#L410) that adds all the required packages to play with various supported quantization schemes (bitsandbytes, optimum, auto-gptq, accelerate)\r\n\r\nYes, we can do that after the release of optimum ! ",
"Thanks for this work! I noticed that the supported bits in quantization config doesn't match with `auto_gptq`\r\n\r\nIn `auto_gptq`:\r\n```\r\n.../auto_gptq/nn_modules/qlinear/qlinear_cuda.py\", line 38, in __init__\r\n raise NotImplementedError(\"Only 2,3,4,8 bits are supported.\")\r\nNotImplementedError: Only 2,3,4,8 bits are supported.\r\n```\r\n\r\nIn `transformers`:\r\n```\r\n.../transformers/utils/quantization_config.py\", line 395, in post_init\r\n raise ValueError(f\"Only support quantization to [2,4,6,8] bits but found {self.bits}\")\r\nValueError: Only support quantization to [2,4,6,8] bits but found 3\r\n```\r\n\r\nIs there any specific reason behind it?",
"Hi thanks for reporting. This is indeed a mistake. I will fix this in a follow up PR ! "
] | 1,690 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
This PR adds the possibility to perform GPTQ quantization on transformers models using the [optimum](https://github.com/huggingface/optimum) library. The backend relies on the [auto_gptq](https://github.com/PanQiWei/AutoGPTQ) library, where we use the `GPTQ` and `QuantLinear` classes.
Here's the related [PR](https://github.com/huggingface/optimum/pull/1216) on the optimum side. This PR can only be merged after the optimum one.
### Quantize model
Unlike `bitsandbytes`, it is not feasible to quantize the weights right after loading them in order to reduce memory consumption. This is why we need to first load the entire model and then quantize it (all done in `from_pretrained`)
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer, group_size=128, desc_act=False)
# works also with device_map (cpu offload works but not disk offload)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, quantization_config=config)
```
### Save hf model
We save the `quantization_config` in `model.config`
```py
# move to the device you want to save model (needed if you used device_map before)
model.to(device)
quantized_model_folder = 'opt-125m-quantized_hf'
model.save_pretrained(quantized_model_folder)
```
### Load quantized weights
If the `model.config` has a `quantization_config` key, we will replace the layers of the model and load the quantized weights.
```py
quantized_model_from_saved = AutoModelForCausalLM.from_pretrained(quantized_model_folder,
device_map = "auto")
```
TODO:
- [ ] merge optimum PR first
- [x] doc
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25062/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25062/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25062",
"html_url": "https://github.com/huggingface/transformers/pull/25062",
"diff_url": "https://github.com/huggingface/transformers/pull/25062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25062.patch",
"merged_at": 1691697990000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25061/events
|
https://github.com/huggingface/transformers/pull/25061
| 1,819,256,856 |
PR_kwDOCUB6oc5WRrlk
| 25,061 |
Generation refactor: new interface, new classes.
|
{
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @manueldeprada ๐ \r\n\r\nThank you for the proposal, but this PR adds further parameterization to `generate()`, and we want to go in the opposite direction whenever possible -- `.generate()` already has too many possible arguments. \r\n\r\nRelated to this PR: we are internally discussing how to refactor `.generate()` -- isolating the generation strategies like you did here is indeed one of the candidates. However, until we settle on a complete plan, we will not be considering refactors like this one :) \r\n\r\nBear with us, we also want to make `.generate()` more extensible! ",
"Thanks for your reply, @gante!\r\n\r\nIt's great to hear that a comprehensive rethink of `generate()` is in progress. In this PR, my aim was to be minimally invasive while preserving backward compatibility. However, I am eager to see a thorough refactor, even if it entails introducing breaking changes. As someone who has worked extensively with generation strategies, I have a few observations that might be useful for the internal discussions:\r\n\r\n- The current system of arguments is highly complex and can lead to confusion when selecting a generation strategy. Moreover, the situation is complicated further by models setting their own `generation_config`, as defaults can inadvertently change between models. My suggestion is to streamline `generate()` to have two primary arguments: `generation_method` and `generation_config`. Default behaviors could be as follows:\r\n - If no method or configuration is passed, the function reverts to the model's default.\r\n - If a method is specified, a valid configuration should also be provided. Users could create a new valid `generation_config`, or if they want to override certain parameters from the model's defaults, they can retrieve the model's default configuration (perhaps using `model.generation_config`), modify the desired parameters, and then pass it to `generate()`.\r\n - Besides strings, the `generation_method` could accept custom objects, similar to what I proposed in this PR.\r\n- I think `beam_search()` and similar methods from `generation.utils` should be deprecated and replaced with extensible classes, as demonstrated in this PR.\r\n- The feature needs consistent and clear naming. Terms such as generators, decoders, generation adapters, or generation strategies could be used. It's crucial to establish a convention and stick to it.\r\n- Isolating the strategies could foster an ecosystem where novel strategies and implementations could be shared via Spaces, similar to the current practice with metrics.\r\n- The isolation of strategies could also make the refactoring of individual strategies much simpler. My work with `beam_search()` indicates that it could also benefit from a rethink.\r\n - For instance, completely separating `beam_search` from `group_beam_search` could simplify and streamline the code, making it easier to extend. This could also open up possibilities for full vectorization of `beam_search` along the batch dimension (removing the `for` loops inside `BeamSearchScorer`).\r\n - There is also important details that are hidden in obscure corners in the implementation. For example, all the `early_stopping` related info, and how it relates to `length_penalty`, is not clear in the documentation.\r\n- I'm curious about the progress of the discussions on the future of `generate`. Do you have an estimated timeline for these changes? Given the growing body of literature on decoding strategies ([example 1](https://github.com/wouterkool/stochastic-beam-search), [example 2](https://github.com/rycolab/cpsbs), [example 3](https://github.com/Roxot/mbr-nmt))โmostly developed on Meta's FairseqโI believe easy generation extensibility in Huggingface would attract these advancements to the platform.\r\n\r\nTo provide some context, my experience is primarily derived from porting example 1 to Huggingface Transformers. This necessitated forking the entire transformers library, which is not an ideal approach. 
Given that the reimagining of `generate()` will likely take some time, I plan to publish my changes as a separate companion package in the interim. Please keep us updated on the discussion, and let me know if I can assist further with the refactor! :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @gante any progress in the internal discussion? \r\n\r\nAs a quick reminder of what this thread was about, my PR tried to address the two main obstacles that the \"decoding research\" community encounters in transformers:\r\n1. Being able to tell `generate()` _explicitly_ what generation strategy to use (instead of relying on setting the arguments).\r\n2. Being able to extend a generation strategy and pass it to `generate()`.\r\n\r\nIn the meantime, more libraries have appeared that work around this limitation of `generate()`. Look for example at [ZurichNLP/mbr](https://github.com/ZurichNLP/mbr) by @jvamvas.\r\n\r\nWhat is the current state of the discussion? Would you be open to adding a `generation_strategy` param to either `generate()` or `GenerationConfig`?\r\n\r\nAlso, would you accept a PR that, without adding new parametrization, decoupled `generate()` from the generation strategies? So that the new `_get_generation_strategy()` returns a GenerationStrategy subclass, `beam_search()`, `sample()` and such methods are encapsulated in their classes (with all the extra logic, such as creating the BeamScorer, etc)."
] | 1,690 | 1,699 | 1,693 |
CONTRIBUTOR
| null |
## Summary
This PR introduces an updated `generate()` interface for the Huggingface Transformers library. The update focuses on enhancing extensibility and enabling the explicit declaration of the generation strategy, while ensuring backward compatibility.
## Detailed Description
### Introducing the "generation_strategy" Argument
In the existing `generate()` function, a user must pass arguments such as `num_beams=5, do_sample=True` to choose a specific generation strategy (in this case, beam sampling). This approach can be somewhat confusing, especially when aiming for a specific decoding strategy. For instance, one might assume `do_sample=False` by default. However, when a user changes the model, and the new model has `do_sample=True` as the default, the intended generation method also inadvertently changes. See a [previous PR](https://github.com/huggingface/transformers/pull/22473) for a scenario where this happened.
This PR proposes a new parameter, `generation_strategy`, within the `generate()` function. This addition allows the user to pass a string (`greedy`, `beam_search`, `beam_sample`, ...) to explicitly choose the intended generation method. Alternatively, instead of a string, the user can pass a custom GenerationStrategy object as the parameter (more on this later). If the provided parameters are not compatible with the requested strategy, an Exception is raised, alerting the user to the discrepancy. This update does not modify the default behaviour of the `generate()` function, nor does it break compatibility. To this end, I locally executed the generation tests, and they all pass with the same warnings (edit: I see they are not passing in CircleCI, I will investigate later).
### Enhancing Extensibility of Generation Strategies
While the Huggingface Transformers library is well-regarded for its extensibility, particularly regarding model innovations and implementations, the generation interface has lacked this quality to some degree.
Implementing a new generation strategy, like tweaking the Beam Search code, can be challenging. The associated code resides deep inside the `GenerationMixin`, a class that users cannot subclass. Additionally, there's no option to pass a custom BeamScorer to `generate()`.
A potential workaround is subclassing the model and overriding the `generate()` method. However, this requires rewriting a substantial amount of code from `generate()`, with a complex network of dependencies within `GenerationMixin` that isn't clear to interact with. Thus, enhancing the extensibility and making the generation part more "hack-friendly" was an important motivation for this PR.
### Proposed Changes
With these considerations in mind, the PR proposes a new abstract class, `GenerationStrategy` (or alternatively `Decoder`, naming can be discussed), which defines a common interface for implementing any `GenerationStrategy` variant. Concrete strategies are referred to as "Decoders", such as the "BeamSearchDecoder".
All existing strategies have been refactored into their respective `GenerationStrategy` class. This approach ensures `generate()` is agnostic to the decoding strategy and that each strategy checks its parameters and the generation config independently.
Subsequently, the `generate()` function has been refactored to use the new classes. Facade methods like `beam_search()`, which merely instantiate and call the new Decoders, have been retained in `generation/utils` for backwards compatibility.
With this change, it is now possible to elegantly create a custom GenerationStrategy or subclass an existing strategy, and simply pass the customized object to `generate()`. This will allow the emerging research on generation strategies to use HF (right now, you can see in the literature that fairseq is more common).
### New use case examples
```python
# selecting strategy with a string
outputs = model.generate(input_ids=input_ids, generation_strategy='greedy')
# using a custom strategy
decoder = CustomBeamSearch()
outputs = model.generate(input_ids=input_ids, generation_strategy=decoder)
```
### Remaining Work
The proposed code in this PR currently lacks docstrings for the new classes, as it would be more appropriate to add these after finalizing the naming conventions and other details through discussion in this thread.
Additionally, the PR introduces changes to the library LazyImport init files, and feedback on best practices for working with Lazy imports would be greatly appreciated (as I don't have any experience). New tests to validate these changes will be added once the code receives some feedback.
Looking forward to your valuable feedback to improve this PR further.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I see @gante @sgugger @patrickvonplaten @thomwolf very active in the git history for generation commits.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25061/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25061",
"html_url": "https://github.com/huggingface/transformers/pull/25061",
"diff_url": "https://github.com/huggingface/transformers/pull/25061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25061.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25060/events
|
https://github.com/huggingface/transformers/issues/25060
| 1,819,188,771 |
I_kwDOCUB6oc5sbp4j
| 25,060 |
LlaVa model in transformers
|
{
"login": "RajeshRadha",
"id": 3087574,
"node_id": "MDQ6VXNlcjMwODc1NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3087574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RajeshRadha",
"html_url": "https://github.com/RajeshRadha",
"followers_url": "https://api.github.com/users/RajeshRadha/followers",
"following_url": "https://api.github.com/users/RajeshRadha/following{/other_user}",
"gists_url": "https://api.github.com/users/RajeshRadha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RajeshRadha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RajeshRadha/subscriptions",
"organizations_url": "https://api.github.com/users/RajeshRadha/orgs",
"repos_url": "https://api.github.com/users/RajeshRadha/repos",
"events_url": "https://api.github.com/users/RajeshRadha/events{/privacy}",
"received_events_url": "https://api.github.com/users/RajeshRadha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @RajeshRadha Thank you for the feature request.\r\n\r\nAs @ArthurZucker mentioning to me, the repo. has reached 4K starts and 300 fork, it seems this is quite popular.\r\n\r\nWill leave our core maintainers @amyeroberts and @sgugger to see if this qualifies the model to be in `transformers` or we still prefer to have it first on the Hub. ",
"Given the popularity and performance of the model, I think it'd be a good addition into `transformers` :) \r\n\r\n@RajeshRadha if you'd like to add the model, feel free to open a PR and tag @ArthurZucker and myself for review. ",
"Just for reference, before the model got so popular, #22848 and #23849 were opened! \r\n",
"Any update about this model? https://github.com/huggingface/transformers/pull/23849 is closed and unactivated.",
"cc @rafaelpadilla and @amyeroberts if one of you has the bandwidth",
"I won't have time unfortunately before I'm off :( If @rafaelpadilla or anyone in the community would like to add this model - it would be a great addition! ",
"PR will be merged the coming week ๐ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"#27662 closes this",
"This is a great integration. As a further step, it would be great to have an API for multi-modal models.\r\n\r\nI think it's unlikely TGI (see [here](https://github.com/huggingface/text-generation-inference/issues/280)) or vLLM would integrate multi-modal as it's too different.\r\n\r\nThere is a (closed) [PR on the Llava project](https://github.com/haotian-liu/LLaVA/pull/599) that allows for a simple single-call API. Possibly building on that is a good way to go.\r\n\r\nA key feature I see as valuable is continuous batching, this is what really allows devs to spin up a multi-modal end point for production.\r\n\r\n*Questions*\r\n- Is it too much of a stretch to try and add continuous batching to transformers? I'm guessing yes, because for LLMs, that has been offloaded to TGI.\r\n- Are there other angles that should be considered generally for getting to a multi modal API?",
"Thanks @RonanKMcGovern for your feedback ! I think TGI could support multi-modal models as they did it in the past with idefics if I am not mistaken cc @OlivierDehaene ",
"Thanks @younesbelkada that makes sense intuitively. IDEFIX (flamenco style models) have a single tokenizer, whether it's image or text (if I'm not mistaken) so that makes it easier plug and play for TFI.\r\n\r\nI see that as a pretty significant advantage. With an a good inference endpoint, llava just isn't as useful because devs can't use it well in production.\r\n\r\nI need to read more on why llava 1.6 is stronger than IDEFIX. I guess IDEFIX has the drawback that it had to be entirely trained from scratch.\r\n\r\nMakes me wonder whether it would have been better to take an IDEFIX approach in making Llava."
] | 1,690 | 1,707 | 1,704 |
NONE
| null |
### Feature request
Support for the LLaVA model in transformers? https://github.com/haotian-liu/LLaVA Similar to InstructBLIP, with a connector module between the image embeddings and the LLM.
### Motivation
LLaVA performs really well on MLLM-related tasks, and having it in Hugging Face would make it easier for folks to try out InstructBLIP vs. LLaVA, since it mostly uses the same image encoder embeddings (EVA, ViT, or CLIP) and foundation models (T5, Vicuna, or Llama-2). Code maintenance and integration are easy.
### Your contribution
I can definitely help with a PR or tag along with folks in hugging face to make it happen
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25060/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25060/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25059/events
|
https://github.com/huggingface/transformers/pull/25059
| 1,818,924,956 |
PR_kwDOCUB6oc5WQjXL
| 25,059 |
Fix `token` in auto classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25059). All of your documentation changes will be reflected on that endpoint.",
"Good catch. I guess I have to deal with the internal usages of those `kwargs`",
"Yes, ideally we want to only use `token` everywhere now :-) Happy to help if you need some!"
] | 1,690 | 1,693 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fix `token` in auto classes.
Fix #25008
We (me) should start to:
- add (some) explicit arguments to `from_pretrained` for auto classes
- (probably) clean-up the usage of `use_auth_token` even internally
But let's do this in separate PR(s), I promise.
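For context, a short sketch of the call pattern this fix targets; the repo id and token value below are placeholders, not real identifiers.

```python
from transformers import AutoModel, AutoTokenizer

# `token` is the argument being fixed for the auto classes here; it replaces the
# older `use_auth_token` argument. Repo id and token value are placeholders.
model = AutoModel.from_pretrained("some-org/private-model", token="hf_xxx")
tokenizer = AutoTokenizer.from_pretrained("some-org/private-model", token="hf_xxx")
```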
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25059/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25059",
"html_url": "https://github.com/huggingface/transformers/pull/25059",
"diff_url": "https://github.com/huggingface/transformers/pull/25059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25059.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25058/events
|
https://github.com/huggingface/transformers/pull/25058
| 1,818,919,029 |
PR_kwDOCUB6oc5WQiJI
| 25,058 |
Fix last models for common tests that are too big.
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
This PR fixes the last batch of models that are too big for common tests. After it, only two classes override the common test to skip it:
- timm-backbone
- layoutlmv2
In the first case, it's because there is no timm model small enough to work (I did switch to the smallest ResNet though), and in the second, detectron2 does not let us configure a smaller backbone (I did try with the constants it exposes, but they don't seem to have any effect).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25058/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25058",
"html_url": "https://github.com/huggingface/transformers/pull/25058",
"diff_url": "https://github.com/huggingface/transformers/pull/25058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25058.patch",
"merged_at": 1690286165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25057/events
|
https://github.com/huggingface/transformers/pull/25057
| 1,818,854,366 |
PR_kwDOCUB6oc5WQT03
| 25,057 |
fix deepspeed load best model at end when the model gets sharded
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Fixes https://github.com/huggingface/transformers/issues/25027
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25057/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25057",
"html_url": "https://github.com/huggingface/transformers/pull/25057",
"diff_url": "https://github.com/huggingface/transformers/pull/25057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25057.patch",
"merged_at": 1690422103000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25056/events
|
https://github.com/huggingface/transformers/pull/25056
| 1,818,782,634 |
PR_kwDOCUB6oc5WQELq
| 25,056 |
[`IDEFICS`] Fix idefics ci
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25056). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Attempt to fix IDEFICS failing CI as discussed offline @stas00 - more info coming soon
original PR: https://github.com/huggingface/transformers/pull/24796
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25056/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25056/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25056",
"html_url": "https://github.com/huggingface/transformers/pull/25056",
"diff_url": "https://github.com/huggingface/transformers/pull/25056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25056.patch",
"merged_at": 1690302279000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25055/events
|
https://github.com/huggingface/transformers/pull/25055
| 1,818,781,524 |
PR_kwDOCUB6oc5WQD8D
| 25,055 |
[`RWKV`] Add note in doc on `RwkvStoppingCriteria`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Given the usage of RWKV, we decided to go with a tip in the doc rather than changing `generate` and adding `RwkvStoppingCriteria` to the library. Addresses #23852
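As background, the kind of snippet such a doc tip would point to: a user-defined stopping criterion passed to `generate()`. This is a hypothetical sketch (the stop token id is a placeholder), not the exact code added to the documentation.

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokens(StoppingCriteria):
    """Stop generation as soon as the last generated token is one of `stop_token_ids`."""

    def __init__(self, stop_token_ids):
        self.stop_token_ids = set(stop_token_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return input_ids[0, -1].item() in self.stop_token_ids

# Usage (model/tokenizer assumed to be an already-loaded RWKV checkpoint,
# and 187 is a placeholder token id):
# outputs = model.generate(**inputs, stopping_criteria=StoppingCriteriaList([StopOnTokens([187])]))
```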
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25055/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25055",
"html_url": "https://github.com/huggingface/transformers/pull/25055",
"diff_url": "https://github.com/huggingface/transformers/pull/25055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25055.patch",
"merged_at": 1690272901000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25054/events
|
https://github.com/huggingface/transformers/issues/25054
| 1,818,768,482 |
I_kwDOCUB6oc5saDRi
| 25,054 |
remove_unused_columns = True deletes dataset for custom loss func
|
{
"login": "notrichardren",
"id": 34405553,
"node_id": "MDQ6VXNlcjM0NDA1NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/34405553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notrichardren",
"html_url": "https://github.com/notrichardren",
"followers_url": "https://api.github.com/users/notrichardren/followers",
"following_url": "https://api.github.com/users/notrichardren/following{/other_user}",
"gists_url": "https://api.github.com/users/notrichardren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notrichardren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notrichardren/subscriptions",
"organizations_url": "https://api.github.com/users/notrichardren/orgs",
"repos_url": "https://api.github.com/users/notrichardren/repos",
"events_url": "https://api.github.com/users/notrichardren/events{/privacy}",
"received_events_url": "https://api.github.com/users/notrichardren/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I did some further investigation and found that this dataset seems to work _if_ and only if you set remove_unused_columns = False in the training arguments.\r\n\r\n**It seems very odd that \"remove_unused_columns = True\" would result in the entire dataset being wiped, given that the features in the dataset are ['labels', 'input_ids', 'attention_mask'].** This on its own seems like a bug rather than a feature and seems like it could be a point of confusion (especially without an appropriate error message for the user). It seems to also directly contradict the [documentation description](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.remove_unused_columns) of the function.\r\n\r\nIf you'd like to modify remove_unused_columns to reflect this, it may be good to keep the issue open to address this -- otherwise, feel free to close it.",
"`remove_unused_columns=True` will remove the keys in the dataset that are not accepted by the models, which is decided by looking at the model forward signature. Your `RewardModel` does not have any argument names (it takes `*args`), that's why there is this bug. If you unpack that `*args` to use names like `labels`, `input_ids`, `attention_mask`, the issue will disappear.",
"Ah, I see. Thank you very much for clarifying this, this makes a lot of sense. I appreciate it!",
"No problem! We should probably make a note of it in the documentation."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
Feel free to see the [second comment](https://github.com/huggingface/transformers/issues/25054#issuecomment-1648263249) directly. The original issue was confusion about the training dataset seemingly not working -- I narrowed it down to `remove_unused_columns = True` deleting the entire dataset, which contradicts the description in the documentation.
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: fp16
- use_cpu: False
- num_processes: 7
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: GPU
- Using distributed or parallel set-up in script?: parallel
### Who can help?
@sgugger
I sincerely believe this is a bug in Transformers, though my apologies if it turns out to be my own code.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset, Dataset, DatasetDict
from functools import partial
from transformers import TrainingArguments, Trainer
import numpy as np
import evaluate
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, **args):
outputs = self.language_model(**args)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
return reward
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
for param in reward_model.parameters(): # all the requires grads are false
param.requires_grad = False
for param in reward_model.fc.parameters(): # except the last layer
param.requires_grad = True
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # datasetdict with train/test split that has columns 'input_ids', 'attention_mask', 'labels'
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
class LabelPopTrainer(Trainer):
def compute_loss(self,model,inputs, return_outputs=False):
labels = inputs.pop("labels")
outputs = model(**inputs).flatten()
loss = torch.nn.functional.cross_entropy(outputs, labels.half())
return (loss, outputs) if return_outputs else loss
args = TrainingArguments("test-trainer",
num_train_epochs = 3,
per_device_train_batch_size = 4,
logging_strategy = "steps",
logging_steps = 3,
fp16=True
)
trainer = LabelPopTrainer(
reward_model,
args,
train_dataset=tokenized_dataset["train"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("trained_model")
```
### Expected behavior
I get the error "Invalid key: 222838 is out of bounds for size 0", which is due to the train_dataloader `__getitem__` function not working. However, `tokenized_dataset["train"]` (which is the one passed to the trainer) is:
```
Dataset({
features: ['labels', 'input_ids', 'attention_mask'],
num_rows: 224104
})
```
I would expect the dataset, therefore, to work when run with ```accelerate launch ___.py```. Below is the more complete error:
```
Traceback (most recent call last):
File "reward-model-v3-mve-2.py", line 67, in <module>
trainer.train()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/data_loader.py", line 384, in __iter__
current_batch = next(dataloader_iter)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2796, in __getitems__
batch = self.__getitem__(keys)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2792, in __getitem__
return self._getitem(key)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2776, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 202710 is out of bounds for size 0
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25054/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25053/events
|
https://github.com/huggingface/transformers/pull/25053
| 1,818,765,711 |
PR_kwDOCUB6oc5WQAf5
| 25,053 |
[ `PreTrainedTokenizerFast`] Keep properties from fast tokenizer
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, currently everything is ignored"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Starts a potential fix for #24179, which ended up being related to #24441: calling these methods modifies the values of the underlying tokenizer but never changes anything on the surface, so I will probably add some kind of warning in the documentation.
TLDR; adds the possibility of initializing a `PreTrainedTokenizerFast` from a `tokenizers.Tokenizer`, keeping the `padding` and `truncation` information.
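A hedged sketch of the behaviour this targets (the checkpoint and the settings below are only illustrative):
```python
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

# A bare `tokenizers.Tokenizer` with padding/truncation configured on the Rust side.
tok = Tokenizer.from_pretrained("bert-base-uncased")
tok.enable_padding(pad_token="[PAD]", length=128)
tok.enable_truncation(max_length=128)

# The point of this PR: the wrapper below should pick up the padding/truncation
# settings configured above instead of silently ignoring them.
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tok)
```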
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25053/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25053",
"html_url": "https://github.com/huggingface/transformers/pull/25053",
"diff_url": "https://github.com/huggingface/transformers/pull/25053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25053.patch",
"merged_at": 1690303501000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25052/events
|
https://github.com/huggingface/transformers/pull/25052
| 1,818,677,318 |
PR_kwDOCUB6oc5WPtTe
| 25,052 |
MaskFormer - enable return_dict in order to compile
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @merveenoyan who highlighted the compiling issue ๐ ",
"@sgugger Yep! I'll update that",
"@sgugger Most of the custom tests aren't easy to remove, unfortunately. As MaskFormer combines hidden states from different modules -- encoder, transformer decoder and pixel module decoder -- which are defined in nested arguments in the config e.g. `model.config.decoder_config.num_hidden_layers` it breaks a lot of assumptions. \r\n\r\nI've removed `test_training` and made `_prepare_for_class` and `test_output_attentions` match the common equivalent. I've also updated some of the config values to make sure they're small by default. ",
"Thanks a lot!",
"Also, I wasn't able to enable the FX tests. There's an issue when the model is symbolically traced, where the output shapes are slightly different (off by one). Checking the outputs from the compiled model using `torch.compile` this doesn't occur, so I'm leaving this for a future PR :) "
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
It was previously not possible to use `torch.compile` on MaskFormer as some modules always returned a dataclass, instead of a tuple.
This PR adds a `return_dict` argument to these modules, which defaults to `True` to maintain the previous behaviour.
Tested with:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor,AutoModelForInstanceSegmentation
checkpoint = "facebook/maskformer-swin-base-ade"
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForInstanceSegmentation.from_pretrained(checkpoint).to("cuda")
processed_input = processor([image, image], return_tensors='pt').to(device="cuda")
compiled_model = torch.compile(model, fullgraph=True)
with torch.no_grad():
compiled_model(**processed_input, return_dict=False)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25052/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25052",
"html_url": "https://github.com/huggingface/transformers/pull/25052",
"diff_url": "https://github.com/huggingface/transformers/pull/25052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25052.patch",
"merged_at": 1690385011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25051/events
|
https://github.com/huggingface/transformers/pull/25051
| 1,818,587,090 |
PR_kwDOCUB6oc5WPZpA
| 25,051 |
Add ViTMatte
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sgugger apart from the toctree issue which I'm still investigating, all comments are addressed.\r\n\r\n",
"Before I start reviewing - could you separate out the addition of VitDet and VitMatte? They should have their own respective PRs. "
] | 1,690 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the ViTMatte model, an elegant approach to image matting, entirely relying on the Vision Transformer backbone doing the heavy work, with a lightweight head on top.
Here's a Colab notebook showcasing inference: https://colab.research.google.com/drive/1pWTn3Iur-NR2xUIyDE31dBgng_hXjSsn?usp=sharing.
The model leverages [VitDet](https://arxiv.org/abs/2203.16527) as backbone, hence this PR adds VitDet as a standalone model as well. It then leverages the AutoBackbone class to use this model as a backbone for image matting.
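As a rough sketch of how the pieces fit together, assuming the class names proposed here (`VitDetConfig`, `VitMatteConfig`, `VitMatteForImageMatting`) are the ones that land:
```python
from transformers import VitDetConfig, VitMatteConfig, VitMatteForImageMatting

# The VitDet backbone does the heavy lifting; VitMatte only adds a lightweight matting head.
backbone_config = VitDetConfig()
config = VitMatteConfig(backbone_config=backbone_config)
model = VitMatteForImageMatting(config)
```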
Fixes #25040.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25051/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25051/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25051",
"html_url": "https://github.com/huggingface/transformers/pull/25051",
"diff_url": "https://github.com/huggingface/transformers/pull/25051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25051.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25050/events
|
https://github.com/huggingface/transformers/issues/25050
| 1,818,571,193 |
I_kwDOCUB6oc5sZTG5
| 25,050 |
Trainer does not properly read "label" column.
|
{
"login": "notrichardren",
"id": 34405553,
"node_id": "MDQ6VXNlcjM0NDA1NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/34405553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notrichardren",
"html_url": "https://github.com/notrichardren",
"followers_url": "https://api.github.com/users/notrichardren/followers",
"following_url": "https://api.github.com/users/notrichardren/following{/other_user}",
"gists_url": "https://api.github.com/users/notrichardren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notrichardren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notrichardren/subscriptions",
"organizations_url": "https://api.github.com/users/notrichardren/orgs",
"repos_url": "https://api.github.com/users/notrichardren/repos",
"events_url": "https://api.github.com/users/notrichardren/events{/privacy}",
"received_events_url": "https://api.github.com/users/notrichardren/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You are using a model which does not accept `labels` in its forward pass: `AutoModel` gives you the base model, it is only suitable to extract the last hidden state generated by the model, not for training.",
"By default, am I supposed to pass a model to Trainer that is able to accept \"labels\" in its forward pass? \r\n\r\nI wasn't too sure how Trainer worked, but based on the documentation, my impression was that \"labels\" was supposed to be a separate column (that is then removed) before all of the args are passed into the forward pass. Afterward, the loss calculation would happen where the \"labels\" are compared to model predictions.\r\n\r\n@sgugger "
] | 1,690 | 1,692 | 1,692 |
NONE
| null |
### System Info
When I have a HuggingFace dataset with 'input_ids', 'attention_mask', and 'labels', the current HuggingFace Trainer does not properly read 'labels' and separate the labels from the inputs in the forward pass.
I also tried a 'label' (as opposed to 'labels') column and it does not work with this either -- it fails to separate the label from the rest of the forward pass.
This contrasts with the demo for Trainer, where a dataset has the tokenized version of a given dataset as well as a 'label' feature column.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The HuggingFace Accelerate config I'm using has fp16 enabled. Here is the code to replicate:
```
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset, Dataset, DatasetDict
from functools import partial
from transformers import TrainingArguments, Trainer
import numpy as np
import evaluate
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, **args):
outputs = self.language_model(**args)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
return reward
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # has columns 'input_ids', 'attention_mask', 'labels'
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
for param in reward_model.parameters(): # all the requires grads are false
param.requires_grad = False
for param in reward_model.fc.parameters(): # except the last layer
param.requires_grad = True
training_args = TrainingArguments("test-trainer",
num_train_epochs = 250,
per_device_train_batch_size = 1,
remove_unused_columns=False,
fp16=True
)
trainer = Trainer( # probably using cross entropy loss
reward_model,
training_args,
train_dataset=tokenized_dataset["train"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("trained_model")
```
I receive a TypeError on the forward pass, where **forward() got an unexpected keyword argument 'labels'**
```
File "minimum-viable-example.py", line 19, in forward
return forward_call(*args, **kwargs)
File "minimum-viable-example.py", line 19, in forward
tr_loss_step = self.training_step(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
return forward_call(*args, **kwargs)
File "minimum-viable-example.py", line 19, in forward
outputs = model(**inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
outputs = self.language_model(**args)
ret_val = func(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1735, in forward
tr_loss_step = self.training_step(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
outputs = self.language_model(**args)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
loss = self.compute_loss(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = self.language_model(**args)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
loss = self.compute_loss(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
return forward_call(*args, **kwargs)
return forward_call(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
TypeError: forward() got an unexpected keyword argument 'labels'
return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'labels'loss = self.module(*inputs, **kwargs)
```
### Expected behavior
I would expect the 'labels' column (I tried both 'label' and 'labels') to not be put in the forward pass, but instead to be used in the loss function.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25050/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25049/events
|
https://github.com/huggingface/transformers/pull/25049
| 1,818,525,506 |
PR_kwDOCUB6oc5WPMQz
| 25,049 |
Better error message when signal is not supported on OS
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
This PR adds a `try`/`except` block around the logic asking the user whether they want to execute the code on a distant repo when they don't set `trust_remote_code=True` to give a helpful error message.
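For illustration only, a rough sketch of the pattern (names are made up and this is not the actual library code):
```python
import signal

def _timeout_handler(signum, frame):
    raise TimeoutError("No answer received in time.")

def ask_to_trust_remote_code(prompt: str, timeout: int = 15) -> str:
    try:
        # `signal.SIGALRM` does not exist on Windows, so this raises AttributeError there.
        signal.signal(signal.SIGALRM, _timeout_handler)
    except (AttributeError, ValueError):
        raise ValueError(
            "Loading this model requires executing custom code, and the interactive prompt "
            "is not supported on this OS. Pass `trust_remote_code=True` to allow it."
        )
    signal.alarm(timeout)
    try:
        return input(prompt)
    finally:
        signal.alarm(0)
```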
Fixes #25029
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25049/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25049",
"html_url": "https://github.com/huggingface/transformers/pull/25049",
"diff_url": "https://github.com/huggingface/transformers/pull/25049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25049.patch",
"merged_at": 1690223657000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25048/events
|
https://github.com/huggingface/transformers/pull/25048
| 1,818,524,881 |
PR_kwDOCUB6oc5WPMH4
| 25,048 |
[DOCS] add docstrings to TypicalLogitsWarper
|
{
"login": "akshayamadhuri",
"id": 76612327,
"node_id": "MDQ6VXNlcjc2NjEyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/76612327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshayamadhuri",
"html_url": "https://github.com/akshayamadhuri",
"followers_url": "https://api.github.com/users/akshayamadhuri/followers",
"following_url": "https://api.github.com/users/akshayamadhuri/following{/other_user}",
"gists_url": "https://api.github.com/users/akshayamadhuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshayamadhuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshayamadhuri/subscriptions",
"organizations_url": "https://api.github.com/users/akshayamadhuri/orgs",
"repos_url": "https://api.github.com/users/akshayamadhuri/repos",
"events_url": "https://api.github.com/users/akshayamadhuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshayamadhuri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@akshayamadhuri you probably need to run `make fixup` on your terminal and then commit the changes to make our CI happy :D "
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
# What does this PR do?
Added some docstrings to TypicalLogitsWarper, with some examples as well.
@gante let me know if there's anything else that should be added or removed from the docs.
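For context, a hedged usage sketch of my own (not necessarily the example added in this PR): `TypicalLogitsWarper` is the processor `generate` applies when sampling with `typical_p` below 1.0.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Typical decoding keeps tokens whose information content", return_tensors="pt")
# Setting `typical_p` < 1.0 together with sampling enables TypicalLogitsWarper under the hood.
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```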
Fixes #24783
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25048/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25048",
"html_url": "https://github.com/huggingface/transformers/pull/25048",
"diff_url": "https://github.com/huggingface/transformers/pull/25048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25048.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25047/events
|
https://github.com/huggingface/transformers/pull/25047
| 1,818,510,641 |
PR_kwDOCUB6oc5WPJAP
| 25,047 |
[`8bit`] Fix 8bit corner case with Blip2 8bit
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Tests are green, merging!"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/25011
Fixes https://github.com/huggingface/transformers/issues/25026
https://github.com/huggingface/transformers/pull/24095/ introduced a new check for retrieving the list of modules that do not need to be quantized (e.g. the LM head). While it works perfectly fine for text models, with `model.named_children()` the last element of that list for `Blip2` models is the entire `language_model` module. This led to the entire language model not being converted to 8-bit by the `replace_bnb_linear` method, and to an 8-bit bnb weight being force-loaded into an `nn.Linear` module, hence the error.
The fix is to use `model.named_parameters()` to correctly get the last parameter (usually the lm_head) and not the last child.
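A toy illustration of the difference (the module layout below is only a stand-in for the Blip2 structure, not the library code):
```python
from torch import nn

# "language_model" is a nested block whose last Linear plays the role of the lm_head.
model = nn.ModuleDict({
    "vision_model": nn.Linear(4, 4),
    "language_model": nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2)),
})

print(list(model.named_children())[-1][0])    # "language_model" -> the whole LM would be skipped
print(list(model.named_parameters())[-1][0])  # "language_model.1.bias" -> points at the actual head
```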
cc @sgugger
Will mark as ready for review once the slow tests are green
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25047/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25047",
"html_url": "https://github.com/huggingface/transformers/pull/25047",
"diff_url": "https://github.com/huggingface/transformers/pull/25047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25047.patch",
"merged_at": 1690210721000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25046/events
|
https://github.com/huggingface/transformers/pull/25046
| 1,818,509,786 |
PR_kwDOCUB6oc5WPI0G
| 25,046 |
[DOCS] add example NoBadWordsLogitsProcessor
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(@SoyGema did a minor edit on the PR header: the word \"fixes\" before an issue number on a PR automatically closes the issue when the PR is merged)"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
See #24783 .
Add example to NoBadWordsLogitsProcessor.
Some analysis [here](https://github.com/SoyGema/contrib_schema) .
Kudos to @nablabits
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25046/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25046/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25046",
"html_url": "https://github.com/huggingface/transformers/pull/25046",
"diff_url": "https://github.com/huggingface/transformers/pull/25046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25046.patch",
"merged_at": 1690292509000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25045/events
|
https://github.com/huggingface/transformers/pull/25045
| 1,818,491,358 |
PR_kwDOCUB6oc5WPEx-
| 25,045 |
Fix some bugs for two stage training of deformable detr
|
{
"login": "jypjypjypjyp",
"id": 34328687,
"node_id": "MDQ6VXNlcjM0MzI4Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/34328687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jypjypjypjyp",
"html_url": "https://github.com/jypjypjypjyp",
"followers_url": "https://api.github.com/users/jypjypjypjyp/followers",
"following_url": "https://api.github.com/users/jypjypjypjyp/following{/other_user}",
"gists_url": "https://api.github.com/users/jypjypjypjyp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jypjypjypjyp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jypjypjypjyp/subscriptions",
"organizations_url": "https://api.github.com/users/jypjypjypjyp/orgs",
"repos_url": "https://api.github.com/users/jypjypjypjyp/repos",
"events_url": "https://api.github.com/users/jypjypjypjyp/events{/privacy}",
"received_events_url": "https://api.github.com/users/jypjypjypjyp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@jypjypjypjyp Thanks for opening this PR! Could you: share a minimal code snippet in the PR which triggers the errors on main and which this PR resolves? ",
"> @jypjypjypjyp Thanks for opening this PR! Could you: share a minimal code snippet in the PR which triggers the errors on main and which this PR resolves?\r\n\r\nHello @amyeroberts:\r\n\r\nUsing the original code might not result in errors, but there are apparent mistakes in several places in the source code. Let me explain my modifications:\r\n\r\n1. Line 2002: In the original code, 'enc_outputs' is stored in outputs, but the computation of loss uses **outputs_loss**, which means enc_outputs will never be utilized.\r\n2. Line 2267: Due to the first point, 'enc_outputs' will never be used, so this problem will never arise. However, if the first issue is remedied, this problem is revealed. Throughout the entire code, 'class_labels' is used as a key rather than 'labels'.\r\n3. Line 2270: This piece of code is derived from the original Deformable Detr code, and there is no log functionality in our implementation, rendering this code meaningless.\r\n4. Line 1972: This segment of code involves the calculation of the auxiliary loss; if the original code is used, an error will be thrown during the calculation of the auxiliary loss because the shape of the tensor passed in is incorrect.\r\n\r\nIn my usage, the modified code can be correctly trained. It's possible that my understanding is incorrect, and I hope you can verify these issues.",
"@jypjypjypjyp Thank you for explaining so clearly and with such detail, it really helps :) \r\n\r\nAll of these changes make sense. My only request is that we add a test to make sure the model can train when `config.two_stage` is `True` e.g. somethlng like `test_training_two_stage`, similar to [test_training](https://github.com/huggingface/transformers/blob/03f98f96836477f6f5b86957d3ce98778cad5d94/tests/test_modeling_common.py#L508). ",
"@amyeroberts Sorry, I'm not very familiar with the test portion of the code. Could you help me complete it, or explain it in more detail? I'm unsure about what specifically I should do.",
"@jypjypjypjyp Of course :) \r\n\r\nAll models are tested to make sure that they can be trained i.e. do a forwards / backwards pass. Most models' tests are implemented in [test_modeling_common.py](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/test_modeling_common.py#L4). \r\n\r\nBy default, Deformable Detr has [`two_stage` set to False](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/src/transformers/models/deformable_detr/configuration_deformable_detr.py#L184C1-L184C1), and only the [default model config value](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/models/deformable_detr/test_modeling_deformable_detr.py#L136) was used during testing. This is why the issues with two stage training were never uncovered. \r\n\r\nEach model has its own test module for its model logic e.g. [test_modeling_deformable_detr.py](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/models/deformable_detr/test_modeling_deformable_detr.py). Here we can add model specific tests which aren't covered by the tests in test_modeling_common.py. \r\n\r\nI'm suggesting that we add another test to specifically test two-stage training in `DeformableDetrModelTest` e.g. something like: \r\n\r\n```python\r\n\r\nclass DeformableDetrModelTest\r\n ...\r\n\r\n def test_two_stage_training(self):\r\n model_class = DeformableDetrForObjectDetection\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n config.return_dict = True\r\n config.two_stage = True\r\n\r\n model = model_class(config)\r\n model.to(torch_device)\r\n model.train()\r\n inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)\r\n loss = model(**inputs).loss\r\n loss.backward()\r\n```\r\n\r\n\r\n ",
"@amyeroberts Hello, Thank you for your tutorial! I have added a test_two_stage_training and discovered another issue - the feature dimension of get_proposal_pos_embed is fixed. I made modifications to address this (which also involved modifying the code in deta when using 'make fix-copies'). Please review again.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
Hello @amyeroberts:
I encountered some issues when training Deformable DETR with the two-stage method, and this PR contains the modifications I made to fix them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25045/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25045",
"html_url": "https://github.com/huggingface/transformers/pull/25045",
"diff_url": "https://github.com/huggingface/transformers/pull/25045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25045.patch",
"merged_at": 1690972236000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25044/events
|
https://github.com/huggingface/transformers/pull/25044
| 1,818,470,611 |
PR_kwDOCUB6oc5WPAM3
| 25,044 |
compute_loss in trainer failing to label shift for PEFT model when label smoothing enabled.
|
{
"login": "njbrake",
"id": 33383515,
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njbrake",
"html_url": "https://github.com/njbrake",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"repos_url": "https://api.github.com/users/njbrake/repos",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"No we cannot add this like that, as this is not a model that lives in Transformers.\r\ncc @younesbelkada ",
"@njbrake thanks for the PR, can you elaborate more on the label shifting issue? \r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L832 should be automatically shifting the labels if you pass a dataset with labels ",
"@sgugger I think your comment makes sense. This issue is showing up at https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2680\r\n\r\nIn that case, it seems to make sense to me that in this context, the trainer should see the model that is sitting \"behind\" the PEFT model, so that it would see that the PEFT model was the LLama architecture. i'm not sure on whether that means a change is required in the PEFT library, the trainer code, or in my code (or maybe in all three ๐ )",
"@younesbelkada Since i'm using the label smoother, the trainer API pops out the labels via https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2669 before calling the forward function.",
"I don't think we should unwrap the model from its peft container, as it would trigger bug when saving (Younes will confirm!) so the best fix is probably to add an or clause in the specific test in the Trainer.",
"Thanks for clarifying @njbrake that makes sense, I second what @sgugger said, a better fix would be to add a proper check to see if the model is and instance of peft model on the appropriate line, otherwise users might encounter unexpected behaviours (e.g. when saving the model)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
When training LlamaForCausalLM with PEFT and label smoothing enabled, I noticed that the trainer's compute_loss function was not shifting the labels. Further investigation found that `unwrap_model(model)._get_name()` returned "PeftModelForCausalLM", which is not in the `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` dict, so the label shift was not happening.
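To make that concrete, a hedged sketch of the kind of guard I have in mind (the helper and the string check are only illustrative, not the exact change):
```python
from transformers.modeling_utils import unwrap_model
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES

def should_shift_labels(model) -> bool:
    """Illustrative only: also treat a PEFT-wrapped causal LM as label-shiftable."""
    name = unwrap_model(model)._get_name()
    return name in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.values() or name == "PeftModelForCausalLM"
```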
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I believe this PR would be a fit for @sgugger review.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25044/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25044",
"html_url": "https://github.com/huggingface/transformers/pull/25044",
"diff_url": "https://github.com/huggingface/transformers/pull/25044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25044.patch",
"merged_at": 1690210390000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25043/events
|
https://github.com/huggingface/transformers/issues/25043
| 1,818,464,047 |
I_kwDOCUB6oc5sY48v
| 25,043 |
Can't Reproduce GLUE scores using official BERT implementation
|
{
"login": "BiEchi",
"id": 60613238,
"node_id": "MDQ6VXNlcjYwNjEzMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BiEchi",
"html_url": "https://github.com/BiEchi",
"followers_url": "https://api.github.com/users/BiEchi/followers",
"following_url": "https://api.github.com/users/BiEchi/following{/other_user}",
"gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions",
"organizations_url": "https://api.github.com/users/BiEchi/orgs",
"repos_url": "https://api.github.com/users/BiEchi/repos",
"events_url": "https://api.github.com/users/BiEchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/BiEchi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The results you will get are very sensitive to the seed used.",
"Hi @sgugger , so it means that as long as I get the same results as on the huggingface README, my model is nearly identical to the paper, right?\r\nAlso a follow-up: in our paper we shall also include this score. Do you remember anyone ever reproduced the scores in the paper using huggingface?",
"I remember getting the same results on Cola or even better results on Cola with a different seed (but then some of the other results were different). I don't remember which seed however :sweat_smile: ",
"Ugh that's embarassing ๐
these original authors are too good at these kind of tricks.\r\nAnyway, thanks for the instant help! I'll get back to this thread if I get identical or even better results.\r\n",
"@sgugger Sorry for putting this up after closing the issue. I'm writing to ask if you know anyone successfully reproduced pre-training the RoBERTa model from scratch and gained as good scores as the listed ones as shown in https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification. Specifically, I'm interested in how the script they use with [`run_mlm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py). \r\nLooking forward to your reply!",
"`run_mlm` is just an example, it does not reproduce RoBERTa pretraining.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm closing this thread as I've already got a perfect answer."
] | 1,690 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.0-1036-aws-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As stated at https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification, the scores don't match the official scores from the paper and the GLUE benchmark. My scores match the Hugging Face benchmark but are lower than the official implementation on some tasks like CoLA. How did this happen and how can we avoid it? Looking forward to your help!
<img width="1018" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/5f368343-0619-4084-9039-39683f772d3f">
### Expected behavior
<img width="550" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/b6cc374a-5e2d-409d-ac9d-3b73a7ced4fd">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25043/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25042/events
|
https://github.com/huggingface/transformers/pull/25042
| 1,818,454,965 |
PR_kwDOCUB6oc5WO8x4
| 25,042 |
Generate - add beam indices output in constrained beam search
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
MEMBER
| null |
# What does this PR do?
Fixes #25000
This PR adds the `beam_indices` output to constrained beam search, just like in the other beam methods. The changes are (mostly) copy-paste from beam search, and they pipe `beam_indices` all the way to the output.
Script for reproducibility (didn't work before, works after these changes)
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to('cuda')
prompt = """Tell me some about Canada"""
input_tokenized_info = tokenizer(prompt, return_tensors="pt")
input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info[ 'attention_mask']
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
force_words = ["Canada"]
force_words_ids = tokenizer(force_words, add_special_tokens=False).input_ids
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask,num_beams =4,max_new_tokens=10, return_dict_in_generate=True,output_scores=True, force_words_ids=force_words_ids)
print(outputs.beam_indices)
# Before: `None`
# After: `tensor([[ 0, 1, 1, 1, 0, 0, 2, 3, 3, 1, -1, -1, -1, -1, -1]], device='cuda:0')`
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25042/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25042",
"html_url": "https://github.com/huggingface/transformers/pull/25042",
"diff_url": "https://github.com/huggingface/transformers/pull/25042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25042.patch",
"merged_at": 1690279950000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25041/events
|
https://github.com/huggingface/transformers/issues/25041
| 1,818,432,449 |
I_kwDOCUB6oc5sYxPB
| 25,041 |
LlamaTokenizerFast reports incorrect vocab_size info
|
{
"login": "apachemycat",
"id": 8221103,
"node_id": "MDQ6VXNlcjgyMjExMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8221103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apachemycat",
"html_url": "https://github.com/apachemycat",
"followers_url": "https://api.github.com/users/apachemycat/followers",
"following_url": "https://api.github.com/users/apachemycat/following{/other_user}",
"gists_url": "https://api.github.com/users/apachemycat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apachemycat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apachemycat/subscriptions",
"organizations_url": "https://api.github.com/users/apachemycat/orgs",
"repos_url": "https://api.github.com/users/apachemycat/repos",
"events_url": "https://api.github.com/users/apachemycat/events{/privacy}",
"received_events_url": "https://api.github.com/users/apachemycat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey! Sorry, the formatting of the issue is a bit strange I do not understand the problem. \r\nCould you try to reformulate the issue with a minimal reproducing script using a model path? ",
"OK , base_model=/models/WizardLM-13B-V1.0-Merged \r\n\r\n model = AutoModelForCausalLM.from_pretrained(\r\n base_model,\r\n load_in_8bit=True,\r\n device_map=device_map,\r\n trust_remote_code=True\r\n )\r\n # Tokenizer\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n base_model,\r\n padding_side=\"right\"\r\n )\r\n print(f\" token class ,tokens vocab :{len(tokenizer.get_vocab())}\\n {tokenizer}\")\r\n print(tokenizer.get_vocab()๏ผ\r\n print( model ๏ผ\r\nใ๏ฝ๏ฝ๏ฝใ๏ฝ๏ฝ๏ฝใ\r\n\r\n**token class ,tokens vocab :32001**\r\n LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000,**\r\n\r\norigin model\r\n LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): Embedding(**32001**, 5120, padding_idx=0)\r\n\r\n",
"So, 1. you still have not provided a link to a valid model on the hub. `WizardLM-13B-V1.0-Merged` does not seem to exist as it is probably a local folder on your machine. 2. if it is on the hub, feel free to open an issue on the corresponding repository, as this does not seem to be an issue with transformers. \r\nThe llama2 models have the correct dimension. ",
"https://huggingface.co/WizardLM/WizardLM-13B-V1.0 This is WizardLM-13B V1.0 diff weight. ,can using \r\nAlpaca also can used test\r\nThe Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at \"chavinlo/alpaca-native\" uses tokenizer.add_special_tokens({'pad_token': '[PAD]'}) and hence the model's vocab size is set to 32001.",
"I am really sorry but I don't understand the problem ",
"with https://huggingface.co/WizardLM/WizardLM-13B-V1.0 or Alpaca model๏ผvocab size is 32001๏ผ\r\nbut LlamaTokenizerFast report vocab_size=32000 (print(tokenlizer) out put result)\r\ntoken class ,tokens vocab :32001\r\nLlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', vocab_size=32000,",
"I just ran:\r\n```python \r\n>>> from transformers import LlamaTokenizerFast\r\n>>> tok = LlamaTokenizerFast.from_pretrained(\"WizardLM/WizardLM-13B-V1.0\")\r\n\r\n>>> len(tok)\r\n32001\r\n```\r\nEven if you want to force the vocab_size using : \r\n```python \r\n>>> tok = LlamaTokenizerFast.from_pretrained(\"WizardLM/WizardLM-13B-V1.0\", vocab_size=32000)\r\n```\r\nthis will not work. The fast tokenizer is initialized from the json file, and the `vocab_size` is not an argument that is used. If you don't want the padding token just set it to None and remove it from the `added_tokens_encoder` and `added_tokens_decoder`. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
### System Info
Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = AutoModelForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
device_map=device_map,
trust_remote_code=True
)
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(
base_model,
padding_side="right"
)
The returned tokenizer is a LlamaTokenizerFast:
print(f" token class ,tokens vocab :{len(tokenizer.get_vocab())}\n {tokenizer}")
**token class ,tokens vocab :32001**
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000**,(mistake ) model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False)
### Expected behavior
With Vicuna or WizardLM-13B, LlamaTokenizerFast reports vocab_size=32000, but it should be 32001 because a PAD token was added by Vicuna:
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000,** model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False)
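For reference, `vocab_size` on a fast tokenizer only counts the base (trained) vocabulary, whereas `len(tokenizer)` also counts added tokens such as `[PAD]`. A minimal sketch of the difference, using `huggyllama/llama-7b` as a stand-in checkpoint (the checkpoint choice is an assumption, not from this issue):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

print(tokenizer.vocab_size)        # 32000 -> base vocabulary only
print(len(tokenizer))              # 32001 -> base vocabulary + added tokens
print(len(tokenizer.get_vocab()))  # 32001
```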
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25041/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25040/events
|
https://github.com/huggingface/transformers/issues/25040
| 1,818,411,189 |
I_kwDOCUB6oc5sYsC1
| 25,040 |
Add ViTMatte model
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[] | 1,690 | 1,690 | null |
COLLABORATOR
| null |
### Model description
ViTMatte is a recently released model for alpha matting on images, i.e. background removal.
The model accepts an input image and a trimap (a manually labelled grayscale image outlining the rough border of the foreground object) and predicts the alpha matte for each pixel.
It introduces a series of small adaptations to the ViT architecture - selective global attention + window attention; convolutional blocks added between transformer blocks - to reduce computational complexity and enhance the high-frequency information passed through the network.
At the time of publishing, ViTMatte showed SOTA performance on Distinctions-646 and strong performance (> MatteFormer) on Composition-1K.
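To make the input/output contract concrete, here is a minimal sketch of the data flow; since no `transformers` API exists for this model yet, the commented model call is purely an assumption:
```python
import torch

image = torch.rand(1, 3, 512, 512)   # RGB image, values in [0, 1]
trimap = torch.rand(1, 1, 512, 512)  # trimap: 0 = background, 1 = foreground, in between = unknown

# ViTMatte concatenates the image and the trimap into a single 4-channel input
# before feeding it to the adapted ViT backbone.
pixel_values = torch.cat([image, trimap], dim=1)  # shape: (1, 4, 512, 512)

# A matting head then predicts one alpha value per pixel, e.g.:
# alphas = model(pixel_values)  # shape: (1, 1, 512, 512), values in [0, 1]
```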
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Github: https://github.com/hustvl/ViTMatte
Paper: https://arxiv.org/pdf/2305.15272.pdf
Demo: https://colab.research.google.com/drive/1Dc2qoJueNZQyrTU19sIcrPyRDmvuMTF3?usp=sharing
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25040/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25039/events
|
https://github.com/huggingface/transformers/pull/25039
| 1,818,410,776 |
PR_kwDOCUB6oc5WOzEC
| 25,039 |
Add test when downloading from gated repo
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger The fix is merged and deployed server-side so the test can now be added in `transformers` :)"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
Follow-up PR to https://github.com/huggingface/transformers/pull/25034.
The goal is to add a test checking that accessing a gated repo raises an error with a custom message for the user. This cannot be merged yet, as the server returns "RepoNotFound" when the user is not authenticated, even for a public gated repo. Once the server implementation is fixed ([see internal PR](https://github.com/huggingface/moon-landing/pull/7106)), this PR should pass.
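Roughly, the test should look like the sketch below; the repo id is a placeholder and the exact error type/message are assumptions until the server-side fix lands:
```python
import pytest

from transformers import AutoConfig


def test_gated_repo_error_message_for_unauthenticated_user():
    # Placeholder id: any *public* gated repo would do. The test assumes no HF token is configured.
    with pytest.raises(OSError, match="gated repo"):
        AutoConfig.from_pretrained("hf-internal-testing/dummy-gated-model")
```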
Until then, no need to review/try to fix it.
**Note:** a workaround would be to have a token for a user in the CI but that would require a secret in GH which would not work in PRs. Let's keep it simple and have a test only for unauthenticated users.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25039/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25039/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25039",
"html_url": "https://github.com/huggingface/transformers/pull/25039",
"diff_url": "https://github.com/huggingface/transformers/pull/25039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25039.patch",
"merged_at": 1690546468000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25038/events
|
https://github.com/huggingface/transformers/pull/25038
| 1,818,382,088 |
PR_kwDOCUB6oc5WOsy-
| 25,038 |
Add dispatch_batches to training arguments
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25038). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the option to set `Accelerator`'s `dispatch_batches` argument through the `TrainingArguments`. This is needed in cases such as https://github.com/huggingface/transformers/issues/24999, where `dispatch_batches=False` is required for a streaming dataset.
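A minimal usage sketch (the argument values below are placeholders, not taken from this PR):
```python
from transformers import TrainingArguments

# `dispatch_batches` is forwarded to the underlying `Accelerator`.
# Setting it to False lets every process iterate over its own dataloader,
# which is what streaming / IterableDataset setups typically need.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    dispatch_batches=False,
)
```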
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/24999
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25038/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25038",
"html_url": "https://github.com/huggingface/transformers/pull/25038",
"diff_url": "https://github.com/huggingface/transformers/pull/25038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25038.patch",
"merged_at": 1690205240000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25037/events
|
https://github.com/huggingface/transformers/pull/25037
| 1,818,374,544 |
PR_kwDOCUB6oc5WOrKn
| 25,037 |
Add offload support to Bark
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Why not leverage Accelerate here as is done in Diffusers pipeline? (cc @patrickvonplaten )",
"Hi @sgugger, thank you for taking part! \r\nDiffusers pipeline is using [`accelerate's cpu_offload`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L146) which itself is using [`accelerate's dispatch model`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L146). \r\n\r\nThe only issue here is that accelerate's cpu offload is done whenever calling the forward pass of the model. Here, some sub-model are auto-regressive so it would (if I'm not wrong) offload/onload over and over again while forwarding during the `generate` call. This would be sub-optimal and time-consuming, and would remove the benefits of my version of `offload_to_cpu`.\r\n\r\nBTW, I'm open to suggestions on how to best make it work with accelerate, if it's possible! @muellerzr (hi !) or @sgugger , do you have some ideas?\r\n",
"No Diffusers uses [`cpu_offload_with_hook`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L194) which gives you a hook to pass along to the next model in the pipeline. This way you can have the auto-regressive model in the middle be called several times and only when we go to the next model is it offloaded back to the CPU, which looks like what you are doing here in this PR.",
"Nice, it's indeed what I'm doing @sgugger , many thanks for your help! I'll adapt the PR",
"Here an example: https://github.com/huggingface/diffusers/blob/fa356bd4da2593c2b91f76c1f63b6238249ec001/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L225",
"@sgugger , thanks for the quick review! I've applied your last nit comments.\r\n\r\n",
"Hi @sgugger and @sanchit-gandhi , again thanks for the review! \r\nI've applied your last nit comments.\r\nDon't hesitate to reach out to me if I need to refactor or improve something! "
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Bark is a TTS model recently added by #24086.
It is made up of 4 main sub-models, which are called sequentially during generation (`BarkModel.generate`). While one sub-model is being used, the others sit idle but still take up a lot of space on the GPU.
This PR therefore proposes a simple, yet effective, hand-rolled offloading of the sub-models.
```python
from transformers import BarkModel, BarkProcessor
# no need to load the model onto GPU, it will be done inside the generate function
model = BarkModel.from_pretrained("suno/bark")
processor = BarkProcessor.from_pretrained("suno/bark")
# works if a GPU is available. Throws a warning if not, and uses default behavior.
device = "cuda"
# input must be put onto the right device, i.e onto the GPU.
input = processor("Hey, it's a test").to(device)
# one simple additional argument
output = model.generate(**input, offload_to_cpu = True)
# output is loaded onto GPU as well
```
With this PR, GPU footprint is around 60% lower, while being less than 10% slower, based on a benchmark I've done and that will be shared soon (`batch_size = 1` on a single TITAN RTX).
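For reference, the hook-based offloading discussed in the review thread above (`accelerate`'s `cpu_offload_with_hook`) chains sub-models roughly like this; the sub-module attribute names are assumptions, not the final implementation:
```python
from accelerate import cpu_offload_with_hook

# Each sub-model stays on CPU and is moved to GPU only when called; passing the
# previous hook makes the previous sub-model offload itself once the next one runs.
hook = None
for sub_model in (model.semantic, model.coarse_acoustics, model.fine_acoustics):
    _, hook = cpu_offload_with_hook(sub_model, execution_device="cuda", prev_module_hook=hook)
```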
TODO:
- [x] write tests
## Who can review?
Hey @sanchit-gandhi and @amyeroberts, I'm tagging you because you were the ones reviewing #24086 and because the changes are really small!
I'm wondering whether the code I've written conforms to the transformer paradigm and whether I need to raise additional warnings or errors in extreme cases!
Many thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25037/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25037",
"html_url": "https://github.com/huggingface/transformers/pull/25037",
"diff_url": "https://github.com/huggingface/transformers/pull/25037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25037.patch",
"merged_at": 1690468517000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25036/events
|
https://github.com/huggingface/transformers/issues/25036
| 1,818,214,248 |
I_kwDOCUB6oc5sX79o
| 25,036 |
[BUG] RecursionError: maximum recursion depth exceeded when loading LLaMA-1, but loading LLaMA-2 succeeds
|
{
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I set a small recursion limit to 50 and I now got this trace back for loading LLaMA-1:\r\n```\r\n File \"/mnt/home//Open-Assistant/model/model_training/trainer_sft.py\", line 481, in <module>\r\n main()\r\n File \"/mnt/home//Open-Assistant/model/model_training/trainer_sft.py\", line 333, in main\r\n tokenizer = get_tokenizer(training_conf)\r\n File \"/mnt/home//Open-Assistant/model/model_training/utils/utils.py\", line 214, in get_tokenizer\r\n tokenizer = transformers.AutoTokenizer.from_pretrained(tokenizer_name, cache_dir=conf.cache_dir)\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 702, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 1841, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 2004, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py\", line 126, in __init__\r\n self.update_post_processor()\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py\", line 136, in update_post_processor\r\n bos_token_id = self.bos_token_id\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 1136, in bos_token_id\r\n return self.convert_tokens_to_ids(self.bos_token)\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py\", line 250, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py\", line 257, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n...\r\n```\r\nUntil it exceeded the limit.",
"FYI, I can load LLaMA-1 using `transformers==4.28.1`",
"Hey! Thanks for reporting, this is very similar to #22415, and #22414.\r\nHaving a minimal reproducer would be better, I cannot reproduce this out-of-the-box, have no idea what `tokenizer_config` they are using or what not. \r\n\r\nFrom the traceback, I created a minimal reproducer:\r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"huggyllama/llama-7b\", unk_token = \"<endoftext>\", bos_token=\"<endoftext>\", eos_token=\"<endoftext>\")\r\n```\r\nIn this case, the `<endoftext>` token does not exist, and since there are a few issues with adding tokens when initializing, cf #23909 after calling `super().__init__()` the token is still not part of the vocab. \r\n\r\n\r\nIt works with `transformers==4.28.1`, because the tokenizer did not have the `self.update_post_processor()`. \r\n\r\nThere are no real quick fixes appart from downgrading for now, but I'll probably either remove the call in the init, or fixing token addition will make sure the token is properly added after calling `super().__init__()`. ",
"@ArthurZucker \r\nTo reproduce:\r\n`tokenizer_config.json`:\r\n```json\r\n{\"bos_token\": \"\", \"eos_token\": \"\", \"model_max_length\": 1000000000000000019884624838656, \"tokenizer_class\": \"LlamaTokenizer\", \"unk_token\": \"\"}\r\n```\r\nThis is an old version of LLaMA-1, I think it is from [here](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main)(need to edit `LLaMATokenizer` to `LlamaTokenizer`).",
"Transformers versions 4.31.0\r\nThis also hits the same issue\r\n\r\n```\r\nfrom tokenizers.trainers import BpeTrainer\r\nfrom tokenizers.pre_tokenizers import Whitespace\r\nfrom tokenizers import Tokenizer\r\nfrom tokenizers.models import BPE\r\nfrom datasets import load_dataset\r\nfrom transformers import LlamaConfig, LlamaTokenizer, LlamaForCausalLM, LlamaTokenizerFast, PreTrainedTokenizerFast\r\n\r\ndataset = load_dataset(\"tiny_shakespeare\")\r\ntokenizer = Tokenizer(BPE(unk_token=\"[UNK]\"))\r\ntrainer = BpeTrainer(vocab_size=1024)\r\ntokenizer.pre_tokenizer = Whitespace()\r\ntokenizer.train_from_iterator(dataset['train']['text'], trainer=trainer)\r\n\r\n# tokenizer.encode(\"hello world\")\r\ntokenizer.save('tiny.json')\r\nt_tok1 = PreTrainedTokenizerFast(tokenizer_file='tiny.json') # works\r\nt_tok2 = LlamaTokenizerFast(tokenizer_file='tiny.json') # fails\r\n```",
"```\r\ntrainer = BpeTrainer(special_tokens=[\"<unk>\", \"<s>\", \"</s>\"], vocab_size=1024)\r\n```\r\n\r\nGot it to work by doing this",
"Thanks for debugging! ",
"@ArthurZucker I just ran into this issue today when doing:\r\n```py\r\nfrom transformers import pipeline\r\npipeline('text-generation', 'JackFram/llama-160m')\r\n```\r\n\r\nit does indeed look like the problem is what you mentioned. Here's the [tokenizer_config.json](https://huggingface.co/JackFram/llama-160m/blob/main/tokenizer_config.json).\r\n\r\nEdit: Author merged by PR to fix this [here](https://huggingface.co/JackFram/llama-160m/discussions/9#64f3df59da084738feffa4ae), but the error still remains in transformers.\r\n",
"That is expected, the[ special tokens map file is still wrong](https://huggingface.co/JackFram/llama-160m/blob/main/special_tokens_map.json) ๐ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Will close ass #23909 should have resolved it",
"Using `transformers=4.34.1` and python 3.9.18 I still encounter the the recursion depth limit which should have been fixed with #23909 if I understood correctly?\r\nHere is my code:\r\n``` python\r\nimport transformers\r\nimport sys\r\nsys.setrecursionlimit(100)\r\nprint(transformers.__version__)\r\ntokenizer = transformers.LlamaTokenizer.from_pretrained('baffo32/decapoda-research-llama-7B-hf')\r\nprint(\"Done LlamaTokenizer\")\r\ntokenizer = transformers.AutoTokenizer.from_pretrained('baffo32/decapoda-research-llama-7B-hf')\r\nprint(\"Done AutoTokenizer\")\r\n```\r\nThis is the `tokenizer_config.json`:\r\n``` json\r\n{\"bos_token\": \"\", \"eos_token\": \"\", \"model_max_length\": 1000000000000000019884624838656, \"tokenizer_class\": \"LlamaTokenizer\", \"unk_token\": \"\"}\r\n```\r\nThe error I am getting is:\r\n``` shell\r\n4.34.1\r\nYou are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565\r\nDone LlamaTokenizer\r\nTraceback (most recent call last):\r\n File \"/home/r403k/Projects/huggingface/minimal_working_example.py\", line 10, in <module>\r\n tokenizer = transformers.AutoTokenizer.from_pretrained('baffo32/decapoda-research-llama-7B-hf', use_local_files=False)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py\", line 751, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 2017, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 2249, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py\", line 134, in __init__\r\n self.update_post_processor()\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py\", line 147, in update_post_processor\r\n bos_token_id = self.bos_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1172, in bos_token_id\r\n return self.convert_tokens_to_ids(self.bos_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 329, in convert_tokens_to_ids\r\n return self._convert_token_to_id_with_added_voc(tokens)\r\n File 
\"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py\", line 336, in _convert_token_to_id_with_added_voc\r\n return self.unk_token_id\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1191, in unk_token_id\r\n return self.convert_tokens_to_ids(self.unk_token)\r\n File \"/home/r403k/miniconda3/envs/openflamingo/lib/python3.9/site-packages/transformers/tokenization_utils_base.py\", line 1057, in unk_token\r\n return str(self._unk_token)\r\nRecursionError: maximum recursion depth exceeded while getting the str of an object\r\n```",
"Hey! It was indeed fixed for other models, but Llama is a bit specific, we call `self.update_post_processor()` which makes sure that the eos token is added. This should work without a bos token, but you can't have no `unk_token`, this is pretty much a requirement for all our tokenizers. Not sure I can fix this as raising an error would not be BC :/ would recommend doing this: \r\n```python \r\n>>> tokenizer = transformers.LlamaTokenizer.from_pretrained('baffo32/decapoda-research-llama-7B-hf', unk_token=\"<unk>\") \r\n```",
"Hey Arthur!\r\nthank you for the quick reply! I only just now saw [this](https://github.com/huggingface/transformers/issues/24318#issuecomment-1596801322) where you solved my issue as well.\r\nSetting the `unk_token` in the `tokenizer_config.json` was sufficient as well."
] | 1,690 | 1,699 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) script for [SFT](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/trainer_sft.py).
When loading LLaMA-1, I got this error:
```
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
RecursionError: maximum recursion depth exceeded
```
It seems to go into an infinite loop; loading LLaMA-2 succeeds.
### Expected behavior
Both LLaMA-1 and LLaMA-2 load successfully.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25036/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25035/events
|
https://github.com/huggingface/transformers/pull/25035
| 1,818,130,231 |
PR_kwDOCUB6oc5WN1gh
| 25,035 |
fix(integrations): store serialized `TrainingArgs` to `wandb.config` without sanitization.
|
{
"login": "parambharat",
"id": 12809212,
"node_id": "MDQ6VXNlcjEyODA5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parambharat",
"html_url": "https://github.com/parambharat",
"followers_url": "https://api.github.com/users/parambharat/followers",
"following_url": "https://api.github.com/users/parambharat/following{/other_user}",
"gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parambharat/subscriptions",
"organizations_url": "https://api.github.com/users/parambharat/orgs",
"repos_url": "https://api.github.com/users/parambharat/repos",
"events_url": "https://api.github.com/users/parambharat/events{/privacy}",
"received_events_url": "https://api.github.com/users/parambharat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,704 | 1,690 |
CONTRIBUTOR
| null |
Allows resuming training runs when reusing the wandb config.
# What does this PR do?
Currently the `WandbLogger` uses the `to_sanitized_dict()` method in `TrainingArguments` to serialize the training hyperparameters. This converts nested objects and `NoneType` objects to `str` for safe serialization. However, using the stored config when resuming a training run leads to issues while initializing the `TrainingArguments` from the `wandb.run.config`. This PR fixes this by instead using the `to_dict` method to serialize the `TrainingArguments`. The resulting dictionary can be stored in the `wandb.config` and reused to resume training runs.
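For illustration, a minimal sketch of the difference (not the integration's actual code; the project name below is a placeholder):
```python
import wandb
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to=["wandb"])

# `to_sanitized_dict()` casts nested and `None` values to `str` so they can be
# logged as flat hyperparameters, which makes the stored config lossy.
sanitized = args.to_sanitized_dict()

# `to_dict()` keeps the original types, so the dictionary stored in
# `wandb.config` can later be reused to rebuild the `TrainingArguments`.
run = wandb.init(project="demo")  # hypothetical project name
run.config.update(args.to_dict(), allow_val_change=True)
```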
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
- trainer: @sgugger , @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25035/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25035",
"html_url": "https://github.com/huggingface/transformers/pull/25035",
"diff_url": "https://github.com/huggingface/transformers/pull/25035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25035.patch",
"merged_at": 1690202559000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25034/events
|
https://github.com/huggingface/transformers/pull/25034
| 1,818,084,947 |
PR_kwDOCUB6oc5WNreU
| 25,034 |
Support GatedRepoError + use raise from
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger thanks for the review :)\r\nI made the requested change and updated another error message as well to use `token` instead of `use_auth_token` (the error message for generic `RepoNotFoundError`)"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
(PR started after comment from @osanseviero [on slack](https://huggingface.slack.com/archives/C03V11RNS7P/p1690185858721759?thread_ts=1689956871.406059&cid=C03V11RNS7P) -private link)
This PR adds 2 things:
- raise a more specific error in case of `GatedRepoError` when downloading a file. `GatedRepoError` is a subclass of `RepoNotFoundError` for the case where the repo is actually found but the user doesn't have access to it (the inheritance is there for backward compatibility)
- when raising an `EnvironmentError` in `utils/hub.py`, I think it's best to use Python's `raise ... from ...` syntax. This makes debugging much easier for both users and maintainers (a sketch of the pattern is included at the end of this description).
At the moment `GatedRepoError` is triggered only if a token is passed, but a [PR in moon-landing](https://github.com/huggingface/moon-landing/pull/7106) (private link) is opened to also trigger a gated repo error for unauthenticated users.
**Note:** there might be some tests to adapt and I'm willing to do it once the logic is approved
(EDIT: I just checked and in the lowest version of `huggingface_hub` that is supported (0.14.1), GatedRepoError [already exists](https://github.com/huggingface/huggingface_hub/blob/v0.14.1/src/huggingface_hub/utils/_errors.py#L108) so no import issue to worry about)
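For reference, a minimal sketch of the error-chaining pattern (the repo id and messages below are placeholders, not the exact code in `utils/hub.py`):
```python
from huggingface_hub import hf_hub_download
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    hf_hub_download("some-org/gated-model", "config.json")  # hypothetical repo id
except GatedRepoError as e:
    # The repo exists, but the (possibly missing) token has no access to it.
    raise EnvironmentError(
        "You are trying to access a gated repo. Make sure you have been granted "
        "access and pass a valid `token`."
    ) from e
except RepositoryNotFoundError as e:
    # `from e` keeps the original exception in the traceback, which helps debugging.
    raise EnvironmentError(
        "Repository not found, or you don't have permission to access it."
    ) from e
```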
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25034/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25034",
"html_url": "https://github.com/huggingface/transformers/pull/25034",
"diff_url": "https://github.com/huggingface/transformers/pull/25034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25034.patch",
"merged_at": 1690204359000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25033/events
|
https://github.com/huggingface/transformers/pull/25033
| 1,818,042,103 |
PR_kwDOCUB6oc5WNiF7
| 25,033 |
[`logging.py`] set default `stderr` path if `None`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Monkey patching globals like `sys.stdout` and `sys.stderr` is more of an emergency hack than a fix that libraries should be using. The proper way to handle Python windowed mode is to guard any direct usage of `sys.stderr` in `if sys.stderr is not None` blocks.\r\n\r\n```python\r\nsys.stderr.write(\"hello\") # This is broken\r\n\r\n# Any of these are fine\r\nif sys.stderr:\r\n sys.stderr.write(\"hello\")\r\nprint(\"hello\", file=sys.stderr)\r\n```\r\n\r\n> Though it is not entirely related to transformers, adding a safeguard seems like a good practice\r\n\r\nIt is related to any library that uses `sys.stderr` directly. "
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Attempt to fix #24047
Thanks to the contributors, it seems the issue [is known](https://github.com/pyinstaller/pyinstaller/issues/7334#issuecomment-1357447176).
Though it is not entirely related to `transformers`, adding a safeguard seems like good practice.
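A rough sketch of the kind of safeguard meant here (an assumption about its shape, not the exact patch to `logging.py`):
```python
import logging
import os
import sys

def _default_handler_stream():
    # Under pythonw / PyInstaller windowed mode, `sys.stderr` can be None,
    # so fall back to a harmless sink instead of failing at import time.
    return sys.stderr if sys.stderr is not None else open(os.devnull, "w")

handler = logging.StreamHandler(_default_handler_stream())
```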
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25033/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25033",
"html_url": "https://github.com/huggingface/transformers/pull/25033",
"diff_url": "https://github.com/huggingface/transformers/pull/25033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25033.patch",
"merged_at": 1690201905000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25032/events
|
https://github.com/huggingface/transformers/issues/25032
| 1,818,019,097 |
I_kwDOCUB6oc5sXMUZ
| 25,032 |
Allow resuming of logging to WANDB with the Trainer
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @parambharat who has done a lot of work on the Wandb integration",
"Hi @BramVanroy , Thanks for bringing this up. If I understood the issue correctly, you want to resume a wandb run while resuming training. There are two solutions I can think of that don't require changing `TrainingArguments`. Can you try these instead ?\r\n\r\n\r\n1. Set the env vars `WANDB_RESUME=\"must\"` and `WANDB_RUN_ID=<run_id_you_want_to_resume>` before running your training script. wandb should read these env vars upon initialization and should resume your training run. [W&B Env Vars Reference](https://docs.wandb.ai/guides/track/environment-variables)\r\n2. Initialize a `run` with `wandb.init(resume=must, id=<run_id_you_want_to_resume>` before initializing the `Trainer`. The trainer will [not initialize a new run](https://github.com/huggingface/transformers/blob/8f1f0bf50f402881c0aa53b18f21736a151adf5b/src/transformers/integrations.py#L733C16-L733C38) if a run already exists. Here's a [colab](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/wandb_hf_example.ipynb#scrollTo=RK5HRy1JQ0yX) example of this from the wandb examples repo.",
"@parambharat Oh, I was not aware of the resume environment variable! That would make life indeed much easier in combination with the wandb run ID!\r\n\r\nA run's ID is the alphanumeric part of the URL, right? So in this example\r\n\r\n> https://wandb.ai/username/projectname/runs/phup4zp1/\r\n\r\nit would be `phup4zp1`?\r\n\r\nIf it's that easy then I am a happy man! ",
"@BramVanroy , Yes, that's the `run_id` ๐ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Ah, sorry forgot to come back to this. This works indeed, thanks a lot @parambharat!"
] | 1,690 | 1,692 | 1,692 |
COLLABORATOR
| null |
### Feature request
As far as I can tell, it is currently not possible to resume a training run and continue logging to the same run on WANDB when using the Trainer. The reason is that WANDB would [require](https://github.com/wandb/wandb/blob/a32638cb1c6ab775e9ed431d9a9b4b8a30685453/wandb/sdk/wandb_init.py#L1036-L1053) you to set `resume=True` and a run `id` in `wandb.init` (or env `WANDB_RUN_ID`) for this to work.
The Trainer currently does not allow for these options as far as I can see
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L726-L737
### Motivation
Make it easy to resume logging to wandb without any code changes, directly through a config or CLI (TrainingArguments).
### Your contribution
I can work on this. I would update TrainingArguments to add two new arguments:
- `wandb_resume`
- `wandb_id` (which backs off to the environment variable `WANDB_RUN_ID`)
which would then be passed to the `wandb` integration Callback as part of `args`
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L684
which can then be used in `init`
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L725-L737
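For context, a hedged sketch of what such options would map to with wandb's existing resume API (the project name and run id below are placeholders):
```python
import wandb

# Resume run "phup4zp1" (the alphanumeric part of the run URL).
wandb.init(project="projectname", id="phup4zp1", resume="must")

# Equivalent via environment variables, read by wandb at init time:
#   WANDB_RUN_ID=phup4zp1 WANDB_RESUME=must python train.py
```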
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25032/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25031/events
|
https://github.com/huggingface/transformers/pull/25031
| 1,818,014,116 |
PR_kwDOCUB6oc5WNb-Z
| 25,031 |
Fix doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fix a recently added doctest: precision issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25031/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25031",
"html_url": "https://github.com/huggingface/transformers/pull/25031",
"diff_url": "https://github.com/huggingface/transformers/pull/25031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25031.patch",
"merged_at": 1690315807000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25030/events
|
https://github.com/huggingface/transformers/pull/25030
| 1,817,994,455 |
PR_kwDOCUB6oc5WNXvT
| 25,030 |
[`generate`] Only warn users if the `generation_config`'s `max_length` is set to the default value
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger as discussed on Slack with @ArthurZucker: `max_new_tokens` can't be used to set a default `generation_config` that works well out of the box -- `max_length` does. As such, let's adapt the warning to enable us (and the users) to set good default arguments."
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Updates the warning condition so that `generate` only warns users when the `generation_config`'s `max_length` is still set to the default value.
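A hedged illustration of the intended user-facing behaviour (`gpt2` is just a small example checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("Hello", return_tensors="pt")

model.generate(**inputs)                     # max_length left at its default -> warning expected
model.generate(**inputs, max_new_tokens=32)  # explicit length set by the user -> no warning
```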
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25030/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25030",
"html_url": "https://github.com/huggingface/transformers/pull/25030",
"diff_url": "https://github.com/huggingface/transformers/pull/25030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25030.patch",
"merged_at": 1690287638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25029/events
|
https://github.com/huggingface/transformers/issues/25029
| 1,817,828,264 |
I_kwDOCUB6oc5sWduo
| 25,029 |
Bug in AutoTokenizer for some open-source models such as Falcon.
|
{
"login": "Alkahwaji",
"id": 23569519,
"node_id": "MDQ6VXNlcjIzNTY5NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23569519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alkahwaji",
"html_url": "https://github.com/Alkahwaji",
"followers_url": "https://api.github.com/users/Alkahwaji/followers",
"following_url": "https://api.github.com/users/Alkahwaji/following{/other_user}",
"gists_url": "https://api.github.com/users/Alkahwaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alkahwaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alkahwaji/subscriptions",
"organizations_url": "https://api.github.com/users/Alkahwaji/orgs",
"repos_url": "https://api.github.com/users/Alkahwaji/repos",
"events_url": "https://api.github.com/users/Alkahwaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alkahwaji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Could you provide a full trace to know where the error come from?\r\nI cannot reproduce the error with the script you provided. Make sure you are using the latest version of transformers too! ",
"Thanks. \r\n\r\n\r\n\r\n",
"It seems that the [SIGALARM is not available on Windows](https://stackoverflow.com/questions/52779920/why-is-signal-sigalrm-not-working-in-python-on-windows). But I am not very familiar with this, will let @sgugger answer! \r\n\r\nAs per the[ contribution guidelines](https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/CONTRIBUTING.md), could you provide the platform you are running this on? \r\n- which version of python are you using? \r\n- which version of transformers are you using? \r\n- are you on linux or windows? ",
"You can avoid this error by passing `trust_remote_code=True` (which is needed for this model). We will make the error message clearer when signal is not available."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
I want to compute sentence differences and use AutoTokenizer with a Falcon model (40B and 7B), but I usually receive an attribute error: module 'signal' has no attribute 'SIGALRM'.

@younesbelkada @ArthurZucker @Rocketknight1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct")
```
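As noted in the comments above, a sketch of the suggested workaround is to pass `trust_remote_code=True` when loading (loading the 40B checkpoint still requires enough memory):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```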

### Expected behavior
A method to get word embeddings using sentence-transformers or AutoTokenizer with open-source models such as Falcon or LLaMA-2.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25029/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25028/events
|
https://github.com/huggingface/transformers/issues/25028
| 1,817,672,611 |
I_kwDOCUB6oc5sV3uj
| 25,028 |
[docs] duplicate table of contents in perf_infer_gpu_one.mdx
|
{
"login": "eenzeenee",
"id": 71638597,
"node_id": "MDQ6VXNlcjcxNjM4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eenzeenee",
"html_url": "https://github.com/eenzeenee",
"followers_url": "https://api.github.com/users/eenzeenee/followers",
"following_url": "https://api.github.com/users/eenzeenee/following{/other_user}",
"gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions",
"organizations_url": "https://api.github.com/users/eenzeenee/orgs",
"repos_url": "https://api.github.com/users/eenzeenee/repos",
"events_url": "https://api.github.com/users/eenzeenee/events{/privacy}",
"received_events_url": "https://api.github.com/users/eenzeenee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I can't see this on the main branch, maybe this was fixed by the recent refactor of this guide?",
"Good catch! I don't think this [doc](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one) was refactored in the recent PR (it was just moved), and I'm getting the same error on the `main` branch.\r\n\r\nWould you like to open a PR with your proposed fix? ๐ค ",
"Thanks for your feedbacks. I opened PR https://github.com/huggingface/transformers/pull/25066!"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
## Description
There are duplicate titles [**Requirements**] in `perf_infer_gpu_one.md` at lines 51 and 117, which causes an error when navigating via the table of contents.
## Document / Language
`perf_infer_gpu_one.md` / en
## Suggestion
line 51
As is :
```### Requirements```
To be :
```### Requirements [[requirements-for-fp4-mixedprecision-inference]]```
line 117
As is :
```### Requirements```
To be :
```### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]```
Please let me know if I missed something in the guidelines.
Thank you in advance for your attention to it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25028/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25027/events
|
https://github.com/huggingface/transformers/issues/25027
| 1,817,488,181 |
I_kwDOCUB6oc5sVKs1
| 25,027 |
Llama-2 7B fine-tuning with DeepSpeed hits an OOM error while loading the best model at the end when `load_best_model_at_end` is specified as True.
|
{
"login": "Neo9061",
"id": 8206465,
"node_id": "MDQ6VXNlcjgyMDY0NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8206465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Neo9061",
"html_url": "https://github.com/Neo9061",
"followers_url": "https://api.github.com/users/Neo9061/followers",
"following_url": "https://api.github.com/users/Neo9061/following{/other_user}",
"gists_url": "https://api.github.com/users/Neo9061/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Neo9061/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Neo9061/subscriptions",
"organizations_url": "https://api.github.com/users/Neo9061/orgs",
"repos_url": "https://api.github.com/users/Neo9061/repos",
"events_url": "https://api.github.com/users/Neo9061/events{/privacy}",
"received_events_url": "https://api.github.com/users/Neo9061/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, I'm able to run the following minimal example with any issues:\r\n\r\n```\r\nexport WANDB_DISABLED=\"true\"\r\nexport CUDA_VISIBLE_DEVICES=\"0,1\"\r\ncd transformers\r\ndeepspeed --num_nodes 1 --num_gpus 2 --master_port 10999 /home/sourab/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --max_train_samples 30 --max_eval_samples 10 --block_size 512 --overwrite_output_dir --gradient_checkpointing --save_strategy \"steps\" --evaluation_strategy \"steps\" --eval_steps 10 --save_steps 10 --load_best_model_at_end --output_dir /tmp/test-clm --deepspeed /home/sourab/transformers/tests/deepspeed/ds_config_zero3.json\r\n```\r\n\r\noutput:\r\n```\r\n2023-07-24 10:39:47,947] [INFO] [config.py:950:print_user_config] json = {\r\n \"fp16\": {\r\n \"enabled\": false, \r\n \"loss_scale\": 0, \r\n \"loss_scale_window\": 1000, \r\n \"initial_scale_power\": 16, \r\n \"hysteresis\": 2, \r\n \"min_loss_scale\": 1\r\n }, \r\n \"bf16\": {\r\n \"enabled\": false\r\n }, \r\n \"optimizer\": {\r\n \"type\": \"AdamW\", \r\n \"params\": {\r\n \"lr\": 5e-05, \r\n \"betas\": [0.9, 0.999], \r\n \"eps\": 1e-08, \r\n \"weight_decay\": 0.0\r\n }\r\n }, \r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\", \r\n \"params\": {\r\n \"warmup_min_lr\": 0, \r\n \"warmup_max_lr\": 5e-05, \r\n \"warmup_num_steps\": 0\r\n }\r\n }, \r\n \"zero_optimization\": {\r\n \"stage\": 3, \r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": true\r\n }, \r\n \"offload_param\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": true\r\n }, \r\n \"overlap_comm\": true, \r\n \"contiguous_gradients\": true, \r\n \"sub_group_size\": 1.000000e+09, \r\n \"reduce_bucket_size\": 5.898240e+05, \r\n \"stage3_prefetch_bucket_size\": 5.308416e+05, \r\n \"stage3_param_persistence_threshold\": 7.680000e+03, \r\n \"stage3_max_live_parameters\": 1.000000e+09, \r\n \"stage3_max_reuse_distance\": 1.000000e+09, \r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n }, \r\n \"gradient_accumulation_steps\": 1, \r\n \"gradient_clipping\": 1.0, \r\n \"steps_per_print\": inf, \r\n \"train_batch_size\": 2, \r\n \"train_micro_batch_size_per_gpu\": 1, \r\n \"wall_clock_breakdown\": false\r\n}\r\n[INFO|trainer.py:1682] 2023-07-24 10:39:47,947 >> ***** Running training *****\r\n[INFO|trainer.py:1683] 2023-07-24 10:39:47,947 >> Num examples = 30\r\n[INFO|trainer.py:1684] 2023-07-24 10:39:47,947 >> Num Epochs = 3\r\n[INFO|trainer.py:1685] 2023-07-24 10:39:47,947 >> Instantaneous batch size per device = 1\r\n[INFO|trainer.py:1688] 2023-07-24 10:39:47,947 >> Total train batch size (w. parallel, distributed & accumulation) = 2\r\n[INFO|trainer.py:1689] 2023-07-24 10:39:47,947 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1690] 2023-07-24 10:39:47,947 >> Total optimization steps = 45\r\n[INFO|trainer.py:1691] 2023-07-24 10:39:47,947 >> Number of trainable parameters = 124,439,808\r\n 0%| | 0/45 [00:00<?, ?it/s][WARNING|logging.py:295] 2023-07-24 10:39:48,027 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...\r\n[WARNING|logging.py:295] 2023-07-24 10:39:48,027 >> `use_cache=True` is incompatible with gradient checkpointing. 
Setting `use_cache=False`...\r\n 22%|โโโโโโโโโโโโโโโโโโโโ | 10/45 [00:05<00:15, 2.27it/s][INFO|trainer.py:3081] 2023-07-24 10:39:53,150 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-24 10:39:53,150 >> Num examples = 10\r\n[INFO|trainer.py:3086] 2023-07-24 10:39:53,151 >> Batch size = 1\r\n{'eval_loss': 3.356262683868408, 'eval_accuracy': 0.3947162426614481, 'eval_runtime': 0.5527, 'eval_samples_per_second': 18.092, 'eval_steps_per_second': 9.046, 'epoch': 0.67} \r\n 22%|โโโโโโโโโโโโโโโโโโโโ | 10/45 [00:05<00:15, 2.27it/s[INFO|trainer.py:2807] 2023-07-24 10:39:53,991 >> Saving model checkpoint to /tmp/test-clm/checkpoint-10 \r\n[INFO|configuration_utils.py:458] 2023-07-24 10:39:53,991 >> Configuration saved in /tmp/test-clm/checkpoint-10/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-24 10:39:53,992 >> Configuration saved in /tmp/test-clm/checkpoint-10/generation_config.json\r\n[INFO|modeling_utils.py:1855] 2023-07-24 10:39:54,649 >> Model weights saved in /tmp/test-clm/checkpoint-10/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:39:54,650 >> tokenizer config file saved in /tmp/test-clm/checkpoint-10/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:39:54,650 >> Special tokens file saved in /tmp/test-clm/checkpoint-10/special_tokens_map.json\r\n[2023-07-24 10:39:54,735] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-07-24 10:39:54,738] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-07-24 10:39:54,738] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:39:54,744] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:39:54,744] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-24 10:39:57,379] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-24 10:39:57,379] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-07-24 10:39:57,386] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!\r\n 44%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 20/45 [00:13<00:12, 2.07it/s][INFO|trainer.py:3081] 2023-07-24 10:40:01,597 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-24 10:40:01,598 >> Num examples = 10\r\n[INFO|trainer.py:3086] 2023-07-24 10:40:01,598 >> Batch size = 1\r\n{'eval_loss': 3.3019282817840576, 'eval_accuracy': 0.40371819960861055, 'eval_runtime': 0.3621, 'eval_samples_per_second': 27.618, 'eval_steps_per_second': 13.809, 'epoch': 1.33} \r\n 44%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 20/45 [00:14<00:12, 2.07it/s[INFO|trainer.py:2807] 2023-07-24 10:40:02,302 >> Saving model checkpoint to /tmp/test-clm/checkpoint-20 \r\n[INFO|configuration_utils.py:458] 2023-07-24 10:40:02,303 >> Configuration saved in /tmp/test-clm/checkpoint-20/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-24 10:40:02,303 >> Configuration saved in /tmp/test-clm/checkpoint-20/generation_config.json\r\n[INFO|modeling_utils.py:1855] 2023-07-24 10:40:02,971 >> Model weights saved in /tmp/test-clm/checkpoint-20/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:02,971 >> tokenizer config file saved in /tmp/test-clm/checkpoint-20/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:02,972 >> Special tokens file saved in /tmp/test-clm/checkpoint-20/special_tokens_map.json\r\n[2023-07-24 10:40:03,063] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step20 is about to be saved!\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-07-24 10:40:03,066] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-07-24 10:40:03,066] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:40:03,080] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:40:03,081] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-24 10:40:06,196] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-24 10:40:06,197] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-07-24 10:40:06,204] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step20 is ready now!\r\n 67%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 30/45 [00:22<00:07, 2.01it/s][INFO|trainer.py:3081] 2023-07-24 10:40:10,531 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-24 10:40:10,531 >> Num examples = 10\r\n[INFO|trainer.py:3086] 2023-07-24 10:40:10,531 >> Batch size = 1\r\n{'eval_loss': 3.2902770042419434, 'eval_accuracy': 0.40332681017612526, 'eval_runtime': 0.4135, 'eval_samples_per_second': 24.186, 'eval_steps_per_second': 12.093, 'epoch': 2.0} \r\n 67%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 30/45 [00:22<00:07, 2.01it/s[INFO|trainer.py:2807] 2023-07-24 10:40:11,199 >> Saving model checkpoint to /tmp/test-clm/checkpoint-30 \r\n[INFO|configuration_utils.py:458] 2023-07-24 10:40:11,200 >> Configuration saved in /tmp/test-clm/checkpoint-30/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-24 10:40:11,200 >> Configuration saved in /tmp/test-clm/checkpoint-30/generation_config.json\r\n[INFO|modeling_utils.py:1855] 2023-07-24 10:40:12,098 >> Model weights saved in /tmp/test-clm/checkpoint-30/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:12,098 >> tokenizer config file saved in /tmp/test-clm/checkpoint-30/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:12,098 >> Special tokens file saved in /tmp/test-clm/checkpoint-30/special_tokens_map.json\r\n[2023-07-24 10:40:12,188] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step30 is about to be saved!\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-07-24 10:40:12,191] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-07-24 10:40:12,191] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:40:12,197] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:40:12,198] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-24 10:40:15,492] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-24 10:40:15,492] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-07-24 10:40:15,499] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step30 is ready now!\r\n 89%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 40/45 [00:31<00:02, 2.02it/s][INFO|trainer.py:3081] 2023-07-24 10:40:19,832 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-24 10:40:19,832 >> Num examples = 10\r\n[INFO|trainer.py:3086] 2023-07-24 10:40:19,832 >> Batch size = 1\r\n{'eval_loss': 3.3038055896759033, 'eval_accuracy': 0.40136986301369865, 'eval_runtime': 0.4144, 'eval_samples_per_second': 24.13, 'eval_steps_per_second': 12.065, 'epoch': 2.67} \r\n 89%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 40/45 [00:32<00:02, 2.02it/s[INFO|trainer.py:2807] 2023-07-24 10:40:20,497 >> Saving model checkpoint to /tmp/test-clm/checkpoint-40 \r\n[INFO|configuration_utils.py:458] 2023-07-24 10:40:20,497 >> Configuration saved in /tmp/test-clm/checkpoint-40/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-24 10:40:20,498 >> Configuration saved in /tmp/test-clm/checkpoint-40/generation_config.json\r\n[INFO|modeling_utils.py:1855] 2023-07-24 10:40:21,169 >> Model weights saved in /tmp/test-clm/checkpoint-40/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:21,169 >> tokenizer config file saved in /tmp/test-clm/checkpoint-40/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:21,169 >> Special tokens file saved in /tmp/test-clm/checkpoint-40/special_tokens_map.json\r\n[2023-07-24 10:40:21,259] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step40 is about to be saved!\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-07-24 10:40:21,262] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-07-24 10:40:21,262] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:40:21,268] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:40:21,268] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-24 10:40:23,964] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-24 10:40:23,964] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-07-24 10:40:23,971] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step40 is ready now!\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 45/45 [00:38<00:00, 1.37it/s][INFO|trainer.py:1930] 2023-07-24 10:40:26,063 >> \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n[INFO|trainer.py:2089] 2023-07-24 10:40:26,063 >> Loading best model from /tmp/test-clm/checkpoint-30 (score: 3.2902770042419434).\r\n[INFO|deepspeed.py:381] 2023-07-24 10:40:26,063 >> Attempting to resume from /tmp/test-clm/checkpoint-30\r\n[2023-07-24 10:40:26,073] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:40:26,077] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:40:26,078] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-24 10:40:26,082] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-24 10:40:26,086] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-24 10:40:26,479] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-24 10:40:26,479] [INFO] [engine.py:2865:_get_all_zero_checkpoint_state_dicts] successfully read 2 ZeRO state_dicts for rank 0\r\n[2023-07-24 10:40:26,605] [INFO] [engine.py:2815:_load_zero_checkpoint] loading 2 zero partition checkpoints for rank 0\r\n{'train_runtime': 38.7307, 'train_samples_per_second': 2.324, 'train_steps_per_second': 1.162, 'train_loss': 3.3458041720920138, 'epoch': 3.0}\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 45/45 [00:38<00:00, 1.16it/s]\r\n[INFO|trainer.py:2807] 2023-07-24 
10:40:26,966 >> Saving model checkpoint to /tmp/test-clm\r\n[INFO|configuration_utils.py:458] 2023-07-24 10:40:26,967 >> Configuration saved in /tmp/test-clm/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-24 10:40:26,967 >> Configuration saved in /tmp/test-clm/generation_config.json\r\n[INFO|modeling_utils.py:1855] 2023-07-24 10:40:28,333 >> Model weights saved in /tmp/test-clm/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:28,333 >> tokenizer config file saved in /tmp/test-clm/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:28,333 >> Special tokens file saved in /tmp/test-clm/special_tokens_map.json\r\n***** train metrics *****\r\n epoch = 3.0\r\n train_loss = 3.3458\r\n train_runtime = 0:00:38.73\r\n train_samples = 30\r\n train_samples_per_second = 2.324\r\n train_steps_per_second = 1.162\r\n07/24/2023 10:40:28 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:3081] 2023-07-24 10:40:28,418 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-24 10:40:28,418 >> Num examples = 10\r\n[INFO|trainer.py:3086] 2023-07-24 10:40:28,418 >> Batch size = 1\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5/5 [00:00<00:00, 15.77it/s]\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_accuracy = 0.4033\r\n eval_loss = 3.2903\r\n eval_runtime = 0:00:00.38\r\n eval_samples = 10\r\n eval_samples_per_second = 26.017\r\n eval_steps_per_second = 13.009\r\n perplexity = 26.8503\r\n[2023-07-24 10:40:30,989] [INFO] [launch.py:347:main] Process 1140775 exits successfully.\r\n[2023-07-24 10:40:31,991] [INFO] [launch.py:347:main] Process 1140774 exits successfully.\r\n```\r\n\r\n",
"Thanks @pacman100! Which model are you using in above example? previously I am also able to successfully run with GPT-Neo models (a relative small model) but hit issue with large models like Falcon 7B and IIama 2 7B on g5.12xlarge.",
"Hello @Neo9061, above PR https://github.com/huggingface/transformers/pull/25057 should fix this, please confirm the same.",
"Thanks @pacman100 for the quick fix! Just for my understanding, any insight why I used the code from transformers 4.31.0 (shown as below) and still hit the OOM error? I mean for my previous investigation. (for context details, plz see my post above. THX!)\r\n\r\nAt the meanwhile, I am testing your fix above. Will update in this thread. \r\n\r\n```\r\ntrain_result = trainer.train()\r\n\r\ncheckpoint_dirs = sorted(glob.glob(f\"/opt/ml/model/checkpoint-*\"))\r\ncheckpoint_path = checkpoint_dirs[0] # this is because I set total_save_limit as 1\r\nload_path, _ = trainer.model_wrapped.load_checkpoint(\r\n checkpoint_path, load_optimizer_states=False, load_lr_scheduler_states=False\r\n)\r\n\r\ntrainer.save_model()\r\n\r\n```",
"Hello, see this issue: https://github.com/huggingface/accelerate/issues/1707",
"Hi sorry for a probably unrelated problem here.\r\nIf I want to save the model in fp16 version, what should I do? Since I know fp16(AMP) is a way of accelerating the training process and saving mem in some cases, but the saved parameters are still fp32.\r\n\r\nI just wanna do the same sth similar to the Llama model whose parameters are the fp16 version so that we can do faster about inferences.",
"Hi @pacman100 I still see the error using your branch of transfromers. See log below. Please let me know if there is anything you want me to provide. THX!\r\n\r\nSecond thought: for evaluation/inference purpose, I don't need optimizer and lr scheduler. Is there a way to not save those parameters to save some memory?\r\n\r\n\r\n```\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] bfloat16_enabled ............. True\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_parallel_write_pipeline False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_tag_validation_enabled True\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_tag_validation_fail False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f9090172bf0>\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] communication_data_type ...... None\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] curriculum_enabled_legacy .... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] curriculum_params_legacy ..... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] data_efficiency_enabled ...... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dataloader_drop_last ......... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] disable_allgather ............ False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dump_state ................... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dynamic_loss_scale_args ...... None\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_enabled ........... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_gas_boundary_resolution 1\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_layer_name ........ bert.encoder.layer\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_layer_num ......... 
0\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_max_iter .......... 100\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_stability ......... 1e-06\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_tol ............... 0.01\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_verbose ........... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] elasticity_enabled ........... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] flops_profiler_config ........ {\r\n \"enabled\": false, \r\n \"recompute_fwd_factor\": 0.0, \r\n \"profile_step\": 1, \r\n \"module_depth\": -1, \r\n \"top_modules\": 1, \r\n \"detailed\": true, \r\n \"output_file\": null\r\n}\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_auto_cast ............... None\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_enabled ................. False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_master_weights_and_gradients False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] global_rank .................. 0\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] grad_accum_dtype ............. None\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_accumulation_steps .. 2\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_clipping ............ 1.0\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_predivide_factor .... 1.0\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] initial_dynamic_scale ........ 1\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] load_universal_checkpoint .... False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] loss_scale ................... 1.0\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] memory_breakdown ............. False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] mics_hierarchial_params_gather False\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] mics_shard_size .............. -1\r\n[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] nebula_config ................ {\r\n \"enabled\": false, \r\n \"persistent_storage_path\": null, \r\n \"persistent_time_interval\": 100, \r\n \"num_of_version_in_retention\": 2, \r\n \"enable_nebula_load\": true, \r\n \"load_path\": null\r\n}\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_legacy_fusion ...... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_name ............... adamw\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_params ............. {'lr': 6e-06, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.2}\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pipeline ..................... 
{'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pld_enabled .................. False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pld_params ................... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] prescale_gradients ........... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] scheduler_name ............... WarmupLR\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 6e-06, 'warmup_num_steps': 2}\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] sparse_attention ............. None\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] sparse_gradients_enabled ..... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] steps_per_print .............. inf\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] train_batch_size ............. 16\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] train_micro_batch_size_per_gpu 2\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] use_node_local_storage ....... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] wall_clock_breakdown ......... False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] world_size ................... 4\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_allow_untested_optimizer False\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=16777216 allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='cpu', nvme_path=None, buffer_count=5, buffer_size=100,000,000, max_in_cpu=1,000,000,000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=False, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=15099494 param_persistence_threshold=40960 model_persistence_threshold=sys.maxsize max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_enabled ................. True\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_force_ds_cpu_optimizer .. True\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_optimization_stage ...... 
3\r\n[2023-07-25 00:30:36,503] [INFO] [config.py:950:print_user_config] json = {\r\n \"fp16\": {\r\n \"enabled\": false, \r\n \"loss_scale\": 0, \r\n \"loss_scale_window\": 1000, \r\n \"initial_scale_power\": 12, \r\n \"hysteresis\": 2, \r\n \"min_loss_scale\": 1\r\n }, \r\n \"bf16\": {\r\n \"enabled\": true\r\n }, \r\n \"optimizer\": {\r\n \"type\": \"AdamW\", \r\n \"params\": {\r\n \"lr\": 6e-06, \r\n \"betas\": [0.9, 0.999], \r\n \"eps\": 1e-08, \r\n \"weight_decay\": 0.2\r\n }\r\n }, \r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\", \r\n \"params\": {\r\n \"warmup_min_lr\": 0, \r\n \"warmup_max_lr\": 6e-06, \r\n \"warmup_num_steps\": 2\r\n }\r\n }, \r\n \"zero_optimization\": {\r\n \"stage\": 3, \r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": false\r\n }, \r\n \"offload_param\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": false\r\n }, \r\n \"overlap_comm\": true, \r\n \"contiguous_gradients\": true, \r\n \"sub_group_size\": 1.000000e+09, \r\n \"reduce_bucket_size\": 1.677722e+07, \r\n \"stage3_prefetch_bucket_size\": 1.509949e+07, \r\n \"stage3_param_persistence_threshold\": 4.096000e+04, \r\n \"stage3_max_live_parameters\": 1.000000e+09, \r\n \"stage3_max_reuse_distance\": 1.000000e+09, \r\n \"stage3_gather_fp16_weights_on_model_save\": true\r\n }, \r\n \"gradient_accumulation_steps\": 2, \r\n \"gradient_clipping\": 1.0, \r\n \"steps_per_print\": inf, \r\n \"train_batch_size\": 16, \r\n \"train_micro_batch_size_per_gpu\": 2, \r\n \"wall_clock_breakdown\": false\r\n}\r\n[INFO|trainer.py:1682] 2023-07-25 00:30:36,503 >> ***** Running training *****\r\n[INFO|trainer.py:1683] 2023-07-25 00:30:36,503 >> Num examples = 180\r\n[INFO|trainer.py:1684] 2023-07-25 00:30:36,503 >> Num Epochs = 1\r\n[INFO|trainer.py:1685] 2023-07-25 00:30:36,504 >> Instantaneous batch size per device = 2\r\n[INFO|trainer.py:1688] 2023-07-25 00:30:36,504 >> Total train batch size (w. parallel, distributed & accumulation) = 16\r\n[INFO|trainer.py:1689] 2023-07-25 00:30:36,504 >> Gradient Accumulation steps = 2\r\n[INFO|trainer.py:1690] 2023-07-25 00:30:36,504 >> Total optimization steps = 11\r\n[INFO|trainer.py:1682] 2023-07-25 00:30:36,503 >> ***** Running training *****\r\n[INFO|trainer.py:1683] 2023-07-25 00:30:36,503 >> Num examples = 180\r\n[INFO|trainer.py:1684] 2023-07-25 00:30:36,503 >> Num Epochs = 1\r\n[INFO|trainer.py:1685] 2023-07-25 00:30:36,504 >> Instantaneous batch size per device = 2\r\n[INFO|trainer.py:1688] 2023-07-25 00:30:36,504 >> Total train batch size (w. parallel, distributed & accumulation) = 16\r\n[INFO|trainer.py:1689] 2023-07-25 00:30:36,504 >> Gradient Accumulation steps = 2\r\n[INFO|trainer.py:1690] 2023-07-25 00:30:36,504 >> Total optimization steps = 11\r\n[INFO|trainer.py:1691] 2023-07-25 00:30:36,505 >> Number of trainable parameters = 6,738,448,384\r\n[INFO|trainer.py:1691] 2023-07-25 00:30:36,505 >> Number of trainable parameters = 6,738,448,384\r\n0%| | 0/11 [00:00<?, ?it/s]\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. 
Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nYou're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n[WARNING|logging.py:280] 2023-07-25 00:30:36,510 >> You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n[WARNING|logging.py:280] 2023-07-25 00:30:36,510 >> You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n07/25/2023 00:31:11 - INFO - __main__ - !!!!!!At this step throughput is 0.45318892143877243\r\n9%|โ | 1/11 [00:35<05:53, 35.31s/it]\r\n07/25/2023 00:31:42 - INFO - __main__ - !!!!!!At this step throughput is 0.47042510136622717\r\n18%|โโ | 2/11 [01:05<04:51, 32.37s/it]\r\n07/25/2023 00:32:13 - INFO - __main__ - !!!!!!At this step throughput is 0.47886025282245415\r\n27%|โโโ | 3/11 [01:36<04:14, 31.84s/it]\r\n07/25/2023 00:32:44 - INFO - __main__ - !!!!!!At this step throughput is 0.4844130442539049\r\n36%|โโโโ | 4/11 [02:07<03:40, 31.47s/it]\r\n07/25/2023 00:33:15 - INFO - __main__ - !!!!!!At this step throughput is 0.4884299545826904\r\n45%|โโโโโ | 5/11 [02:38<03:07, 31.24s/it]\r\n07/25/2023 00:33:45 - INFO - __main__ - !!!!!!At this step throughput is 0.4916091094101314\r\n55%|โโโโโโ | 6/11 [03:09<02:35, 31.02s/it]\r\n07/25/2023 00:34:17 - INFO - __main__ - !!!!!!At this step throughput is 0.49364129923765976\r\n64%|โโโโโโโ | 7/11 [03:41<02:05, 31.42s/it]\r\n07/25/2023 00:34:48 - INFO - __main__ - !!!!!!At this step throughput is 0.4954246781847558\r\n73%|โโโโโโโโ | 8/11 [04:12<01:33, 31.16s/it]\r\n07/25/2023 00:35:18 - INFO - __main__ - !!!!!!At this step throughput is 0.4971914292369494\r\n82%|โโโโโโโโโ | 9/11 [04:41<01:01, 30.68s/it]\r\n07/25/2023 00:35:48 - INFO - __main__ - !!!!!!At this step throughput is 0.49877618579058647\r\n91%|โโโโโโโโโ | 10/11 [05:11<00:30, 30.55s/it]\r\n{'loss': 1.7188, 'learning_rate': 6e-06, 'epoch': 0.87}\r\n91%|โโโโโโโโโ | 10/11 [05:11<00:30, 30.55s/it]\r\n[INFO|trainer.py:3080] 2023-07-25 00:35:48,400 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3080] 2023-07-25 00:35:48,400 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3082] 2023-07-25 00:35:48,400 >> Num examples = 20\r\n[INFO|trainer.py:3085] 2023-07-25 00:35:48,400 >> Batch size = 8\r\n[INFO|trainer.py:3082] 2023-07-25 00:35:48,400 >> Num examples = 20\r\n[INFO|trainer.py:3085] 2023-07-25 00:35:48,400 >> Batch size = 8\r\n0%| | 0/1 [00:00<?, ?it/s]#033[A\r\n#033[A\r\n{'eval_loss': 1.104188323020935, 'eval_runtime': 3.1127, 'eval_samples_per_second': 6.425, 
'eval_steps_per_second': 0.321, 'epoch': 0.87}\r\n91%|โโโโโโโโโ | 10/11 [05:15<00:30, 30.55s/it]\r\n#015100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 1080.45it/s]\r\n#033[A\r\n#033[A\r\n[INFO|trainer.py:2806] 2023-07-25 00:36:03,394 >> Saving model checkpoint to /opt/ml/model/checkpoint-10\r\n[INFO|trainer.py:2806] 2023-07-25 00:36:03,394 >> Saving model checkpoint to /opt/ml/model/checkpoint-10\r\n[INFO|configuration_utils.py:458] 2023-07-25 00:36:03,394 >> Configuration saved in /opt/ml/model/checkpoint-10/config.json\r\n[INFO|configuration_utils.py:458] 2023-07-25 00:36:03,394 >> Configuration saved in /opt/ml/model/checkpoint-10/config.json\r\n[INFO|configuration_utils.py:379] 2023-07-25 00:36:03,395 >> Configuration saved in /opt/ml/model/checkpoint-10/generation_config.json\r\n[INFO|configuration_utils.py:379] 2023-07-25 00:36:03,395 >> Configuration saved in /opt/ml/model/checkpoint-10/generation_config.json\r\n[INFO|modeling_utils.py:1863] 2023-07-25 00:36:15,055 >> The model is bigger than the maximum size per checkpoint (10GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at /opt/ml/model/checkpoint-10/pytorch_model.bin.index.json.\r\n[INFO|modeling_utils.py:1863] 2023-07-25 00:36:15,055 >> The model is bigger than the maximum size per checkpoint (10GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at /opt/ml/model/checkpoint-10/pytorch_model.bin.index.json.\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-25 00:36:15,055 >> tokenizer config file saved in /opt/ml/model/checkpoint-10/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2210] 2023-07-25 00:36:15,055 >> tokenizer config file saved in /opt/ml/model/checkpoint-10/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-25 00:36:15,055 >> Special tokens file saved in /opt/ml/model/checkpoint-10/special_tokens_map.json\r\n[INFO|tokenization_utils_base.py:2217] 2023-07-25 00:36:15,055 >> Special tokens file saved in /opt/ml/model/checkpoint-10/special_tokens_map.json\r\n[2023-07-25 00:36:15,659] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!\r\n/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-07-25 00:36:15,675] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-07-25 00:36:15,675] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-25 00:36:15,689] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-25 00:36:15,689] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-25 00:37:16,991] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-07-25 00:37:16,992] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-07-25 00:37:17,699] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!\r\n07/25/2023 00:37:49 - INFO - __main__ - !!!!!!At this step throughput is 0.49004957528181253\r\n100%|โโโโโโโโโโ| 11/11 [07:12<00:00, 58.13s/it]\r\n[INFO|trainer.py:1930] 2023-07-25 00:37:49,056 >> \r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n[INFO|trainer.py:1930] 2023-07-25 00:37:49,056 >> \r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n[INFO|trainer.py:2089] 2023-07-25 00:37:49,058 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.104188323020935).\r\n[INFO|trainer.py:2089] 2023-07-25 00:37:49,058 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.104188323020935).\r\n[INFO|deepspeed.py:381] 2023-07-25 00:37:49,060 >> Attempting to resume from /opt/ml/model/checkpoint-10\r\n[INFO|deepspeed.py:381] 2023-07-25 00:37:49,060 >> Attempting to resume from /opt/ml/model/checkpoint-10\r\n[2023-07-25 00:37:49,109] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-25 00:37:49,143] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-25 00:37:49,151] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-07-25 00:37:49,161] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-07-25 00:37:49,180] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-07-25 00:38:05,103] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 230\r\n[2023-07-25 00:38:08,243] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 231\r\n[2023-07-25 00:38:08,243] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 232\r\n[2023-07-25 00:38:11,500] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 
233\r\n```",
"Second thought: how can I get away with load the best model using Trainer and implement it outside of Trainer? like this line in clm.py https://github.com/philschmid/huggingface-llama-2-samples/blob/18838c203285e7eefa2169e5413db4b8e8013a02/training/scripts/run_clm.py#L238",
"Hi @pacman100 gentle bump on above issue to see if there is anything I can provide to let you better root cause. THX a lot!",
"> Hello, see this issue: https://github.com/huggingface/accelerate/issues/1707\n\nAs mentioned, this is the issue and isn't related to DeepSpeed integration. Please follow up with the DeepSpeed team"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
Hi Community!
I am using `run_clm.py` with `deepspeed` to fine-tune Llama 7B on a `g5.12xlarge` EC2 instance (4 GPUs, 96 GB total GPU memory, 48 vCPUs with 192 GB RAM).
* Transformer version: 4.28.1
* DeepSpeed version: 0.10.0 (lastest)
* Instance: `g5.12xlarge` EC2 instance (4 GPUs, 96 GB total GPU memory, 48 vCPUs with 192 GB RAM).
* DeepSpeed config file: [ds_config.pdf](https://github.com/huggingface/transformers/files/12140984/ds_config.pdf)
* Invoking command: `cmd = /opt/conda/bin/python3.10 -u -m deepspeed.launcher.launch --world_info=<OMIT_AS_NON_IMPORTANT> --master_addr=<OMIT_AS_NON_IMPORTANT> --master_port=<OMIT_AS_NON_IMPORTANT>--enable_each_rank_log=None run_clm.py --deepspeed ds_config.json --model_name_or_path /tmp --train_file /opt/ml/input/data/train --do_train --output_dir /opt/ml/model --num_train_epochs 1 --gradient_accumulation_steps 4 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_steps 10 --warmup_ratio 0.1 --learning_rate 6e-06 --weight_decay 0.2 --seed 10 --max_input_length -1 --validation_split_ratio 0.1 --train_data_split_seed 0 --max_steps -1 --early_stopping_patience 3 --early_stopping_threshold 0.0 --adam_beta1 0.9 --adam_beta2 0.999 --max_grad_norm 1.0 --label_smoothing_factor 0.0 --logging_strategy steps --save_strategy steps --save_steps 10 --dataloader_num_workers 0 --lr_scheduler_type constant_with_warmup --warmup_steps 0 --evaluation_strategy steps --eval_steps 10 --bf16 --instruction_tuned --gradient_checkpointing --save_total_limit 1`
The model trains successfully until the very last step of loading the best model. Because the `load_best_model_at_end` argument is True, the run goes OOM when trainer.py uses the DeepSpeed engine to load the model.
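
For evaluation/inference I only need the model weights, not the optimizer or LR scheduler states, so the workaround I have in mind is to turn off `load_best_model_at_end` and reload the best checkpoint myself after training. Below is a minimal, unverified sketch of that idea; the checkpoint path is just the one from my run, and it assumes `metric_for_best_model="loss"` stays set so `trainer.state.best_model_checkpoint` is still tracked:

```python
# Minimal sketch (unverified workaround, not a fix for the underlying bug):
# train with load_best_model_at_end=False so the DeepSpeed engine never has to
# re-initialize for the reload, then load only the best checkpoint's weights
# (the optimizer/scheduler states under global_step*/ are never read).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

train_result = trainer.train()

# best_model_checkpoint is tracked as long as metric_for_best_model is set;
# the literal path below is only a fallback taken from my run.
best_ckpt = trainer.state.best_model_checkpoint or "/opt/ml/model/checkpoint-10"

model = AutoModelForCausalLM.from_pretrained(best_ckpt, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(best_ckpt)
model.eval()
```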
### Who can help?
@sgugger
@pacman100
@ArthurZucker and @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
During the best-model loading stage, I noticed that the DeepSpeed initialization log, which normally appears at the beginning of training, was printed again. I then checked and found this [line](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2184) in trainer.py, so I suspect the OOM is due to an unnecessary call to the DeepSpeed init function (as also indicated by the comment above the code).
Next, I used the latest version of Transformers, `4.31.0`, since it no longer uses DeepSpeed init to load the best model ([line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2107) and [DeepSpeed loading function](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L371)). Then I hit the Llama 2 configuration bug shown below. I don't know why, when loading the best model, this DeepSpeed [line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2107) is not triggered but [this line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2168) is.
```
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:1934] 2023-07-24 02:37:09,873 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:2093] 2023-07-24 02:37:09,889 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.4037604331970215).
[INFO|trainer.py:2093] 2023-07-24 02:37:09,889 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.4037604331970215).
Traceback (most recent call last):
File "/opt/ml/code/run_clm.py", line 229, in <module>
main()
File "/opt/ml/code/run_clm.py", line 178, in main
train_result = trainer.train() # load model/optimizer/scheduler states
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1944, in _inner_training_loop
self._load_best_model()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2168, in _load_best_model
load_result = load_sharded_checkpoint(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 431, in load_sharded_checkpoint
model.load_state_dict(state_dict, strict=False)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
#011size mismatch for model.layers.24.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for lm_head.weight: copying a param with shape torch.Size([32004, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
[2023-07-24 02:37:16,892] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 164
[2023-07-24 02:37:20,147] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 165
Traceback (most recent call last):
File "/opt/ml/code/run_clm.py", line 229, in <module>
main()
File "/opt/ml/code/run_clm.py", line 178, in main
train_result = trainer.train() # load model/optimizer/scheduler states
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1944, in _inner_training_loop
self._load_best_model()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2168, in _load_best_model
load_result = load_sharded_checkpoint(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 431, in load_sharded_checkpoint
model.load_state_dict(state_dict, strict=False)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
#011size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([32004, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
```
Then I thought that, while I am waiting for HF to fix the configuration issue with Llama 2, I could use the [latest code for loading the best model](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L371) from `transformers 4.31.0` and apply it to the code with `transformers 4.28.1`.
Thus I disabled `load_best_model_at_end` and tried to load it after `Trainer.train()` with the following code.
```python
import glob

train_result = trainer.train()
# Pick up the single saved checkpoint (save_total_limit is set to 1).
checkpoint_dirs = sorted(glob.glob("/opt/ml/model/checkpoint-*"))
checkpoint_path = checkpoint_dirs[0]
# Reload the DeepSpeed checkpoint into the wrapped model, skipping optimizer/scheduler states.
load_path, _ = trainer.model_wrapped.load_checkpoint(
    checkpoint_path, load_optimizer_states=False, load_lr_scheduler_states=False
)
trainer.save_model()
```
I hit OOM when I set `load_optimizer_states` and `load_lr_scheduler_states` to True. Then I thought that, since the model I save is used for evaluation/inference only rather than for resuming training from the checkpoints, I don't need the optimizer and LR scheduler. However, even when I set them to False, I still hit the error.
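One hedged alternative, assuming these are DeepSpeed ZeRO checkpoints and that the consolidated fp32 weights fit in CPU memory (neither is confirmed above, and the output path below is illustrative), is to consolidate the shards on CPU with DeepSpeed's `zero_to_fp32` utility instead of calling `load_checkpoint` on the wrapped model:
```python
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Consolidate the ZeRO partitions into a single fp32 state dict on CPU,
# avoiding the GPU-side reload that can run out of memory.
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_path)

# Export the consolidated weights for later evaluation/inference.
torch.save(state_dict, "/opt/ml/model/pytorch_model.bin")
```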
Please advise what you think on this issue. THX!
### Expected behavior
I expect the best model to be loaded without OOM error as the model can be trained successfully before hitting the final saving step.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25027/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25026/events
|
https://github.com/huggingface/transformers/issues/25026
| 1,817,213,569 |
I_kwDOCUB6oc5sUHqB
| 25,026 |
load_in_8bit=True broken with new transformers
|
{
"login": "pseudotensor",
"id": 2249614,
"node_id": "MDQ6VXNlcjIyNDk2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pseudotensor",
"html_url": "https://github.com/pseudotensor",
"followers_url": "https://api.github.com/users/pseudotensor/followers",
"following_url": "https://api.github.com/users/pseudotensor/following{/other_user}",
"gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions",
"organizations_url": "https://api.github.com/users/pseudotensor/orgs",
"repos_url": "https://api.github.com/users/pseudotensor/repos",
"events_url": "https://api.github.com/users/pseudotensor/events{/privacy}",
"received_events_url": "https://api.github.com/users/pseudotensor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @pseudotensor \r\nThanks for the issue and the clean reproducer, I can confirm the issue persists on the current main brach of transformers and does not occur with #25047 - once that PR merged your issue will be fixed\r\n\r\nAlso, there is no need to specify `load_in_8bit=True` and `device_map=\"auto\"` in the `Blip2Processor.from_pretrained` method\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nimg_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'\r\nimage = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\nfrom transformers import Blip2Processor, Blip2ForConditionalGeneration\r\ngpu_id = 0\r\ndevice_map = {\"\": gpu_id}\r\nblip_model = blip_processor = 'Salesforce/blip2-flan-t5-xl'\r\nprompt = 'an image of'\r\n\r\ndevice = 'cuda'\r\nload_half = False\r\nimport torch\r\nwith torch.no_grad():\r\n context_class_cast = torch.autocast\r\n with context_class_cast(device):\r\n processor = Blip2Processor.from_pretrained(blip_processor)\r\n model = Blip2ForConditionalGeneration.from_pretrained(blip_model,\r\n load_in_8bit=True,\r\n device_map=device_map)\r\n\r\n inputs = processor(image, prompt, return_tensors=\"pt\")\r\n output = model.generate(**inputs)\r\n\r\n caption: str = processor.decode(output[0], skip_special_tokens=True)\r\n print(caption)\r\n```"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Script based upon: https://huggingface.co/Salesforce/blip2-flan-t5-xl
```python
import requests
from PIL import Image
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
from transformers import Blip2Processor, Blip2ForConditionalGeneration
gpu_id = 0
device_map = {"": gpu_id}
blip_model = blip_processor = 'Salesforce/blip2-flan-t5-xl'
prompt = 'an image of'
device = 'cuda'
load_half = False
import torch
with torch.no_grad():
context_class_cast = torch.autocast
with context_class_cast(device):
processor = Blip2Processor.from_pretrained(blip_processor,
load_in_8bit=True,
device_map=device_map)
model = Blip2ForConditionalGeneration.from_pretrained(blip_model,
load_in_8bit=True,
device_map=device_map)
inputs = processor(image, prompt, return_tensors="pt")
output = model.generate(**inputs)
caption: str = processor.decode(output[0], skip_special_tokens=True)
print(caption)
```
output:
```
/home/jon/miniconda3/envs/h2ogpt/bin/python3.10 /home/jon/h2ogpt/checkblip2_mine.py
Console output is saving to: /home/jon/h2ogpt/pycharm.log
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so...
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/jon/miniconda3/envs/h2ogpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/clang+llvm-4.0.0-x86_64-linux-gnu-ubuntu-16.04/lib'), PosixPath('/opt/rstudio-1.0.136/bin'), PosixPath('/usr/lib/jvm/default-java/jre/lib/amd64/server'), PosixPath('/home/jon/lib'), PosixPath('/usr/local/cuda/extras/CUPTI/lib64')}
warn(msg)
Loading checkpoint shards: 100%|██████████| 2/2 [00:07<00:00, 3.88s/it]
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:318: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
```
This gives an empty caption with transformers==4.31.0, but the correct caption with all older transformers versions from 4.30.2 and earlier.
### Expected behavior
With all other dependencies unchanged, just run:
```
pip install transformers==4.30.2
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting transformers==4.30.2
Downloading transformers-4.30.2-py3-none-any.whl (7.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.2/7.2 MB 55.8 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (3.12.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.16.4)
Requirement already satisfied: numpy>=1.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (1.23.5)
Requirement already satisfied: packaging>=20.0 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (2023.6.3)
Requirement already satisfied: requests in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.13.3)
Requirement already satisfied: safetensors>=0.3.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.3.1)
Requirement already satisfied: tqdm>=4.27 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (4.65.0)
Requirement already satisfied: fsspec in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.2) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.2) (4.7.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (2023.5.7)
Installing collected packages: transformers
Attempting uninstall: transformers
Found existing installation: transformers 4.31.0
Uninstalling transformers-4.31.0:
Successfully uninstalled transformers-4.31.0
Successfully installed transformers-4.30.2
(h2ogpt) jon@pseudotensor:~/h2ogpt$
```
Then re-run script and get:
```
/home/jon/miniconda3/envs/h2ogpt/bin/python3.10 /home/jon/h2ogpt/checkblip2_mine.py
Console output is saving to: /home/jon/h2ogpt/pycharm.log
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so...
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/jon/miniconda3/envs/h2ogpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/lib/jvm/default-java/jre/lib/amd64/server'), PosixPath('/opt/rstudio-1.0.136/bin'), PosixPath('/opt/clang+llvm-4.0.0-x86_64-linux-gnu-ubuntu-16.04/lib'), PosixPath('/home/jon/lib'), PosixPath('/usr/local/cuda/extras/CUPTI/lib64')}
warn(msg)
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00, 4.04s/it]
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:318: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/generation/utils.py:1353: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
a woman and her dog on the beach
```
i.e. returns the caption: `a woman and her dog on the beach`
I also tried the latest bitsandbytes==0.41.0 and same effect as with older 0.39.0.
I also tried newer accelerate==0.21.0 and has same effect as with older 0.20.3
I also tried latest transformers 4.32.0.dev0 and same effect.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25026/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25025/events
|
https://github.com/huggingface/transformers/issues/25025
| 1,817,191,483 |
I_kwDOCUB6oc5sUCQ7
| 25,025 |
KeyError: 'eval_loss' when doing LukeForMaskedLM
|
{
"login": "higopires",
"id": 66256549,
"node_id": "MDQ6VXNlcjY2MjU2NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/66256549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/higopires",
"html_url": "https://github.com/higopires",
"followers_url": "https://api.github.com/users/higopires/followers",
"following_url": "https://api.github.com/users/higopires/following{/other_user}",
"gists_url": "https://api.github.com/users/higopires/gists{/gist_id}",
"starred_url": "https://api.github.com/users/higopires/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/higopires/subscriptions",
"organizations_url": "https://api.github.com/users/higopires/orgs",
"repos_url": "https://api.github.com/users/higopires/repos",
"events_url": "https://api.github.com/users/higopires/events{/privacy}",
"received_events_url": "https://api.github.com/users/higopires/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This example might need adjustments to be used on Luke, as the model has an API that is slightly different from BERT.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: **Yes**
- Using distributed or parallel set-up in script?: **No**
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm using [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) to evaluate some models on a custom dataset (one text per line). For some models (BERT, XLM-RoBERTa, DeBERTa) I managed to run the evaluation successfully, and even with LUKE the evaluation starts, but after all the evaluation steps on [studio-ousia/mluke-base-lite](https://huggingface.co/studio-ousia/mluke-base-lite), I got the following:
```
Traceback (most recent call last):
File "/cfs/home/u021274/higo/./language-modeling/run_mlm.py", line 658, in <module>
main()
File "/cfs/home/u021274/higo/./language-modeling/run_mlm.py", line 629, in main
perplexity = math.exp(metrics["eval_loss"])
KeyError: 'eval_loss'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /cfs/home/u021274/higo/./language-modeling/run_mlm.py:658 in <module>
│
│   655
│   656
│   657 if __name__ == "__main__":
│ ❱ 658     main()
│   659
│
│ /cfs/home/u021274/higo/./language-modeling/run_mlm.py:629 in main
│
│   626         max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is n
│   627         metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
│   628         try:
│ ❱ 629             perplexity = math.exp(metrics["eval_loss"])
│   630         except OverflowError:
│   631             perplexity = float("inf")
│   632         metrics["perplexity"] = perplexity
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'eval_loss'
```
To do that, I entered the following command:
```
CUDA_VISIBLE_DEVICES=5 python3 ./language-modeling/run_mlm.py \
--model_name_or_path studio-ousia/mluke-base-lite \
--validation_file ./data/test_data.txt \
--max_seq_length 512 \
--line_by_line True \
--do_eval \
--output_dir ./other_models/mluke-base-lite \
--fp16 True
```
The configuration above worked for all of the other models mentioned before; only with LUKE am I having this issue.
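For context, the failing block in `run_mlm.py` only guards against `OverflowError`, so a model that returns no evaluation loss surfaces as a `KeyError`. A defensive variant of that block — just an illustration, not the official script — might look like:
```python
import math

eval_loss = metrics.get("eval_loss")  # None if the model produced no loss during evaluation
if eval_loss is None:
    perplexity = float("nan")         # surface the missing loss instead of crashing
else:
    try:
        perplexity = math.exp(eval_loss)
    except OverflowError:
        perplexity = float("inf")
metrics["perplexity"] = perplexity
```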
### Expected behavior
Evaluation of mluke-base-lite on a given text dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25025/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25024/events
|
https://github.com/huggingface/transformers/issues/25024
| 1,817,158,547 |
I_kwDOCUB6oc5sT6OT
| 25,024 |
Weights of BlipModel are not initialized from the model checkpoint
|
{
"login": "Vibhu04",
"id": 29009031,
"node_id": "MDQ6VXNlcjI5MDA5MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29009031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vibhu04",
"html_url": "https://github.com/Vibhu04",
"followers_url": "https://api.github.com/users/Vibhu04/followers",
"following_url": "https://api.github.com/users/Vibhu04/following{/other_user}",
"gists_url": "https://api.github.com/users/Vibhu04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vibhu04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vibhu04/subscriptions",
"organizations_url": "https://api.github.com/users/Vibhu04/orgs",
"repos_url": "https://api.github.com/users/Vibhu04/repos",
"events_url": "https://api.github.com/users/Vibhu04/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vibhu04/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Also cc @ydshieh who was just discussing this internally :-)",
"Hi @Vibhu04 \r\nThanks for the issue, \r\nindeed there is a problem with `BlipModel` classes. Note that BlipModel would stand for the \"pre-trained\" versions of Blip to extract raw logits / hidden states from text and vision input. That class has been copied from CLIPModel class and needs a careful refactoring to be able to reproduce the correct pre-trained Blip models: https://github.com/salesforce/BLIP/blob/main/models/blip_pretrain.py#L112-L136 .\r\nEven after the refactoring one would need to convert the pre-trained BLIP weights as they are different from existing weights on the Hub + they contain additional modules.\r\nI can put that on my TODO but cannot give an accurate ETA, for now if you want to use Blip as a model to retrieve hidden states and logits, I would advise you to use `BlipForConditionalGeneration`",
"Hi @younesbelkada, thanks a lot for your prompt reply. I actually want to compute the image-text similarity score given an input image and a text, and I was hoping I could use `BlipModel` for that. Would there be a way of achieving this using `BlipForConditionalGeneration`? If not, is there any other `Blip` model class that I could use for this purpose? \r\nThanks a lot. ",
"Thanks for your reply @Vibhu04 \r\nFor computing image and text similarity score, I would advise you to use the ITM (image text matching) models: https://huggingface.co/Salesforce/blip-itm-base-coco \r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import BlipProcessor, BlipForImageTextRetrieval\r\n\r\nprocessor = BlipProcessor.from_pretrained(\"Salesforce/blip-itm-base-coco\")\r\nmodel = BlipForImageTextRetrieval.from_pretrained(\"Salesforce/blip-itm-base-coco\")\r\n\r\nimg_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' \r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\n\r\nquestion = \"A woman and a dog sitting together in a beach.\"\r\ninputs = processor(raw_image, question, return_tensors=\"pt\")\r\n\r\nitm_scores = model(**inputs)[0]\r\ncosine_score = model(**inputs, use_itm_head=False)[0]\r\n```",
"Hi @younesbelkada, thank you so much. If I may, I just have one last question: is there a lighter variant (i.e. fewer parameters) of the model that you mentioned? Thanks a lot.",
"Hi @Vibhu04 \r\nThanks a lot, hm, to the best of my knowledge the smallest model of that kind is: https://huggingface.co/Salesforce/blip-itm-base-coco - however you can run them in half-precision to reduce their memory footprint by 2:\r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import BlipProcessor, BlipForImageTextRetrieval\r\n\r\nprocessor = BlipProcessor.from_pretrained(\"Salesforce/blip-itm-base-coco\")\r\nmodel = BlipForImageTextRetrieval.from_pretrained(\"Salesforce/blip-itm-base-coco\", torch_dtype=torch.bfloat16)\r\n\r\nimg_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' \r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\n\r\nquestion = \"A woman and a dog sitting together in a beach.\"\r\ninputs = processor(raw_image, question, return_tensors=\"pt\").to(torch.bfloat16)\r\n\r\nitm_scores = model(**inputs)[0]\r\ncosine_score = model(**inputs, use_itm_head=False)[0]\r\n```",
"Thank you so much for your help @younesbelkada! "
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-76-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.15
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada @ArthurZucker @amyeroberts @ydshieh
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from PIL import Image
import requests
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)
```
### Expected behavior
The code snippet is an example from https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipProcessor.
The warning that I get is:
Some weights of BlipModel were not initialized from the model checkpoint at Salesforce/blip-image-captioning-base and are newly initialized: ['text_model.encoder.layer.10.crossattention.output.dense.weight', 'text_model.encoder.layer.4.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.intermediate.dense.bias', 'text_model.encoder.layer.1.attention.self.value.bias', 'text_model.encoder.layer.5.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.attention.output.dense.bias', 'text_model.encoder.layer.1.crossattention.self.key.weight', 'text_model.encoder.layer.5.crossattention.self.key.bias', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.attention.self.value.weight', 'text_model.encoder.layer.8.attention.self.key.weight', 'text_model.encoder.layer.9.crossattention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.bias', 'text_model.encoder.layer.8.output.LayerNorm.bias', ...
It seems that the model weights are being initialised anew as there's some error with loading the pre-trained weights. Please guide me in solving this issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25024/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25023/events
|
https://github.com/huggingface/transformers/pull/25023
| 1,817,055,299 |
PR_kwDOCUB6oc5WKUd6
| 25,023 |
🌐 [i18n-KO] Translated `tokenizer_summary.md` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25023). All of your documentation changes will be reflected on that endpoint.",
"๋ฆฌ๋ทฐ๋ฅผ ๋จ๊ธฐ๊ณ submit์ ์ ํ์๋ค์ .. ใ
",
"> ๋ฆฌ๋ทฐ๋ฅผ ๋จ๊ธฐ๊ณ submit์ ์ ํ์๋ค์ .. ใ
\r\n\r\nใ
ใ
ใ
ใ
ใ
ใ
๋คํํ ์ง๊ธ ๋ฒ์ญ ์์ ์ค์ด๋ผ์ ํ์ธํ์ต๋๋ค!",
"> ๋์ฐ๋ ํญ์ ์๊ธฐ ์ฌ์ด ๋ง๋ก ๋ฒ์ญํด์ฃผ์
์ ์ข์์! ๊ฒ๋ค๊ฐ ์ด๋ฒ ๋ฌธ์์์ ํ ํฌ๋์ด์ ๋ฅผ ์ญ ๋๋ฌ๋ณผ ์ ์์ด์ ์ ์ตํ์ต๋๋ค ๐\r\n> \r\n> ๋ฆฌ๋ทฐ ํ๋ฉด์ glossary ๊ด๋ จํ ์์ ์ ์์ ๋ช ๊ฐ์ง ๋๋ ธ์ต๋๋ค. ์ฐธ๊ณ ๋ถํ ๋๋ฆฝ๋๋ค!\r\n\r\n์ ๊ฐ ๋ฒ์ญ์ ์ค๋๋ง์ ํด์ ๊ทธ๋ฐ์ง glossary ๊ด๋ จ ์์ ์ฌํญ์ด ๋ง๊ตฐ์.. ๊ผผ๊ผผํ ๋ฆฌ๋ทฐ ๊ฐ์ฌํฉ๋๋ค!!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `<tokenizer_summary>.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing on the Pseudo Lab (가짜연구소) repo, please remove it. Thank you! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (translation omission/duplication check)
- [x] Grammar Check (spell check)
- [x] Review or Add new terms to glossary (check and add terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (verify that the live-preview renders correctly)
## Who can review? (Initial)
<!-- 1. Only after all the checkboxes above are complete, reveal the comment below to request a review from the Pseudo Lab team members! -->
@sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with the Pseudo Lab team members is finished, reveal the comment below to request a review from the Hugging Face staff! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25023/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25023",
"html_url": "https://github.com/huggingface/transformers/pull/25023",
"diff_url": "https://github.com/huggingface/transformers/pull/25023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25023.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25022/events
|
https://github.com/huggingface/transformers/issues/25022
| 1,817,054,171 |
I_kwDOCUB6oc5sTgvb
| 25,022 |
Incorrect padding_side Setting as 'left' in Llama Family Model
|
{
"login": "voidful",
"id": 10904842,
"node_id": "MDQ6VXNlcjEwOTA0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voidful",
"html_url": "https://github.com/voidful",
"followers_url": "https://api.github.com/users/voidful/followers",
"following_url": "https://api.github.com/users/voidful/following{/other_user}",
"gists_url": "https://api.github.com/users/voidful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/voidful/subscriptions",
"organizations_url": "https://api.github.com/users/voidful/orgs",
"repos_url": "https://api.github.com/users/voidful/repos",
"events_url": "https://api.github.com/users/voidful/events{/privacy}",
"received_events_url": "https://api.github.com/users/voidful/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Indeed, as it was written in the documentation a padding token is required. Seems that by default the padding side is set to `left`. We cannot update the `tokenization` file (for backward compatibility reasons) but we can update the tokenizers online to make sure they use `padding_side = right` by default. ",
"> Hey! Indeed, as it was written in the documentation a padding token is required. Seems that by default the padding side is set to `left`. We cannot update the `tokenization` file (for backward compatibility reasons) but we can update the tokenizers online to make sure they use `padding_side = right` by default.\r\n\r\nGreat, I would be nice to update the default padding_side of those model.",
"There does not seem to be any documentation regarding what the correct padding_side should be for CodeLLAMA family. Is there a way to find this out ? @ArthurZucker I also opened a related issue [here](https://github.com/huggingface/transformers/issues/26072).",
"CodeLlama is Llama family so same padding side. I answered on your issue ๐ค "
] | 1,690 | 1,695 | 1,690 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
### Who can help?
text models: @ArthurZucker and @younesbelkada generate: @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When utilizing the Llama Family Model for batch generation, an issue arises due to the lack of a padding token. To clarify, the original model uses pad_id = -1, implying the absence of a padding token. This logic is infeasible for our scenario.
Here is our proposed solution:
Firstly, a padding token should be added using the command tokenizer.add_special_tokens({"pad_token":"<pad>"}), following which the token embedding must be resized accordingly. It's essential to also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx). This ensures that encoding the padding token outputs zeros. Therefore, passing it during initialization is recommended.
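A minimal sketch of the steps described above (the checkpoint name and the `<pad>` token string are illustrative assumptions, and here the config is updated after loading rather than at initialization):
```python
from transformers import AutoTokenizer, LlamaForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

# Add an explicit padding token and resize the embeddings to match.
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id

# Pad on the right for batch generation, as this issue suggests.
tokenizer.padding_side = "right"
```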
### Expected behavior
Another important aspect is setting the padding_side to 'right'. This is crucial for correct padding direction.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25022/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25021/events
|
https://github.com/huggingface/transformers/issues/25021
| 1,817,030,907 |
I_kwDOCUB6oc5sTbD7
| 25,021 |
fp16 DDP training in 4.31.0
|
{
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Similar problem here. When using fp16 to train Llama2 with LoRA, always nan. Bf16 works well, but official Llama2 uses fp16.",
"Oh, I understand! It doesn't use the cuda.amp but uses accelerator to automatically handle the loss scale, right?\r\n\r\nAnd the reason why fp16's gradient is large is that they're scaled gradients. \r\n\r\nHowever, it is still strange that gradients in the optimizer are fp32. Are they designed so for scaling? I'm sorry that I'm not very familiar with accelerator",
"This is **mixed precision** training. The gradients are computed in float16 but converted to float32 to do the optimizer update as we can't do the update in low precision. As for properly computing the gradient norm, you need to unscale the gradients first to compare them to a training without mixed precision (or bfloat16 as the training in bfloat16 does not require gradient scaling).",
"> This is **mixed precision** training. The gradients are computed in float16 but converted to float32 to do the optimizer update as we can't do the update in low precision. As for properly computing the gradient norm, you need to unscale the gradients first to compare them to a training without mixed precision (or bfloat16 as the training in bfloat16 does not require gradient scaling).\r\n\r\nMany thanks for your answering. If I use --fp16 during training, what other arguments should I add for this? I am confused that using --fp16 to tune Llama2 always meets nan, but --bf16 works well.",
"I also observed the same thing that mixed precision training of llama-7b is very frequently resulting in nan losses. The issue does not exist for 13b for me. As you say, bfloat16 is more stable. I dont think there is anything wrong in the code base, rather some strange peculiarity with the 7b weights? Curious if someone has some insights on that.",
"> I also observed the same thing that mixed precision training of llama-7b is very frequently resulting in nan losses. The issue does not exist for 13b for me. As you say, bfloat16 is more stable. I dont think there is anything wrong in the code base, rather some strange peculiarity with the 7b weights? Curious if someone has some insights on that.\r\n\r\nThat's interesting findings. I haven't tried the 13b llama-2 yet.",
"I meet a more confused thing! When I torchrun/deepspeed fp16 train glm or baichuan with 1 or 2 gpus, the loss is ok. But when i use more than 2 gpus, like 3, the loss will overflow until fail! My gpus is V100, and i have try different version of transformers\\torch\\deepspeed",
"> I meet a more confused thing! When I torchrun/deepspeed fp16 train glm or baichuan with 1 or 2 gpus, the loss is ok. But when i use more than 2 gpus, like 3, the loss will overflow until fail! My gpus is V100, and i have try different version of transformers\\torch\\deepspeed\r\n\r\nI observed the similar results. When using more gpus or larger gradient accumulatio steps, the result doesn't become better (as expected) but often becomes worse (using fp16 in v100)",
"so why? what is the difference between 2GPUs and 3(or more) GPUS when do training that causes the unexpected result. ps: I used to run the same train code in A100 with AMP fp16 which is ok.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
### System Info
pytorch 1.13.1
transformers==4.31.0
### Who can help?
Hi @sgugger ,
I used version 4.31.0 to train a Llama model with LoRA. I observed some problems with --fp16 training and I'm not sure if it is a bug in Trainer.py:
My model is like:
```
class MyModel(nn.Module):
def __init__(self, model_name):
super().__init__()
self.model_name = model_name
self.base_model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
self.base_model = get_peft_model(self.base_model, lora_config)
self.other_modules = nn.Linear(4096, 4096)
```
I used the Trainer to train the model with the following command line:
`torchrun --nproc_per_node=4 main.py --max_steps 100000 --fp16
`
I find that the model's gradients (in self.optimizer in the Trainer) are not fp16 but fp32. Is that correct?
Also, I find that no gradient scaling is performed during training, since self.do_grad_scaling is always False (because self.sharded_ddp is None and args.half_precision_backend will always be "auto"). The current trainer.py will not correctly set up args.half_precision_backend and the scaler if self.sharded_ddp is None. Are these observations expected? I'm a little confused about why setting up args.half_precision_backend and the scaler requires sharded_ddp. As a result, I've found that the loss often becomes NaN during training. I'm not sure whether that is because no gradient scaling is performed and half_precision_backend is not correctly set up.
Below are my grad_norm values (before gradient clipping) with and without --fp16. (My base model here is "JackFram/llama-160m" for debugging.) **The results are significantly different.**
Without --fp16:
step 1: grad_norm=0.059
Step 5: grad_norm=0.054
Step 10: grad_norm=0.048
Step 15: grad_norm=0.050
Step 20: grad_norm=0.050
With --fp16:
Step 1: grad_norm = nan
Step 5: grad_norm = 129.88
Step 10: grad_norm=126.98
Step 15: grad_norm=149.58
Step 20: grad_norm=80.7
```
def compute_grad_norm(optimizer): # the function to compute grad_norm
total_norm = 0.0
for group in optimizer.param_groups:
for param in group['params']:
if param.grad is not None:
param_norm = param.grad.data.norm(2)
total_norm += param_norm.item() ** 2
total_norm = torch.sqrt(torch.tensor(total_norm))
return total_norm
```
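For reference, here is a minimal standalone AMP loop (independent of the Trainer; `model`, `dataloader` and `optimizer` are assumed to already exist) showing where gradients would be unscaled before measuring their norm, in line with the explanation in the comments above:
```python
import torch

scaler = torch.cuda.amp.GradScaler()

for batch in dataloader:
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(**batch).loss
    scaler.scale(loss).backward()             # gradients are now *scaled* values
    scaler.unscale_(optimizer)                # undo the loss scale before inspecting the gradients
    grad_norm = compute_grad_norm(optimizer)  # now comparable to a full-precision run
    scaler.step(optimizer)                    # skips the update if any gradient is inf/NaN
    scaler.update()
```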
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Expected behavior
do_grad_scaling should be True when --fp16 is enabled, and the loss should rarely become NaN.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25021/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/25021/timeline
|
completed
| null | null |