Dataset schema (one record per GitHub issue or pull request in `huggingface/transformers`):

| Column | Type |
|---|---|
| url | string (62–66 chars) |
| repository_url | string (1 distinct value) |
| labels_url | string (76–80 chars) |
| comments_url | string (71–75 chars) |
| events_url | string (69–73 chars) |
| html_url | string (50–56 chars) |
| id | int64 (377M–2.15B) |
| node_id | string (18–32 chars) |
| number | int64 (1–29.2k) |
| title | string (1–487 chars) |
| user | dict |
| labels | list |
| state | string (2 distinct values) |
| locked | bool (2 distinct values) |
| assignee | dict |
| assignees | list |
| comments | list |
| created_at | int64 (1.54k–1.71k) |
| updated_at | int64 (1.54k–1.71k) |
| closed_at | int64 (1.54k–1.71k), nullable |
| author_association | string (4 distinct values) |
| active_lock_reason | string (2 distinct values) |
| body | string (0–234k chars), nullable |
| reactions | dict |
| timeline_url | string (71–75 chars) |
| state_reason | string (3 distinct values) |
| draft | bool (2 distinct values) |
| pull_request | dict |
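As a quick orientation for working with these records, here is a minimal sketch of iterating them with the `datasets` library; the Hub id below is a placeholder, since the actual dataset name is not given here.

```python
# Hedged sketch of iterating rows with the schema above; the Hub id below is a
# placeholder, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("your-namespace/transformers-github-issues", split="train")

for row in ds.select(range(3)):
    # Each row is one issue or pull request from huggingface/transformers.
    print(row["number"], row["state"], row["title"])
    if row["pull_request"] is not None:
        print("  merged_at:", row["pull_request"]["merged_at"])
```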
https://api.github.com/repos/huggingface/transformers/issues/24418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24418/events
|
https://github.com/huggingface/transformers/issues/24418
| 1,769,164,033 |
I_kwDOCUB6oc5pc00B
| 24,418 |
Review added
|
{
"login": "repo-reviews",
"id": 135327276,
"node_id": "U_kgDOCBDuLA",
"avatar_url": "https://avatars.githubusercontent.com/u/135327276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/repo-reviews",
"html_url": "https://github.com/repo-reviews",
"followers_url": "https://api.github.com/users/repo-reviews/followers",
"following_url": "https://api.github.com/users/repo-reviews/following{/other_user}",
"gists_url": "https://api.github.com/users/repo-reviews/gists{/gist_id}",
"starred_url": "https://api.github.com/users/repo-reviews/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/repo-reviews/subscriptions",
"organizations_url": "https://api.github.com/users/repo-reviews/orgs",
"repos_url": "https://api.github.com/users/repo-reviews/repos",
"events_url": "https://api.github.com/users/repo-reviews/events{/privacy}",
"received_events_url": "https://api.github.com/users/repo-reviews/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### Thank you for building **transformers**!
@AuroraCoboAguilera created a review titled:
*Hugging face, the accelerator to develop your own NLP*
on [repo-reviews.github.io](https://repo-reviews.github.io) to share their experience using **transformers**.
[link to review](https://repo-reviews.github.io//reviews/2023-06-21_AuroraCoboAguilera_huggingface_transformers)
If you would like to help your super-users share their experiences using your repo, add a [badge](https://github.com/repo-reviews/repo-reviews.github.io#add-badges) to your README.md.
We hope that sharing these experiences helps your users **increase their productivity**.
--
Please be kind,
I’m a human!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24418/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24417/events
|
https://github.com/huggingface/transformers/pull/24417
| 1,769,121,791 |
PR_kwDOCUB6oc5ToHMm
| 24,417 |
Skip `test_conditional_generation_pt_pix2struct` in Past CI (torch < 1.11)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Same as in #24270, but for a test inside the pipeline tests in `ImageToTextPipelineTests`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24417/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24417",
"html_url": "https://github.com/huggingface/transformers/pull/24417",
"diff_url": "https://github.com/huggingface/transformers/pull/24417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24417.patch",
"merged_at": 1687440853000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24416/events
|
https://github.com/huggingface/transformers/pull/24416
| 1,769,058,737 |
PR_kwDOCUB6oc5Tn5bb
| 24,416 |
[`bnb`] Fix bnb serialization issue with new release
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24416). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the failing bnb tests in: https://github.com/huggingface/transformers/actions/runs/5329457026/jobs/9655258851
The recent release of bitsandbytes slightly broke the serialization mechanism for int8 models. The new release introduces a way of serializing int8 weights that is more memory efficient and avoids OOM issues when saving, for instance, PEFT models.
https://github.com/TimDettmers/bitsandbytes/pull/503
That PR introduced a new paradigm: when saving an int8 state dict, [the state dict contains some string values](https://github.com/TimDettmers/bitsandbytes/pull/503/files#diff-4d235c7e595546c6656c229dfa139298ce6602b356c2d0bafcb2352eb2cfae79R360-R363) that store metadata about the quantized format.
Therefore the fix is to slightly adapt the `shard_checkpoint` method (which is called whether or not the model is sharded) by adding a new argument, `state_dict_contains_metadata`, that skips manipulating `weight` entries that are no longer tensors but strings. We constrain `state_dict_contains_metadata` to the int8 case only, to make sure nothing else breaks.
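As a rough illustration of the idea (a hedged sketch, not the actual `transformers` implementation of `shard_checkpoint`), the change amounts to skipping the tensor-size bookkeeping for non-tensor metadata entries:

```python
# Hedged sketch of the idea behind the fix, not the real shard_checkpoint code.
import torch

def shard_checkpoint(state_dict, max_shard_size=10 * 1024**3, state_dict_contains_metadata=False):
    """Split a state dict into shards no larger than max_shard_size bytes (simplified)."""
    shards, current_shard, current_size = [], {}, 0
    for key, value in state_dict.items():
        if state_dict_contains_metadata and not isinstance(value, torch.Tensor):
            # New bitsandbytes int8 format: some entries are strings holding
            # quantization metadata, so there is no tensor size to account for.
            current_shard[key] = value
            continue
        weight_size = value.numel() * value.element_size()
        if current_shard and current_size + weight_size > max_shard_size:
            shards.append(current_shard)
            current_shard, current_size = {}, 0
        current_shard[key] = value
        current_size += weight_size
    if current_shard:
        shards.append(current_shard)
    return shards
```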
cc @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24416/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24416",
"html_url": "https://github.com/huggingface/transformers/pull/24416",
"diff_url": "https://github.com/huggingface/transformers/pull/24416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24416.patch",
"merged_at": 1687441239000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24415/events
|
https://github.com/huggingface/transformers/pull/24415
| 1,768,989,268 |
PR_kwDOCUB6oc5TnqSi
| 24,415 |
fix the grad_acc issue at epoch boundaries
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I believe this PR also fixes #24245",
"@amyeroberts we coordinate Accelerate releases to be a day or two before `transformers`, so there shouldn't be an issue there :) \r\n\r\n(Though @pacman100 we should do the version check like we've done before with these fixes 😬 )",
"Hello,\r\n\r\nHow do i install this? I expected that PR means some kind of transformers update, in this case there should be install link such as git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9\r\n\r\n\r\n\r\n\r\n \r\n",
"Hello @Oxi84, you can install this once it gets merged via `pip install git+https://github.com/huggingface/transformers` and `pip install git+https://github.com/huggingface/accelerate`",
"Just completed a training run with this PR and can confirm that the issue didn't occur. Thanks for the fix!"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Should fix gradient accumulation at epoch boundaries when using Accelerate (seen in https://github.com/huggingface/transformers/issues/23935#issuecomment-1588134562). Requires https://github.com/huggingface/accelerate/pull/1624
Fixes # (issue)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24415/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24415",
"html_url": "https://github.com/huggingface/transformers/pull/24415",
"diff_url": "https://github.com/huggingface/transformers/pull/24415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24415.patch",
"merged_at": 1687522387000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24414/events
|
https://github.com/huggingface/transformers/issues/24414
| 1,768,977,236 |
I_kwDOCUB6oc5pcHNU
| 24,414 |
Trouble fine-tuning zero-shot image classification model
|
{
"login": "moon001light",
"id": 137312482,
"node_id": "U_kgDOCC844g",
"avatar_url": "https://avatars.githubusercontent.com/u/137312482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moon001light",
"html_url": "https://github.com/moon001light",
"followers_url": "https://api.github.com/users/moon001light/followers",
"following_url": "https://api.github.com/users/moon001light/following{/other_user}",
"gists_url": "https://api.github.com/users/moon001light/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moon001light/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moon001light/subscriptions",
"organizations_url": "https://api.github.com/users/moon001light/orgs",
"repos_url": "https://api.github.com/users/moon001light/repos",
"events_url": "https://api.github.com/users/moon001light/events{/privacy}",
"received_events_url": "https://api.github.com/users/moon001light/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @moon001light, thanks for raising this issue.\r\n\r\nModels that can be loaded with the `AutoXxx` API will have a shared input and output structure. In the example notebook, the model is loaded using `AutoModelForImageClassification`. Models loaded with `AutoModelForZeroShotImageClassification` have a different set of expected inputs, in particular, they don't accept `labels` and expect `input_ids`. Here's a guide on performing the zero-shot image classification task using transformers: https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification. \r\n\r\nAdd more example scripts is on my to-do list. I'll make sure to include this task! ",
"Thanks for the quick reply @amyeroberts , the link you gave seems to be a high level overview. Is there anything else besides replacing `labels` with `input_ids`? Could you point me to the code so that I can see the full list of expected inputs for `AutoModelForZeroShotImageClassification`? Appreciate your patience thanks :)",
"@moon001light If all you want is to train this particular checkpoint, then I would look directly at the architecture it loads, which in this [case is CLIPModel](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/clip/modeling_clip.py#L1082). \r\n\r\nMore generally, the auto groups are [defined in `modeling_auto.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py). For [AutoModelForZeroShotClassification](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/auto/modeling_auto.py#L1256C7-L1256C46) the model architectures that can be used are listed under [MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES](https://github.com/huggingface/transformers/blob/ea91c2adca842da3d2f87e094504fa7d66a7008a/src/transformers/models/auto/modeling_auto.py#L980). \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"A bit of a newbie here, after looking into it I am still not sure how to properly set `input_ids`. This is what I tried:\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\n\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n input_ids = tokenizer(examples[\"label\"], padding=\"max_length\", truncation=True)\r\n return {\"pixel_values\": pixel_values, \"input_ids\": input_ids} \r\n```\r\n\r\nBut my `input_ids` is wrong :( Not really sure what `input_ids` should be. Looking forward to that example script of yours so I can learn @amyeroberts . Appreciate any help I can get :)",
"@moon001light I suggest inspecting the objects at each step to see what they are and contain. For example, the output of \r\n\r\n```python\r\ntokenizer(examples[\"label\"], padding=\"max_length\", truncation=True)\r\n```\r\n\r\nis not `input_ids`, but rather a dictionary that contains `input_ids`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,691 | 1,691 |
NONE
| null |
### System Info
transformers 4.30.2
python 3.11.4
### Who can help?
@amyeroberts @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune a **zero-shot** image classifier, following this example for reference: https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb
What I changed from the notebook:
The checkpoint I am starting from: `laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K`
For the model I am using `AutoModelForZeroShotImageClassification` like this:
```python
model = AutoModelForZeroShotImageClassification.from_pretrained(
model_checkpoint,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes = True,
)
```
When I run `trainer.train()`, I get this error:
> TypeError: CLIPModel.forward() got an unexpected keyword argument 'labels'
### Expected behavior
With my changes to the notebook, it should fine-tune the zero-shot pre-trained model `laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K`
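The comments above boil down to feeding the model `input_ids` built from text prompts instead of `labels`. A hedged sketch of what a collate function along those lines could look like (the `id2label` mapping and the `return_loss` flag here are illustrative assumptions, not code from the notebook):

```python
# Hedged sketch based on the discussion in the comments above, not the notebook's code:
# CLIPModel.forward() expects input_ids (tokenized text prompts), not labels, and the
# tokenizer returns a dict from which input_ids must be pulled out.
import torch
from transformers import AutoTokenizer

model_checkpoint = "laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
id2label = {0: "cat", 1: "dog"}  # placeholder; the notebook builds this from the dataset

def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    texts = [id2label[example["label"]] for example in examples]
    text_inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return {
        "pixel_values": pixel_values,
        "input_ids": text_inputs["input_ids"],
        "attention_mask": text_inputs["attention_mask"],
        "return_loss": True,  # ask CLIPModel to return its contrastive loss during training
    }
```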
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24414/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24413/events
|
https://github.com/huggingface/transformers/pull/24413
| 1,768,891,339 |
PR_kwDOCUB6oc5TnVML
| 24,413 |
Create Pretrained module
|
{
"login": "Ntrystan",
"id": 95559349,
"node_id": "U_kgDOBbIetQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95559349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ntrystan",
"html_url": "https://github.com/Ntrystan",
"followers_url": "https://api.github.com/users/Ntrystan/followers",
"following_url": "https://api.github.com/users/Ntrystan/following{/other_user}",
"gists_url": "https://api.github.com/users/Ntrystan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ntrystan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ntrystan/subscriptions",
"organizations_url": "https://api.github.com/users/Ntrystan/orgs",
"repos_url": "https://api.github.com/users/Ntrystan/repos",
"events_url": "https://api.github.com/users/Ntrystan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ntrystan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,687 | 1,687 | 1,687 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24413/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24413",
"html_url": "https://github.com/huggingface/transformers/pull/24413",
"diff_url": "https://github.com/huggingface/transformers/pull/24413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24413.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24412/events
|
https://github.com/huggingface/transformers/pull/24412
| 1,768,846,875 |
PR_kwDOCUB6oc5TnL29
| 24,412 |
Removed @torch.no_grad() and in-place operations in optimizers for backwards
|
{
"login": "shirayu",
"id": 963961,
"node_id": "MDQ6VXNlcjk2Mzk2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/963961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shirayu",
"html_url": "https://github.com/shirayu",
"followers_url": "https://api.github.com/users/shirayu/followers",
"following_url": "https://api.github.com/users/shirayu/following{/other_user}",
"gists_url": "https://api.github.com/users/shirayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shirayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shirayu/subscriptions",
"organizations_url": "https://api.github.com/users/shirayu/orgs",
"repos_url": "https://api.github.com/users/shirayu/repos",
"events_url": "https://api.github.com/users/shirayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shirayu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I consider this pull request is insufficient because tests have failed.\r\nI welcome comments on the fix!",
"Transformers is primarily a library of models, not optimizers. I would recommend not using the AdamW/Adafactor from the library (which are going to be removed in the next major version) and use another implementation :-)",
"Thank you for the comment!\r\n\r\nAll right, I found an implementation `fairseq.optim.adafactor.Adafactor` and I will use it.\r\n\r\nhttps://github.com/pytorch/pytorch/issues/30446"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
In #23417, two ``@torch.no_grad()`` lines were added before
```python
def step(self, closure: Callable = None):
```
in the `AdamW` and `Adafactor` classes.
However, I think this should not be done, because it causes errors in the backward pass.
Other optimizers in PyTorch have an ``@_use_grad_for_differentiable`` decorator before `def step`.
```python
@_use_grad_for_differentiable
def step(self, closure=None):
```
- Examples
- https://github.com/pytorch/pytorch/blob/430cb3e1600e0aca742105a2cdf4a01d901955dd/torch/optim/adam.py#L122-L123
- https://github.com/pytorch/pytorch/blob/430cb3e1600e0aca742105a2cdf4a01d901955dd/torch/optim/adamw.py#L149-L150
I also replaced in-place operations with assignments so that the backward pass keeps working.
## Question
Should the following line also be replaced?
https://github.com/huggingface/transformers/blob/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/optimization.py#L728
## Context
I faced this problem when using PyTorch Lightning 2.0.3 with `transformers.optimization.Adafactor` as an optimizer.
With 3cf01b206 (one previous commit), this error did not occur.
```txt
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 225, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 114, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/transformers/optimization.py", line 649, in step
loss = closure()
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 101, in _wrap_closure
closure_result = closure()
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 140, in __call__
self._result = self.closure(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 135, in closure
self._backward_fn(step_output.closure_loss)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 232, in backward_fn
call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 287, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 200, in backward
self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 67, in backward
model.backward(tensor, *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1046, in backward
loss.backward(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/path/to/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Epoch 0: 0%| | 0/2852 [00:01<?, ?it/s]
```
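To make the failure mode concrete, here is a small self-contained illustration (not library code) of why wrapping `step(closure)` in `torch.no_grad()` breaks closure-based loops like the PyTorch Lightning one in the traceback above:

```python
# Minimal, self-contained illustration: when step(closure) runs under no_grad,
# the closure's forward pass produces a loss without a grad_fn, so backward() fails.
import torch

param = torch.nn.Parameter(torch.ones(3))

def closure():
    loss = (param * 2).sum()
    loss.backward()  # fails if the forward above ran under no_grad
    return loss

@torch.no_grad()
def step_with_no_grad(closure):
    return closure()

try:
    step_with_no_grad(closure)
except RuntimeError as e:
    print("RuntimeError:", e)  # element 0 of tensors does not require grad ...
```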
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
- PyTorch: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24412/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24412",
"html_url": "https://github.com/huggingface/transformers/pull/24412",
"diff_url": "https://github.com/huggingface/transformers/pull/24412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24412.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24411/events
|
https://github.com/huggingface/transformers/issues/24411
| 1,768,666,636 |
I_kwDOCUB6oc5pa7YM
| 24,411 |
GPTJForCausalLM with instruction provided on tutorial doesn't load on 4090
|
{
"login": "km5ar",
"id": 54015474,
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/km5ar",
"html_url": "https://github.com/km5ar",
"followers_url": "https://api.github.com/users/km5ar/followers",
"following_url": "https://api.github.com/users/km5ar/following{/other_user}",
"gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/km5ar/subscriptions",
"organizations_url": "https://api.github.com/users/km5ar/orgs",
"repos_url": "https://api.github.com/users/km5ar/repos",
"events_url": "https://api.github.com/users/km5ar/events{/privacy}",
"received_events_url": "https://api.github.com/users/km5ar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @km5ar \r\nThanks for the issue\r\n\r\nThis is because google colab instances have a relatively low CPU RAM (<24GB) and the GPT-J model stored on the Hub at `EleutherAI/gpt-j-6b` are actually in float32 (24GB). Therefore from_pretrained will try to download that large file and crashed. Moreover, for large models it is recommended to load a model either with `low_cpu_mem_usage=True` or `device_map=\"auto\"` as by default from_pretrained will initialize a random model with the same number of paramaters then try to populate the model with the dowloaded weights. Using `low_cpu_mem_usage=True` will avoid that step and not create a dummy random model at the beginning of the from_pretrained call.\r\n\r\nTo run gpt-j 6B on google colab consider using this repo: [`ybelkada/gpt-j-6b-sharded-bf16`](https://huggingface.co/ybelkada/gpt-j-6b-sharded-bf16) if you want to load the model in bf16 (by passing `torch_dtype=torch.bfloat16` ) or this repo: [`philschmid/gpt-j-6B-fp16-sharded`](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded) if you want to run the model in `float16` (by passing `torch_dtype=torch.float16`).",
"@younesbelkada \r\nThanks for the answer\r\n\r\nHowever, my question is \r\nif you see code, I did use torch.float16.\r\nand please see the following screenshot,\r\nI was copy the code from the tutorial from official guide, which clearly said \"The model should fit on 16GB GPU for inference.\"\r\n\r\nI was using a 4090 in my local machine.\r\n\r\n\r\n\r\n",
"@km5ar \r\nThanks ! \r\nThe statement \"The model should fit on 16GB GPU\" is still true. As explained above, the culprit is that `low_cpu_mem_uage` is set by default to `False` therefore blows up the Google Colab's CPU memory that is relatively low for that model due to the reasons I have detailed. Also, loading a checkpoint that is sharded helps to not getting those errors as the shards are processed one by one and deleted afterwards. \r\n\r\n<img width=\"560\" alt=\"Screenshot 2023-06-22 at 14 06 53\" src=\"https://github.com/huggingface/transformers/assets/49240599/d919490b-e915-431b-8d64-08c78762adb9\">\r\n\r\nAs you can see from the screen shot above, `low_cpu_mem_usage=False` combined with that checkpoint will force the program to allocate 12+12=24GB CPU memory before moving the weights on GPU, hence your error.\r\n\r\nCan you try to call `from_pretrained` with `low_cpu_mem_usage=True`, and use [philschmid/gpt-j-6B-fp16-sharded](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded) instead of the original repository?\r\n\r\nThanks",
"Thank you!!! that's very helpful!"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
GPU 4090
- `transformers` version: 4.30.2
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): GPU 2.0.1+cu117 (True)
- Using GPU in script?: <4090>
- Using distributed or parallel set-up in script?: <no>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction



### Expected behavior
The official tutorial says the model should fit on a 16 GB GPU for inference, but the 4090 has 24 GB of VRAM.
The kernel dies.
Can anyone help me out?
https://huggingface.co/docs/transformers/model_doc/gptj
error:
Kernel Restarting
The kernel for gpt-j-6b/1.ipynb appears to have died. It will restart automatically.
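Following the suggestions in the comments above, a hedged sketch of a loading call that avoids the CPU-RAM blow-up (a sharded fp16 checkpoint plus `low_cpu_mem_usage=True`); adapt the checkpoint and dtype to your setup:

```python
# Hedged sketch following the comments above: load a sharded fp16 GPT-J checkpoint
# with low_cpu_mem_usage=True so that from_pretrained does not first materialize a
# full random model in CPU RAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "philschmid/gpt-j-6B-fp16-sharded"  # sharded fp16 mirror of EleutherAI/gpt-j-6b
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")
```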
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24411/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24410/events
|
https://github.com/huggingface/transformers/issues/24410
| 1,768,659,684 |
I_kwDOCUB6oc5pa5rk
| 24,410 |
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3!
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You cannot use `device_map=\"auto\"` with the `to` method afterward as you do. The model will be split up on the GPUs already.\r\nAlso, how are you launching your training script after?",
"Getting the same error when i remove the `to` method. Traceback given below. Also, I launch the training script using `CUDA_VISIBLE_DEVICES=0,1,2,3 python training.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/root/Custom-LLM/new_train.py\", line 91, in <module>\r\n main()\r\n File \"/root/Custom-LLM/new_train.py\", line 88, in main\r\n trainer.train()\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py\", line 1645, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py\", line 1938, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py\", line 2759, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py\", line 2784, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py\", line 932, in forward\r\n loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/loss.py\", line 1174, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/functional.py\", line 3029, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)\r\n 0%| | 0/17409 [00:02<?, ?it/s]\r\n```",
"Ah, the problem lies in the custom code of this model. You need to move the `labels` to the device of the logits [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/main/modeling_codet5p.py#L930) by adding `labels = labels.to(logits.device)`. Your logits are on the last GPU but your labels are still on the first one.",
"I made a [PR](https://huggingface.co/Salesforce/instructcodet5p-16b/discussions/4) with this suggestion on the repo. You can check it out locally by adding the `revision=\"pr_4\"` argument when loading the model.",
"Thanks a lot! That solved the issue",
"I have a similar error but I could not figure out the reason.\r\nI am using a pre-trained ESM-V2 models from huggingface using QLoRa technique.\r\nHere is my encoder:\r\n\r\n```python\r\n def __init__(self):\r\n # QLoRa fine-tuning:\r\n quantization_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_compute_dtype=torch.float16\r\n )\r\n self.model = EsmModel.from_pretrained(model_name, quantization_config=quantization_config)\r\n self.model = prepare_model_for_kbit_training(self.model,\r\n use_gradient_checkpointing=False)\r\n\r\n config = LoraConfig(\r\n r=8,\r\n lora_alpha=32,\r\n target_modules=[\r\n \"query\", \"key\", \"value\",\r\n \"dense\"\r\n ],\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n )\r\n self.model = get_peft_model(self.model, config)\r\n .\r\n .\r\n .\r\n\r\n def forward(self, x):\r\n x_sequence = {key: value for key, value in x[\"sequence\"].items()}\r\n features = self.model(**x_sequence)\r\n .\r\n .\r\n .\r\n```\r\n\r\nHere is my decoder forward code:\r\n```python\r\n def forward(self, encoder_out, target_input):\r\n tgt_mask, tgt_padding_mask = create_mask(target_input, self.pad_idx, self.device)\r\n tgt_embedding = self.embedding(target_input)\r\n tgt_embedding = self.decoder_pos_drop(tgt_embedding + self.decoder_pos_embed)\r\n\r\n encoder_out = self.encoder_pos_drop(encoder_out + self.encoder_pos_embed)\r\n\r\n encoder_out = encoder_out.transpose(0, 1)\r\n tgt_embedding = tgt_embedding.transpose(0, 1)\r\n\r\n preds = self.decoder(memory=encoder_out,\r\n tgt=tgt_embedding,\r\n tgt_mask=tgt_mask,\r\n tgt_key_padding_mask=tgt_padding_mask)\r\n preds = preds.transpose(0, 1)\r\n return self.output(preds)\r\n```\r\n\r\nThis is my training loop:\r\n\r\n```python\r\n accelerator = Accelerator(\r\n mixed_precision='fp16',\r\n gradient_accumulation_steps=8\r\n )\r\n\r\n net, optimizer, dataloaders_dict[\"train\"], scheduler = accelerator.prepare(\r\n net, optimizer, dataloaders_dict[\"train\"], scheduler\r\n )\r\n\r\n for i, data in enumerate(tools['train_loader']):\r\n with accelerator.accumulate(tools['net']):\r\n embeddings, task_num, sequence, target = data\r\n\r\n target_input = target[:, :-1]\r\n target_expected = target[:, 1:]\r\n\r\n batch = {\"sequence\": sequence, \"embedding\": embeddings, \"target_input\": target_input}\r\n\r\n preds = tools['net'](batch)\r\n loss = tools['loss_function'](preds.reshape(-1, preds.shape[-1]), target_expected.reshape(-1))\r\n loss = torch.mean(loss)\r\n\r\n avg_loss = accelerator.gather(loss.repeat(tools[\"train_batch_size\"])).mean()\r\n train_loss += avg_loss.item() / tools['accum_iter']\r\n\r\n accelerator.backward(loss)\r\n if accelerator.sync_gradients:\r\n accelerator.clip_grad_norm_(tools['net'].parameters(), tools['grad_clip'])\r\n\r\n tools['optimizer'].step()\r\n tools['scheduler'].step()\r\n tools['optimizer'].zero_grad()\r\n```\r\n\r\nI connected an autoregressive decoder to it to create a seq2seq model. My code works pretty well when I use one GPU, but when I set the accelerate config to use two GPUs, I got this error:\r\n \r\n**RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! 
(when checking argument for argument index in method wrapper_CUDA__index_select)**\r\n\r\nThis is my error log:\r\n```\r\nFile \"/home/mpngf/projects/JointTraining/train.py\", line 90, in train\r\n preds = tools['net'](batch, mode=0)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 1156, in forward\r\n output = self._run_ddp_forward(*inputs, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 1110, in _run_ddp_forward\r\n return module_to_run(*inputs[0], **kwargs[0]) # type: ignore[index]\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 632, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 620, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/mpngf/projects/JointTraining/model.py\", line 192, in forward\r\n encoder_out = self.encoder(batch)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/projects/JointTraining/model.py\", line 89, in forward\r\n features = self.model(**x[\"sequence\"])\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/peft/peft_model.py\", line 322, in forward\r\n return self.get_base_model()(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/transformers/models/esm/modeling_esm.py\", line 917, in forward\r\n embedding_output = self.embeddings(\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/transformers/models/esm/modeling_esm.py\", line 203, in forward\r\n inputs_embeds = self.word_embeddings(input_ids)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, 
**kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/modules/sparse.py\", line 162, in forward\r\n return F.embedding(\r\n File \"/home/mpngf/environments/joint_training/lib/python3.10/site-packages/torch/nn/functional.py\", line 2210, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)\r\n```\r\n\r\nIt seams that it is related to this line:\r\n```python\r\nfeatures = self.model(**x_sequence)\r\n```\r\n\r\n@sgugger, would you be so kind as to assist me with this matter? I would greatly appreciate anyone who can offer their expertise to help me ensure the functionality of my code across multiple GPUs. ",
"Hey 🤗 We try to keep the github issues for bugs/feature requests and not for custom code debugging 😅 \r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"\r\n[modeling_falcon.zip](https://github.com/huggingface/transformers/files/13403101/modeling_falcon.zip)\r\nI had the same problem by using falcon model for sequence classification with more the one gpu and device auto option.\r\nI added in the modelling_falcon.py on line 1402 and 1507 \"labels = labels.to(logits.device)\"\r\nand from then on it worked. I updated the transformers and accelerater too.\r\nPerhaps this helps to somebody.\r\nRegards\r\nDragan"
] | 1,687 | 1,700 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune an [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) model on some training data using a multi-GPU setup. The same code (given further below) works in a single-GPU setting (when I set `CUDA_VISIBLE_DEVICES=0`) but fails as follows in the multi-GPU setting:
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python training.py
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA SETUP: CUDA runtime path found: /root/anaconda3/envs/datachat_env/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:24<00:00, 4.92s/it]
/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
0%| | 0/17409 [00:00<?, ?it/s]You're using a CodeGenTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Traceback (most recent call last):
File "/root/Custom-LLM/training.py", line 364, in <module>
main()
File "/root/Custom-LLM/training.py", line 336, in main
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1940, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 2767, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py", line 904, in forward
encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```
Code for the above error is given below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import pandas as pd
import os
import torch
from peft import TaskType
from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments, BitsAndBytesConfig
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
from datasets import Dataset, DatasetDict
import argparse
import pickle
import json
import statistics
import ast
from copy import deepcopy

device = 'cuda'

parser = argparse.ArgumentParser(description='Options')
parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored")
parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model")
parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored")
args = parser.parse_args()

tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir)
pad_tok = 50256
token_id = "Salesforce/instructcodet5p-16b"
tokenizer = AutoTokenizer.from_pretrained(token_id)


def main():
    # huggingface hub model id
    model_id = "instructcodet5p-16b"
    if not os.path.exists(model_id):
        model_id = token_id

    # load model from the hub
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
        decoder_start_token_id=1,
        pad_token_id=pad_tok,
        device_map="auto",
    ).to(device)

    # we want to ignore tokenizer pad token in the loss
    label_pad_token_id = pad_tok

    # Data collator
    data_collator = DataCollatorForSeq2Seq(
        tokenizer,
        model=model,
        label_pad_token_id=label_pad_token_id,
        pad_to_multiple_of=8,
    )

    output_dir = args.output_dir
    training_args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=1,
        predict_with_generate=True,
        weight_decay=0.05,
        warmup_steps=100,
        fp16=False,
        learning_rate=1e-3,
        num_train_epochs=3,
        logging_dir=f"{output_dir}/logs",
        logging_strategy="epoch",
        save_strategy="no",
        report_to="tensorboard",
        push_to_hub=False,
        generation_max_length=200,
        include_inputs_for_metrics=True,
        lr_scheduler_type='cosine',
    )

    # Create Trainer instance
    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        data_collator=data_collator,
        train_dataset=tokenized_dataset["train"],
    )

    # train model
    trainer.train()


if __name__ == '__main__':
    main()
```
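For reference, the traceback points at `enc_to_dec_proj` receiving inputs from two different devices after `device_map="auto"` has sharded the checkpoint, while the script also calls `.to(device)` on the dispatched model. Below is a minimal, hedged sketch of the loading pattern without the explicit move; it is only an illustration, not a confirmed fix for this report:

```python
# Hedged sketch: with device_map="auto", accelerate already places every shard on a
# device and moves inputs inside its forward hooks, so no explicit .to(device) is made.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "Salesforce/instructcodet5p-16b",   # same checkpoint as in the script above
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    decoder_start_token_id=1,
    pad_token_id=50256,
    device_map="auto",
)  # note: no .to(device) after dispatch
```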
### Expected behavior
Expected behavior is that the model should train in a multi-GPU setting without throwing any errors. The same script works in a single-GPU setting but throws the above error in a multi-GPU setting.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24410/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24409/events
|
https://github.com/huggingface/transformers/issues/24409
| 1,768,337,059 |
I_kwDOCUB6oc5pZq6j
| 24,409 |
wandb metric argument is weird
|
{
"login": "edmcman",
"id": 1017189,
"node_id": "MDQ6VXNlcjEwMTcxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edmcman",
"html_url": "https://github.com/edmcman",
"followers_url": "https://api.github.com/users/edmcman/followers",
"following_url": "https://api.github.com/users/edmcman/following{/other_user}",
"gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edmcman/subscriptions",
"organizations_url": "https://api.github.com/users/edmcman/orgs",
"repos_url": "https://api.github.com/users/edmcman/repos",
"events_url": "https://api.github.com/users/edmcman/events{/privacy}",
"received_events_url": "https://api.github.com/users/edmcman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"Thanks @edmcman for the ping.\r\n\r\n@ayulockin @morganmcg1 in case this is relevant to the integrations team. tbh, I don't remember much about this integration but I implemented it so happy to help in case there's a blocker.",
"Thanks for the detailed issue @edmcman; we are taking a look at this. Will connect with you @AyushExel in case of blocker.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Ping\r\n\r\nOn Mon, Jul 31, 2023, 11:03 AM github-actions[bot] -\r\n***@***.*** <github.edmcman.99c9f1b9d0.notifications#\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1658554442>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZPH3FQDLIUC7DSYYRLXS7CKXANCNFSM6AAAAAAZPGOQMA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Ping\r\n\r\nOn Fri, Aug 25, 2023, 4:03 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1692940561>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZPWTBMM362D7HGYQGTXXBL3JANCNFSM6AAAAAAZPGOQMA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Ping\r\n\r\nOn Tue, Sep 19, 2023, 4:03 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1725018773>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZKLSBIWBWDK3UV6WGDX3FGVJANCNFSM6AAAAAAZPGOQMA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Hey @edmcman, apologies for the delay. I am taking a look at this. Will let you know soon.",
"No worries, I was just keeping the issue open :)",
"Boop\r\n\r\nOn Sat, Oct 14, 2023, 4:06 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1762708379>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZNRWB7DQT2CAGH3BZ3X7JBZVANCNFSM6AAAAAAZPGOQMA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Boop\r\n\r\nOn Fri, Nov 10, 2023, 3:07 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1805270715>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZLZLJX6TNBZJ2LOJG3YDXOFZAVCNFSM6AAAAAAZPGOQMCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMBVGI3TANZRGU>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Boop\r\n\r\nOn Tue, Dec 5, 2023, 3:06 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1840213033>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZJANX7W6GOKOVQQOWLYH3IZZAVCNFSM6AAAAAAZPGOQMCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNBQGIYTGMBTGM>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Boop\r\n\r\nOn Sat, Dec 30, 2023, 3:06 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1872479380>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZMEV7QFVIKKLL7VA3TYL7DP7AVCNFSM6AAAAAAZPGOQMCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNZSGQ3TSMZYGA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Bump\r\n\r\nOn Wed, Jan 24, 2024, 3:07 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1907600067>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZJIJOVQKD7V5UOB2GDYQC6NPAVCNFSM6AAAAAAZPGOQMCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMBXGYYDAMBWG4>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Boop\r\n\r\nOn Sun, Feb 18, 2024, 3:07 AM github-actions[bot] - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24409#issuecomment-1950997682>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZKVITBHJR65I2F5KTLYUGZCFAVCNFSM6AAAAAAZPGOQMCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSNJQHE4TONRYGI>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1,687 | 1,708 | null |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.7.19-050719-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <no>
### Who can help?
@AyushExel
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
There are a few bugs/oddities when specifying the `metric` for a `wandb` hyperparameter search.
1. If a `metric` is specified via the `hp_space` argument but there is not a `metric` argument to `hyperparameter_search`, the metric in the `hp_space` is ignored and changed to `eval/loss`. See: https://github.com/huggingface/transformers/blame/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/integrations.py#L497
2. If a custom `hp_space` is provided that does not define `metric` at all, and the `metric` argument is specified to `hyperparameter_search`, this code throws an exception: https://github.com/huggingface/transformers/blame/6ce6d62b6f20040129ec9831e7c4f6576402ea42/src/transformers/integrations.py#L500
### Expected behavior
1. If a `hp_space` defines a metric, use that instead of overwriting it.
2. Don't throw an exception if the `hp_space` lacks a `metric` key (see the illustrative `hp_space` sketch below).
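For illustration, a hedged sketch of a `wandb`-style `hp_space` that defines its own `metric`; the metric name, goal, and parameter ranges are made up for the example and are not taken from this report:

```python
# Hypothetical hp_space for backend="wandb"; the sweep should keep this metric
# instead of silently overwriting it with eval/loss.
def my_hp_space(trial):
    return {
        "method": "bayes",
        "metric": {"name": "eval/f1", "goal": "maximize"},
        "parameters": {
            "learning_rate": {"min": 1e-5, "max": 5e-4},
            "num_train_epochs": {"values": [2, 3, 4]},
        },
    }

# best_run = trainer.hyperparameter_search(hp_space=my_hp_space, backend="wandb", n_trials=10)
```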
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24409/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/24408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24408/events
|
https://github.com/huggingface/transformers/pull/24408
| 1,768,040,493 |
PR_kwDOCUB6oc5Tkicz
| 24,408 |
[WIP] Add Restormer
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts for information.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Restormer to HF and closes #22372
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24408/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24408",
"html_url": "https://github.com/huggingface/transformers/pull/24408",
"diff_url": "https://github.com/huggingface/transformers/pull/24408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24408.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24407/events
|
https://github.com/huggingface/transformers/pull/24407
| 1,767,996,485 |
PR_kwDOCUB6oc5TkaOv
| 24,407 |
🚨🚨 Fix group beam search
|
{
"login": "hukuda222",
"id": 21185928,
"node_id": "MDQ6VXNlcjIxMTg1OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21185928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hukuda222",
"html_url": "https://github.com/hukuda222",
"followers_url": "https://api.github.com/users/hukuda222/followers",
"following_url": "https://api.github.com/users/hukuda222/following{/other_user}",
"gists_url": "https://api.github.com/users/hukuda222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hukuda222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hukuda222/subscriptions",
"organizations_url": "https://api.github.com/users/hukuda222/orgs",
"repos_url": "https://api.github.com/users/hukuda222/repos",
"events_url": "https://api.github.com/users/hukuda222/events{/privacy}",
"received_events_url": "https://api.github.com/users/hukuda222/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"## Summary of the problem and corresponding fix (for the core maintainer and our future selves)\r\n\r\n### Problem\r\nThe generation loop in `group_beam_search` is correct, and it builds `num_beam_groups` distinct groups of sequences. However, the [`beam_scorer.finalize()` step](https://github.com/huggingface/transformers/blob/8e164c5400b7b413c7b8fb32e35132001effc970/src/transformers/generation/utils.py#L3711) was not taking `num_beam_groups` into consideration and the beam selection therein, when appending the last tokens, was free to write across groups. This should not happen at all, and it could entirely flush out the diversity in the different groups (when `num_beam_groups >= num_beams/2`), as we see in the example in the PR header.\r\n\r\n### Fix\r\nTwo different paths were possible: a) add logic to `finalize` to handle groups correctly; b) treat each group as an independent set of hypotheses. From the [paper](https://arxiv.org/pdf/1610.02424.pdf), we can read \"we divide the beam budget B into G groups and greedily optimize each group using beam search\", so option b), kindly implemented by @hukuda222, is closer to the reference. ",
"@gante \r\nThanks for the review, CI now passes, and I confirmed that `RUN_SLOW=1 py.test tests/generation/test_utils.py -vv` also passes.",
"@sgugger [this comment](https://github.com/huggingface/transformers/pull/24407#issuecomment-1605624032) summarizes the problem and the fix",
"@sgugger the breaking changes here in the generated outputs from `group_beam_search`, which are inevitable due to the bug fix. The method was underperforming (measured in log scores AND beam diversity, which is the point of the method) before these changes.\r\n\r\nSince it is a bug fix, there is no need to ensure retro compatibility, correct?"
] | 1,687 | 1,688 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Diverse beam search is a method that generates `num_beams//num_beam_groups` sentences for each group independently. However, the current code uses one BeamHypotheses shared by all groups. Therefore, group A will generate two sentences before group B outputs a sentence. So, I created BeamHypotheses for each group so that inferences can be made independently.
Changes are as follows.
Inference code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration."
outputs = model.generate(
    tokenizer.encode(text, return_tensors="pt", max_length=512),
    num_beam_groups=2,
    num_beams=2,
    diversity_penalty=1000000.0,
    num_return_sequences=2,
)
print("\n".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)))
```
before:
```
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences..
```
after:
```
The study of the activity of the brain's encoders and decoders has revealed a range of different models of how the brain processes information.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24369
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24407/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24407",
"html_url": "https://github.com/huggingface/transformers/pull/24407",
"diff_url": "https://github.com/huggingface/transformers/pull/24407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24407.patch",
"merged_at": 1687858991000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24406/events
|
https://github.com/huggingface/transformers/issues/24406
| 1,767,926,667 |
I_kwDOCUB6oc5pYGuL
| 24,406 |
Potential Memory Leakage during inference using DistilBert/Bert
|
{
"login": "TOP-RX",
"id": 103393767,
"node_id": "U_kgDOBimp5w",
"avatar_url": "https://avatars.githubusercontent.com/u/103393767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TOP-RX",
"html_url": "https://github.com/TOP-RX",
"followers_url": "https://api.github.com/users/TOP-RX/followers",
"following_url": "https://api.github.com/users/TOP-RX/following{/other_user}",
"gists_url": "https://api.github.com/users/TOP-RX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TOP-RX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TOP-RX/subscriptions",
"organizations_url": "https://api.github.com/users/TOP-RX/orgs",
"repos_url": "https://api.github.com/users/TOP-RX/repos",
"events_url": "https://api.github.com/users/TOP-RX/events{/privacy}",
"received_events_url": "https://api.github.com/users/TOP-RX/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker and @younesbelkada ",
"Hi @TOP-RX \r\nCan you try to add the `torch.cuda_empty_cache()` before `\"final\"`? Also you might need to combine it with \r\n```python\r\nimport gc\r\n\r\ngc.collect()\r\n```\r\nAfter that call. See this comment from a previous issue for reference: https://github.com/huggingface/transformers/issues/21094#issuecomment-1396951333",
"Hello @sgugger @younesbelkada @ArthurZucker,\r\n\r\nThanks for your reply! I tried to include the `torch.cuda.empty_cache()` before \"final\" and also include `gc.collect()` whether before or after `torch.cuda.empty_cache()`, the issue still happened. And I also use `print(torch.cuda.memory_allocated(\"cuda:4\")/ 1024 / 1024)` to check the allocated memory:\r\n\r\n```\r\n @torch.no_grad()\r\n def inference(self, hidden_state, mask, id):\r\n self.eval()\r\n print(\"begin_max\", torch.cuda.max_memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n print(\"begin\", torch.cuda.memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)\r\n print(\"middle_max\", torch.cuda.max_memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n print(\"middle\", torch.cuda.memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n\r\n gc.collect()\r\n \r\n torch.cuda.empty_cache()\r\n\r\n gc.collect()\r\n\r\n hidden_state = distilbert_output[0] \r\n pooled_output = hidden_state[:, 0] \r\n x = pooled_output\r\n x = F.dropout(x, p=args.dropout, training=self.training)\r\n\r\n del hidden_state, pooled_output\r\n \r\n print(\"final_max\", torch.cuda.max_memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n print(\"final\", torch.cuda.memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n for i, lin in enumerate(self.lins[:-1]):\r\n x = lin(x)\r\n #x = self.bns[i](x)\r\n x = F.relu(x)\r\n x = F.dropout(x, p=args.dropout, training=self.training)\r\n x = self.lins[-1](x)\r\n self.z_mlp = self.z_mlp.to(device)\r\n self.z_mlp[id] = x.clone().detach()\r\n print(\"final2\", torch.cuda.max_memory_allocated(\"cuda:4\")/ 1024 / 1024)\r\n torch.cuda.empty_cache()\r\n return x\r\n```\r\nHere are the results:\r\n\r\nbegin_max 4217.28662109375\r\nbegin 4217.28662109375\r\nmiddle_max 39844.28662109375\r\nmiddle 7967.28662109375\r\nfinal_max 39844.28662109375\r\nfinal 7996.58349609375\r\n\r\nthere is also a big gap between the `max_memory_allocated` and `memory_allocated`, could I have some further advices? Thanks.",
"I have tired several different ways which I found, but the problem is still existing, is this normal?",
"So one thing to note, you are using `gc.collect();torch.cuda.empty_cache()` but you the `del hidden_state, pooled_output` is after. You should first delete, then call `gc.collect();torch.cuda.empty_cache()`. ",
"Note: regarding ` max_memory_allocated`\r\n\r\n> By default, this returns the peak allocated memory since the beginning of this program. [reset_peak_memory_stats()](https://pytorch.org/docs/stable/generated/torch.cuda.reset_peak_memory_stats.html#torch.cuda.reset_peak_memory_stats) can be used to reset the starting point in tracking this metric.\r\n\r\nI don't see a problem with those larger values for `max_memory_allocated`. They are just the peak values.",
"Hello @ydshieh @ArthurZucker,\r\n\r\nThanks for your help! Before I call the function as shown above, I use:\r\n\r\n```\r\nnum_layers = 1 # Specify the number of layers to remove\r\nencoder_layers = model.bert.transformer.layer[-num_layers:]\r\nmodel.bert.transformer.layer = nn.ModuleList(encoder_layers)\r\n```\r\n\r\nto control the number of transformer layers in my model. However, if I include `@torch.no_grad()` in the function I showed above, here are the results:\r\n\r\n> 1 transformer layer: allocated memory: 800MB, max allocated: 2300MB\r\n> 2 transformer layers: allocated memory: 840MB, max allocated: 2560MB\r\n\r\nif I just comment out the `@torch.no_grad()` in the above code to do a comparison:\r\n\r\n> 1 transformer layer: allocated memory: 4107MB, max allocated: 4299MB\r\n> 2 transformer layers: allocated memory: 7564MB, max allocated: 7756MB\r\n\r\nFor the first case with `@torch.no_grad()`, we don't need to store the intermediate value for backward, it's reasonable the GPU memory is less than the second case. In the second case, the GPU usage is proportional to the number of layers I used which is consistent with my intuition. What makes me confused is no matter how many transformer layers I used in the first case(with `@torch.no_grad()`), the GPU memory usage is almost same. I am wondering if I misunderstand something?\r\n\r\nAny help would be appreciated!",
"Hi @TOP-RX \r\n\r\nIt's necessary to provide a self-complete code snippet. With only the definition of `def inference` but not the inputs you passed to it, nothing we can help. Also please import everything necessary in the code snippet.\r\n\r\n> What makes me confused is no matter how many transformer layers I used in the first case(with @torch.no_grad()),\r\n\r\nCould you let us know where do you put the print statements in the function `inference` to get:\r\n\r\n> transformer layer: allocated memory\r\n> transformer layers: allocated memory",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### System Info
transformer: 4.24.0
python: 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I am potentially facing a memory-leak problem when using DistilBert or Bert for inference; the following is my code:
```python
@torch.no_grad()
def inference(self, hidden_state, mask, id):
    self.eval()
    print("begin", torch.cuda.max_memory_allocated("cuda:4") / 1024 / 1024)
    distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)
    print("middle", torch.cuda.max_memory_allocated("cuda:4") / 1024 / 1024)
    hidden_state = distilbert_output[0]
    pooled_output = hidden_state[:, 0]
    x = pooled_output
    x = F.dropout(x, p=args.dropout, training=self.training)
    del hidden_state, pooled_output
    print("final", torch.cuda.max_memory_allocated("cuda:4") / 1024 / 1024)
    for i, lin in enumerate(self.lins[:-1]):
        x = lin(x)
        x = F.relu(x)
        x = F.dropout(x, p=args.dropout, training=self.training)
    x = self.lins[-1](x)
    print("final2", torch.cuda.max_memory_allocated("cuda:4") / 1024 / 1024)
    torch.cuda.empty_cache()
    return x
```
And the result of each printing of memory usage is:
begin : 4200
middle : 38000
final : 38000
final 2 : 38000
It seems the call `distilbert_output = self.bert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)` does not release memory the way the MLP layers do: there is a huge increase between "begin" and "middle", but no increase between "final" and "final 2". Could I get some pointers on this issue? Thanks.
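As a side note, a hedged sketch of how the peak and the currently allocated figures can be separated (standard PyTorch CUDA memory APIs; the device index matches the snippet above):

```python
import torch

# max_memory_allocated() reports the peak since program start (or the last reset),
# while memory_allocated() reports what tensors currently hold.
torch.cuda.reset_peak_memory_stats("cuda:4")
print("current MB:", torch.cuda.memory_allocated("cuda:4") / 1024 / 1024)
print("peak MB:", torch.cuda.max_memory_allocated("cuda:4") / 1024 / 1024)
```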
### Expected behavior
GPU should release memory after transformer inference
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24406/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24405/events
|
https://github.com/huggingface/transformers/pull/24405
| 1,767,811,650 |
PR_kwDOCUB6oc5TjyiF
| 24,405 |
Fix accumulation by epoch with Accelerate
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Solved the problem for me with small naming fix in trainer.py:\r\n```python\r\n accumulation_plugin = GradientAccumulationPlugin(\r\n num_steps=self.args.gradient_accumulation_steps, sync_with_dataloader=False\r\n )\r\n```\r\n(field name is num_steps not gradient_accumulation_steps)",
"We can close this one, as I have added a missing edge case to #24415 "
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Should fix gradient accumulation by epoch when using Accelerate (seen in https://github.com/huggingface/transformers/issues/23935#issuecomment-1588134562). Requires https://github.com/huggingface/accelerate/pull/1624
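For context, a hedged sketch of the mechanism involved (field names per Accelerate's `GradientAccumulationPlugin`; the step count is illustrative and this is not the PR's diff):

```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Disable syncing with the dataloader so the gradient-sync point follows the
# configured step count instead of the end of each dataloader pass.
plugin = GradientAccumulationPlugin(num_steps=4, sync_with_dataloader=False)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```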
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24405/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24405",
"html_url": "https://github.com/huggingface/transformers/pull/24405",
"diff_url": "https://github.com/huggingface/transformers/pull/24405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24405.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24404/events
|
https://github.com/huggingface/transformers/pull/24404
| 1,767,708,853 |
PR_kwDOCUB6oc5Tjbys
| 24,404 |
TF safetensors reduced mem usage
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks good in my testing! cc @Narsil for review and @amyeroberts for core maintainer review to save Sylvain from having to read code while he's distracted by the hallucinatory dental anaesthetic goblins \r\n\r\nTo explain one detail: In general, all TF ops can take NumPy inputs. Often the \"idiomatic\" way to write TF code is to do preprocessing/once-off stuff in NumPy and just pass the result to TF. Since Safetensors have very efficient Numpy loading, I generally open them in `\"np\"` format and let TF handle any necessary conversions.\r\n\r\nThe one exception is when loading a PyTorch state dict archive. The reason here is that PyTorch weights often need to be transposed to load them into TF, and NumPy transposes on CPU are much slower than TF transposes on GPU. Model loading was several seconds slower when I passed `\"np\"` format archives to the PyTorch state dict crossloading function, but with GPU transposes this PR has almost no impact on performance, while hugely reducing peak memory usage.",
"Yeah, actually, I thought about it a little more - using `np` is idiomatic for TF, but it's a bit messy if the PyTorch crossloading uses `tf` for fast GPU transposes and the others use `np`, especially when it doesn't really matter. I'll use `tf` globally for consistency!",
"Just ran the slow tests for a few models that load safetensor checkpoints and this still seems good",
"Finished slow testing - performance and memory usage look good!",
"Everything looks good at this point - any objections to merging?"
] | 1,687 | 1,687 | 1,687 |
MEMBER
| null |
When we load safetensors files in TF, the entire safetensors state dict is materialized on GPU alongside the randomly initialized weights. This inflates our memory usage during loading a lot, up to about 2X - 2.5X the amount we actually need.
This PR grabs tensors iteratively from the underlying safetensors archives and assigns them. ~It's working for TF-formatted safetensors archives right now, and I'll add torch-format support next.~ Now supports PT and TF formatted archives! Load times still seem very quick for me locally, so I don't think this negatively impacts anything!
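For readers, a hedged sketch of the loading pattern described above (not the PR's actual code; `weights_by_name` is a hypothetical mapping from weight names to `tf.Variable`s):

```python
import tensorflow as tf
from safetensors import safe_open

def assign_iteratively(weights_by_name, archive_path):
    # Pull one tensor at a time so only a single weight is materialized
    # alongside the randomly initialized model.
    with safe_open(archive_path, framework="np") as f:
        for name in f.keys():
            value = f.get_tensor(name)
            weights_by_name[name].assign(tf.constant(value))
```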
Fixes #24393
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24404/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24404",
"html_url": "https://github.com/huggingface/transformers/pull/24404",
"diff_url": "https://github.com/huggingface/transformers/pull/24404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24404.patch",
"merged_at": 1687439176000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24403/events
|
https://github.com/huggingface/transformers/pull/24403
| 1,767,681,243 |
PR_kwDOCUB6oc5TjVrq
| 24,403 |
Update activations.py with nn.GELU
|
{
"login": "nikitakapitan",
"id": 101126304,
"node_id": "U_kgDOBgcQoA",
"avatar_url": "https://avatars.githubusercontent.com/u/101126304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitakapitan",
"html_url": "https://github.com/nikitakapitan",
"followers_url": "https://api.github.com/users/nikitakapitan/followers",
"following_url": "https://api.github.com/users/nikitakapitan/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitakapitan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitakapitan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitakapitan/subscriptions",
"organizations_url": "https://api.github.com/users/nikitakapitan/orgs",
"repos_url": "https://api.github.com/users/nikitakapitan/repos",
"events_url": "https://api.github.com/users/nikitakapitan/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitakapitan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
Use `nn.GELU` in `activations.py`.
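A hedged illustration of the module being swapped in (plain PyTorch usage, not the exact `activations.py` diff):

```python
import torch
from torch import nn

exact = nn.GELU()                      # exact erf-based GELU
approx = nn.GELU(approximate="tanh")   # tanh approximation

x = torch.randn(2, 3)
print(exact(x))
print(approx(x))
```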
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24403/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24403",
"html_url": "https://github.com/huggingface/transformers/pull/24403",
"diff_url": "https://github.com/huggingface/transformers/pull/24403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24403.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24402/events
|
https://github.com/huggingface/transformers/pull/24402
| 1,767,664,827 |
PR_kwDOCUB6oc5TjSEt
| 24,402 |
Clean up dist import
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Cleans up the `torch.distributed.X` imports in `training_args` to use the already imported `dist` module, which simplifies our logic and special-casing quite a bit.
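A hedged sketch of the pattern this refers to (illustrative only, not the literal diff):

```python
import torch.distributed as dist

def world_size_or_one():
    # Reuse the single `dist` alias instead of spelling out torch.distributed.X everywhere.
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size()
    return 1
```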
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts (cc @sgugger )
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24402/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24402",
"html_url": "https://github.com/huggingface/transformers/pull/24402",
"diff_url": "https://github.com/huggingface/transformers/pull/24402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24402.patch",
"merged_at": 1687360783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24401/events
|
https://github.com/huggingface/transformers/pull/24401
| 1,767,648,328 |
PR_kwDOCUB6oc5TjOcK
| 24,401 |
Remove redundant code from TrainingArgs
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Removes some more redundant code that Accelerate can handle directly (a short sketch of the Accelerate equivalents follows the list). Namely:
- World size
- Process index
- `main_process_first`
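A minimal sketch of the Accelerate equivalents the Trainer can now rely on (illustrative only; assumes a recent `accelerate` release):
```python
from accelerate import PartialState

# PartialState exposes the distributed environment that TrainingArguments
# previously re-derived by hand from torch.distributed.
state = PartialState()

print(state.num_processes)   # world size
print(state.process_index)   # global rank of this process

# Run a block on the main process first, then on the others
# (useful so only one process preprocesses/caches a dataset).
with state.main_process_first():
    pass  # dataset preprocessing, tokenizer caching, etc.
```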
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24401/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24401",
"html_url": "https://github.com/huggingface/transformers/pull/24401",
"diff_url": "https://github.com/huggingface/transformers/pull/24401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24401.patch",
"merged_at": 1687362687000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24400/events
|
https://github.com/huggingface/transformers/pull/24400
| 1,767,442,576 |
PR_kwDOCUB6oc5TihLE
| 24,400 |
Check auto mappings could be imported via `from transformers`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
As shown in #24364, it is easy to forget to add model mappings like `TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING` to some `__init__` files.
Let's add a check so we can avoid such issues and detect them as early as possible.
Along with this new check, this PR also adds some missing mappings to the `__init__` files.
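As a rough illustration, a hypothetical standalone check (not the actual repo-consistency script; assumes PyTorch is installed so `modeling_auto` can be imported):
```python
import transformers
from transformers.models.auto import modeling_auto

# Every *_MAPPING defined in the auto module should also be importable
# directly via `from transformers import <name>`.
missing = [
    name
    for name in dir(modeling_auto)
    if name.endswith("_MAPPING") and not hasattr(transformers, name)
]
if missing:
    raise ValueError(f"Mappings missing from the main __init__: {missing}")
```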
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24400/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24400",
"html_url": "https://github.com/huggingface/transformers/pull/24400",
"diff_url": "https://github.com/huggingface/transformers/pull/24400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24400.patch",
"merged_at": 1687361518000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24399/events
|
https://github.com/huggingface/transformers/pull/24399
| 1,767,146,412 |
PR_kwDOCUB6oc5Thgfo
| 24,399 |
byebye Hub connection timeout - Recast
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
It's a bit hard to break up with timeout failures, but @Wauplin has been working on raising the timeout to 60 seconds instead:
https://github.com/huggingface/huggingface_hub/pull/1523
We need to change the commit hash to that one in our CircleCI job though.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24399/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24399/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24399",
"html_url": "https://github.com/huggingface/transformers/pull/24399",
"diff_url": "https://github.com/huggingface/transformers/pull/24399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24399.patch",
"merged_at": 1687343794000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24398/events
|
https://github.com/huggingface/transformers/pull/24398
| 1,767,105,979 |
PR_kwDOCUB6oc5ThXf1
| 24,398 |
feat: add support for protobuf 4
|
{
"login": "jose-turintech",
"id": 93319775,
"node_id": "U_kgDOBY_yXw",
"avatar_url": "https://avatars.githubusercontent.com/u/93319775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jose-turintech",
"html_url": "https://github.com/jose-turintech",
"followers_url": "https://api.github.com/users/jose-turintech/followers",
"following_url": "https://api.github.com/users/jose-turintech/following{/other_user}",
"gists_url": "https://api.github.com/users/jose-turintech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jose-turintech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jose-turintech/subscriptions",
"organizations_url": "https://api.github.com/users/jose-turintech/orgs",
"repos_url": "https://api.github.com/users/jose-turintech/repos",
"events_url": "https://api.github.com/users/jose-turintech/events{/privacy}",
"received_events_url": "https://api.github.com/users/jose-turintech/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As you can see from all the red crosses above, this sadly requires more work than just unpinning protobuf.",
"@sgugger yes, thanks for taking the time to pointing that out. Currently trying to identify the scope of needed changes and viability. If I can't work on providing right/complete support I'll close the PR."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
add support for protobuf 4
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24398/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24398/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24398",
"html_url": "https://github.com/huggingface/transformers/pull/24398",
"diff_url": "https://github.com/huggingface/transformers/pull/24398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24398.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24397/events
|
https://github.com/huggingface/transformers/pull/24397
| 1,767,096,326 |
PR_kwDOCUB6oc5ThVay
| 24,397 |
Add `ffmpeg` for `doc_test_job` on CircleCI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Need this at least for `docs/source/en/task_summary.md`.
Otherwise, this [job](https://app.circleci.com/pipelines/github/huggingface/transformers/66845/workflows/ae6bcd25-5071-4f48-a9ba-d446ae6e060f/jobs/833148) fails.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24397/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24397",
"html_url": "https://github.com/huggingface/transformers/pull/24397",
"diff_url": "https://github.com/huggingface/transformers/pull/24397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24397.patch",
"merged_at": 1687338759000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24396/events
|
https://github.com/huggingface/transformers/pull/24396
| 1,766,976,918 |
PR_kwDOCUB6oc5Tg7HR
| 24,396 |
[`pipeline`] Fix str device issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also \r\n\r\n```\r\npython -c 'from transformers import pipeline; pipe = pipeline(model=\"gpt2\", device=\"cuda\")'\r\n```\r\n\r\nWorks on `main`.. So I'm not sure what's the issue\r\n",
"@Narsil what you shared works on main but it should throw an error if you try to run an example with it (I attached a reproducible snippet above)\r\n\r\nAlternatively, this fails on main and this PR fixes it\r\n\r\n```bash\r\npython -c 'from transformers import pipeline; pipe = pipeline(model=\"gpt2\", device=\"cuda\"); pipe(\"hello\")'\r\n```",
"Can we remove the `set_device` instead then ? Seems better:\r\n\r\n```patch\r\ndiff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py\r\nindex 510c07cf5..b5975d081 100644\r\n--- a/src/transformers/pipelines/base.py\r\n+++ b/src/transformers/pipelines/base.py\r\n@@ -901,10 +901,8 @@ class Pipeline(_ScikitCompat):\r\n with tf.device(\"/CPU:0\" if self.device == -1 else f\"/device:GPU:{self.device}\"):\r\n yield\r\n else:\r\n- if self.device.type == \"cuda\":\r\n- torch.cuda.set_device(self.device)\r\n-\r\n- yield\r\n+ with torch.cuda.device(self.device):\r\n+ yield\r\n```",
"The initial thing fails indeed, and seems to be linked to the fact that there are multiple `set_device` happening causing issues.\r\n\r\nBy removing it the issue is indeed removed (but the test you added in the test suite isn't failing on main, and since this is what supposed to catch the regression, this is what I tried :) )\r\n",
"I am happy to revert some of the changes I proposed and add yours, it looks much better. However I have few questions \r\n1- is it ok to call that context manager if `self.device` is CPU? I think we need a check on top of that to make sure we're not on CPU (similarly as what we had before)\r\n\r\n```python\r\nimport torch\r\ndevice = torch.device(\"cpu\")\r\n\r\nwith torch.cuda.device(device):\r\n print(torch.randn(1))\r\n```\r\nThrows:\r\n```bash\r\n raise ValueError('Expected a cuda device, but got: {}'.format(device))\r\nValueError: Expected a cuda device, but got: cpu\r\n```\r\nEDIT: just `with torch.device(self.device)` seems to work\r\n\r\n2- I am not sure but I think the `with device` context manager is only available since PT2.0 no?\r\n",
"> 2- I am not sure but I think the with device context manager is only available since PT2.0 no?\r\n\r\nI don't know, all those are very good questions for which I don't have the answer to. I just know that now `set_device` is strongly discouraged so it's probably the source of our issues.",
"Thanks ! \r\nI can confirm the context manager doesn't work for PT==1.9 which is [should be supported by us](https://github.com/huggingface/transformers/blob/4c6e42958951ca66a6b498b1afce8d8ad4ac2274/setup.py#L178): \r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"scratch.py\", line 203, in <module>\r\n with torch.device(device):\r\nAttributeError: __enter__\r\n```\r\n\r\nTherefore I just added some changes to ensure backward compatibility with older PT versions. WDYT?",
"Hi @Narsil \r\nLet me know if the changes look all good to you, happy to address any additional comments you have ",
"May I attempt a different thing ?\r\n\r\nI think the fix is correct, but I'm wondering if simply relying on `torch.cuda.device` context manager couldn't help remove the need for the compat layer.",
"Sure yes! ",
"Cannot push\r\n\r\n```patch\r\ndiff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py\r\nindex 626d33a3d..ee117e62a 100644\r\n--- a/src/transformers/pipelines/base.py\r\n+++ b/src/transformers/pipelines/base.py\r\n@@ -50,7 +50,6 @@ if is_torch_available():\r\n from torch.utils.data import DataLoader, Dataset\r\n\r\n from ..models.auto.modeling_auto import AutoModel\r\n- from ..pytorch_utils import is_torch_greater_or_equal_than_2_0\r\n\r\n # Re-export for backward compatibility\r\n from .pt_utils import KeyDataset\r\n@@ -794,16 +793,11 @@ class Pipeline(_ScikitCompat):\r\n if isinstance(device, torch.device):\r\n self.device = device\r\n elif isinstance(device, str):\r\n- if device == \"cuda\" and not is_torch_greater_or_equal_than_2_0:\r\n- # for backward compatiblity if using `set_device` and `cuda`\r\n- device = f\"cuda:{torch.cuda.current_device()}\"\r\n self.device = torch.device(device)\r\n elif device < 0:\r\n self.device = torch.device(\"cpu\")\r\n- elif isinstance(device, int):\r\n- self.device = torch.device(f\"cuda:{device}\")\r\n else:\r\n- raise ValueError(f\"Device type not supported. Got {device}\")\r\n+ self.device = torch.device(f\"cuda:{device}\")\r\n else:\r\n self.device = device if device is not None else -1\r\n self.torch_dtype = torch_dtype\r\n@@ -908,13 +902,10 @@ class Pipeline(_ScikitCompat):\r\n with tf.device(\"/CPU:0\" if self.device == -1 else f\"/device:GPU:{self.device}\"):\r\n yield\r\n else:\r\n- if is_torch_greater_or_equal_than_2_0:\r\n- with torch.device(self.device):\r\n+ if self.device.type == \"cuda\":\r\n+ with torch.cuda.device(self.device):\r\n yield\r\n- # for backward compatibility\r\n else:\r\n- if self.device.type == \"cuda\":\r\n- torch.cuda.set_device(self.device)\r\n yield\r\n```",
"`torch.cuda.device` is defined for torch==1.9 so it should work.\r\n\r\nAnd `torch.device(\"cpu\")` ... well it's the default there's no need to context manage it.",
"Hi @Narsil \r\nI am not sure if `with torch.cuda.device(self.device):` is supported for torch<2.0 \r\n\r\nhttps://pytorch.org/tutorials/recipes/recipes/changing_default_device.html\r\n\r\nMaybe we should merge this PR for now to unblock also @thomasw21 & @NouamaneTazi . what do you think?",
"I don't think we're blocked by this. \r\n\r\n> And torch.device(\"cpu\") ... well it's the default there's no need to context manage it.\r\n\r\nNot sure of the context of this sentence, but we're overriding the default to `cuda`, so having a context manager to switch back to `cpu` makes sense to me.",
"\r\nhttps://pytorch.org/docs/1.9.0/generated/torch.cuda.device.html?highlight=torch%20cuda%20device#torch.cuda.device\r\n\r\nIt is supported from 1.9.0+, at least in the docs.",
"Great ! agreed with those changes "
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24140#issuecomment-1584617146
Currently passing `device="cuda"` is not supported when creating a pipeline.
This is because `torch.cuda.set_device(self.device)` expects the device to have an explicit index. The fix is to create an indexed device when initializing a pipeline with a `str` device.
Handy reproducible snippet:
```python
from transformers import pipeline
# this works
pipe = pipeline("text-generation", device=0)
pipe("Hello")
# this works
pipe = pipeline("text-generation", device="cuda:0")
pipe("Hello")
# this fails
pipe = pipeline("text-generation", device="cuda")
pipe("Hello")
```
cc @amyeroberts @Narsil
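A minimal sketch of the normalization described above (illustrative only, not the exact patch; assumes a CUDA machine):
```python
import torch

def normalize_pipeline_device(device):
    # A bare "cuda" string carries no index, which torch.cuda.set_device()
    # rejects, so attach the current device index explicitly.
    if isinstance(device, str):
        if device == "cuda":
            device = f"cuda:{torch.cuda.current_device()}"
        return torch.device(device)
    if isinstance(device, int):
        return torch.device("cpu") if device < 0 else torch.device(f"cuda:{device}")
    return device

print(normalize_pipeline_device("cuda"))  # e.g. cuda:0
print(normalize_pipeline_device(-1))      # cpu
```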
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24396/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24396",
"html_url": "https://github.com/huggingface/transformers/pull/24396",
"diff_url": "https://github.com/huggingface/transformers/pull/24396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24396.patch",
"merged_at": 1687780716000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24395/events
|
https://github.com/huggingface/transformers/issues/24395
| 1,766,907,749 |
I_kwDOCUB6oc5pUN9l
| 24,395 |
load_in_4bit doesn't seem to work as expected; it actually increases GPU memory usage when using ZeRO-3 via accelerate
|
{
"login": "tqjack",
"id": 38412243,
"node_id": "MDQ6VXNlcjM4NDEyMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38412243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqjack",
"html_url": "https://github.com/tqjack",
"followers_url": "https://api.github.com/users/tqjack/followers",
"following_url": "https://api.github.com/users/tqjack/following{/other_user}",
"gists_url": "https://api.github.com/users/tqjack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqjack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqjack/subscriptions",
"organizations_url": "https://api.github.com/users/tqjack/orgs",
"repos_url": "https://api.github.com/users/tqjack/repos",
"events_url": "https://api.github.com/users/tqjack/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqjack/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, QLoRA and DeepSpeed can't be used together. DeepSpeed doesn't work with quantized parameters.",
"> Hello, QLoRA and DeepSpeed can't be used together. DeepSpeed doesn't work with quantized parameters.\r\n\r\nis that means I can't do zero optimization while using qlora, at least for now? ddp is the only parallelism method compatible with qlora?",
"Hello, yes",
"cc @younesbelkada for adding more context just in case",
"Hi there!\r\nIndeed 4bit + 8bit is not supported with DS, let's maybe add that check on accelerate.preprare (I can work on that)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### System Info
transformers==4.31.0
deepspeed==0.9.2
peft==0.4.0
bitsandbytes==0.39.0
torch==1.13.0
CUDA Version 11.8
GPUS 8x A100 80gb
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I compared the `from_pretrained` method with and without `load_in_4bit` passed in, and I found that after passing `load_in_4bit` to `from_pretrained` I can no longer load the model with the same hardware and the same accelerate config.
accelerate config:
ds_zero3.yaml
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: 'none'
  offload_param_device: 'cpu'
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Load in the normal way:
test.py
```python
import torch
from accelerate import Accelerator
from transformers import (
    AutoModelForCausalLM,
)

def main():
    accelerator = Accelerator()
    model_name_or_path = "local path of sambanovasystems/BLOOMChat-176B-v1"
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True)

if __name__ == "__main__":
    main()
```
run command:
```
accelerate launch --config_file ds_zero3.yaml test.py
```
It works just fine.
GPU memory usage: 22GB x 8
peak CPU memory usage: ~500GB
Then, I try to use `load_in_4bit=True`:
test_4bit.py
```python
import torch
from accelerate import Accelerator
from transformers import (
    AutoModelForCausalLM,
)

def main():
    accelerator = Accelerator()
    model_name_or_path = "local path of sambanovasystems/BLOOMChat-176B-v1"
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True,
        load_in_4bit=True)

if __name__ == "__main__":
    main()
```
run command:
```
accelerate launch --config_file ds_zero3.yaml test_4bit.py
```
OOM error; it seems neither ZeRO-3 nor parameter offload is working as expected. Peak CPU usage is ~500GB in this case.
I also tried:
```
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
```
OOM
### Expected behavior
I am trying to finetune BLOOMChat-176B using QLoRA, and QLoRA should require fewer hardware resources. Thanks for the help.
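For reference, a minimal sketch of the DDP-only (no DeepSpeed) QLoRA loading pattern mentioned in the maintainers' comments. The hyperparameters and `target_modules` choice are placeholders, and note that a 176B model is still roughly 88GB of weights in 4-bit, so one full replica may not fit on a single 80GB GPU; this only shows the general pattern:
```python
import os
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Pin each replica's quantized weights to its own GPU (plain DDP, no ZeRO).
local_rank = int(os.environ.get("LOCAL_RANK", 0))

model = AutoModelForCausalLM.from_pretrained(
    "local path of sambanovasystems/BLOOMChat-176B-v1",  # placeholder path
    quantization_config=bnb_config,
    device_map={"": local_rank},
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],  # usual BLOOM attention projection (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```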
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24395/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24394/events
|
https://github.com/huggingface/transformers/pull/24394
| 1,766,661,521 |
PR_kwDOCUB6oc5Tf-of
| 24,394 |
[WIP] Add SPTSv2
|
{
"login": "JesseSilverberg",
"id": 35343284,
"node_id": "MDQ6VXNlcjM1MzQzMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35343284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JesseSilverberg",
"html_url": "https://github.com/JesseSilverberg",
"followers_url": "https://api.github.com/users/JesseSilverberg/followers",
"following_url": "https://api.github.com/users/JesseSilverberg/following{/other_user}",
"gists_url": "https://api.github.com/users/JesseSilverberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JesseSilverberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JesseSilverberg/subscriptions",
"organizations_url": "https://api.github.com/users/JesseSilverberg/orgs",
"repos_url": "https://api.github.com/users/JesseSilverberg/repos",
"events_url": "https://api.github.com/users/JesseSilverberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/JesseSilverberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @alaradirik for information. Please let us know when your model is ready for review or if you need any help :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
# What does this PR do?
This PR adds SPTSv2. Per the docs, I am opening this PR immediately after generating the boilerplate.
Fixes #24235
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24394/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24394",
"html_url": "https://github.com/huggingface/transformers/pull/24394",
"diff_url": "https://github.com/huggingface/transformers/pull/24394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24394.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24393/events
|
https://github.com/huggingface/transformers/issues/24393
| 1,766,652,619 |
I_kwDOCUB6oc5pTPrL
| 24,393 |
Increased peak memory usage when upgrading to `transformers` v4.30 and inclusion of `safetensors`
|
{
"login": "mariecwhite",
"id": 5143063,
"node_id": "MDQ6VXNlcjUxNDMwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5143063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariecwhite",
"html_url": "https://github.com/mariecwhite",
"followers_url": "https://api.github.com/users/mariecwhite/followers",
"following_url": "https://api.github.com/users/mariecwhite/following{/other_user}",
"gists_url": "https://api.github.com/users/mariecwhite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariecwhite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariecwhite/subscriptions",
"organizations_url": "https://api.github.com/users/mariecwhite/orgs",
"repos_url": "https://api.github.com/users/mariecwhite/repos",
"events_url": "https://api.github.com/users/mariecwhite/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariecwhite/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @mariecwhite 👋 \r\n\r\nThe script you shared with us is quite large -- any chance you could help us narrow it down? Ideally, we'd start from a short stand-alone script, unless we are unable to reproduce the issue :)\r\n\r\nBTW, my immediate suspicion goes towards the `.from_pretrained()` function, as (de)serialization functions should be the only ones using `safetensors`.",
"Investigating and trying to make a minimal reproducer - in preliminary testing I do see some differences and I'd also guess that `safetensors` is the cause.",
"Confirmed that the issue is caused by loading `safetensors` weights with `from_pretrained` as @gante suspected. The difference in memory usage appears before training has even begun. It is transient and only occurs during weight loading - so unless your GPU goes OOM during weight loading itself, the rest of training will not be affected by this.\r\n\r\nMy guess is that the `safetensors` loading is creating two (or more?) copies of the weights on the GPU during the loading process before eventually cleaning them up. The most likely cause is that tensors in TF are created on-device by default, whereas in torch they are created on CPU and must be moved, so probably some of the code is accidentally creating some temporary variables on GPU? \r\n\r\nTo test memory usage for loading in TF I used the following:\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFAutoModel\r\n\r\nmodel = TFAutoModel.from_pretrained(repo_dir)\r\nprint(tf.config.experimental.get_memory_info(\"GPU:0\"))\r\n```\r\n\r\nIn my testing, peak GPU memory usage when loading `bert-large-cased` was 1.5GB when loading from TF `.h5` weights and 4.1GB when loading from `safetensors`, which matches @mariecwhite's benchmark.\r\n\r\ncc @Narsil ",
"Further investigation: I think the cause is in the `safetensors` code [here](https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L130) - `tf.convert_to_tensor()` creates the tensor on the GPU by default if one is present, so the entire state dict is materialized on the GPU alongside the randomly initialized weights during loading.",
"Hi @mariecwhite, thanks again for the bug report! This is a significant issue and we really appreciate the warning. The PR to fix it is open at #24404 and will hopefully be merged soon. If you'd like to try using the PR branch before then, you can install it with\r\n```python\r\npip install git+https://github.com/huggingface/transformers.git@tf_safetensors_reduced_mem_usage\r\n```",
"Thank you for the quick follow-up!",
"No probs - it's our fault for missing this issue! The PR has now been merged, so you can just install from `main` to use it.\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nIt'll be included in the next patch or full release of `transformers`, at which point you can go back to just `pip install transformers`. Thanks again for the clear bug report and the work you did tracing memory usage in different scenarios!"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @Rocketknight1
Since June 8, after upgrading to `transformers` version 4.30, which automatically installs `safetensors`, peak memory usage for BertLarge and T5Large (and possibly other models that we have not measured) has increased to what appears to be a fixed value for smaller batch sizes.

### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the commands below on a CUDA enabled Linux machine:
```
# Clone benchmarking repo
git clone https://github.com/iree-org/iree-samples.git
cd iree-samples/iree-tf/benchmark
# Setup output file.
OUTPUT_PATH=/tmp/tf.json
echo "{\"trigger\": { \"timestamp\": \"$(date +'%s')\" }, \"benchmarks\": []}" > "${OUTPUT_PATH}"
# Setup virtual environment.
TENSORFLOW_VERSION=2.12.0 VENV_DIR=tf.venv ./setup_venv.sh
source tf.venv/bin/activate
# Run benchmark.
BENCHMARK_ID=47cb0d3a-5eb7-41c7-9d7c-97aae7023ecf-MODEL_BERT_LARGE-fp32-TF-384xi32-batch1
python benchmark_model.py --benchmark_id="${BENCHMARK_ID}" --device=gpu --output_path="${OUTPUT_PATH}" --iterations=5
```
Benchmark output will show `"device_memory_peak_mb": 4157.236992`
Now remove `safetensors` (which was installed with v4.30):
```
pip uninstall safetensors
python benchmark_model.py --benchmark_id="${BENCHMARK_ID}" --device=gpu --output_path="${OUTPUT_PATH}" --iterations=5
```
Benchmark output will show `"device_memory_peak_mb": 1591.090432`
### Expected behavior
Device peak memory usage should not have increased by 2.5x.
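A quicker way to observe the same spike without the benchmark harness is to read TensorFlow's peak GPU memory counter after loading (a sketch; assumes a GPU is visible to TensorFlow):
```python
import tensorflow as tf
from transformers import TFAutoModel

# Load once and read the peak GPU memory counter. Comparing a run with
# safetensors installed against one without reproduces the gap reported above.
model = TFAutoModel.from_pretrained("bert-large-cased")
print(tf.config.experimental.get_memory_info("GPU:0")["peak"])
```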
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24393/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24392/events
|
https://github.com/huggingface/transformers/issues/24392
| 1,766,630,109 |
I_kwDOCUB6oc5pTKLd
| 24,392 |
Allow `TextClassificationPipeline` to handle input longer than `model_max_length` tokens
|
{
"login": "boyleconnor",
"id": 6520892,
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boyleconnor",
"html_url": "https://github.com/boyleconnor",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"I don't have a lot of time to review this atm.\r\n\r\n@amyeroberts Do you know someone that could ?\r\n\r\nOverall I'm hesitant to think it's a good idea. Maintenance is much higher for those `ChunkPipeline` and splitting a document into bits is relatively easy to do outside of the pipeline.\r\nThe merge strategies are also not entirely obvious to me.\r\nThat being said, it's definitely very convenient if implemented directly in the pipeline.",
"@boyleconnor Thanks for opening this feature request and for opening an example PR! As @Narsil mentions, there's a maintenance cost to adding this and the set of people who could review this are all pretty busy. \r\n\r\nWhat I suggest is leaving the PR as an example for anyone who might wish to see how to implement this. If this issue gets a lot of attention (we'll measure with 👍 on the feature description) then we can revisit. \r\n\r\n",
"@amyeroberts, is there any particular threshold for when HF would consider adding this feature? As I write this comment, there are 13 👍's on the feature description.\r\n\r\nI should also note that I posted about this feature request on social media, and so most of these 👍's are from NLP professionals & academics whom I personally know.",
"WDYT @Rocketknight1 @ArthurZucker?",
"I think the solution is valid and the PR is well-implemented, but my intuition is not to add it!\r\n\r\nMy reasoning is:\r\n- Chunking inputs in the pipeline makes the pipeline more awkward, and will make it harder to add other features.\r\n- Models are getting longer and longer context lengths, including the addition of things like alibi/rope position embeddings that let them scale up to arbitrarily long contexts. That means this feature will be less useful over time.\r\n\r\nThus, I feel this feature is a workaround for a very temporary limitation that some models have, but with more modern models the workaround shouldn't be needed, and so we shouldn't hardcode it into our pipelines in 2023. Instead, we should consider replacing some of the default models for tasks like `sentiment-analysis` with newer models that have better performance and that support longer inputs."
] | 1,687 | 1,696 | null |
CONTRIBUTOR
| null |
### Problem
Running a `TextClassificationPipeline` on a text with more tokens than its model's maximum position embeddings (e.g. 512 for BERT) like so:
```python
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
classifier("Hello, world! " * 1000)
```
will lead to this error:
```
RuntimeError: The size of tensor a (4002) must match the size of tensor b (512) at non-singleton dimension 1
```
Note: the numbers (`4002` and `512`, above) will vary depending on the max length of the model in use and the length (in tokens) of the text that triggered the error.
(_**If you found this issue through web-searching for the above error or some other means, look at the linked PR for an implemented code fix to this problem, and consider giving a thumbs-up to this comment if you think it should be merged into the main codebase**_)
### Feature request
We should add "chunking"/"sliding window" functionality to `TextClassificationPipeline`, allowing it to process documents longer than the `model_max_length` of its `.model`. Specifically, this would run an instance of the model on each of several "sliding window" views of each input sequence, then take the mean, similar to (but somewhat simpler than) how [`TokenClassificationPipeline`](https://github.com/huggingface/transformers/blob/ad78d9597b224443e9fe65a94acc8c0bc48cd039/src/transformers/pipelines/token_classification.py#L96) does so in part by subclassing from `ChunkPipeline`.
### Motivation
It would be nice to easily do, e.g., sentiment analysis on documents longer than the `model_max_length` of the given model/tokenizer. I have in the past tried to do this in a time-sensitive context and was unable to do so.
### Your contribution
I have already opened a draft PR: #24312. I would be happy to finish the missing parts (e.g. documentation) if someone on the Huggingface team (I believe @Narsil is the appropriate person to tag) can confirm that they would accept this feature as I plan to implement it.
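In the meantime, a rough sketch of doing the chunking outside the pipeline (window/stride values and the mean-pooling of scores are arbitrary choices, not the proposed implementation):
```python
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
tokenizer = classifier.tokenizer

def classify_long_text(text, window=400, stride=200):
    # Split into overlapping token windows, decode each back to text
    # (boundaries may shift slightly on re-tokenization), classify every
    # window, then average the scores per label.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [
        tokenizer.decode(ids[i : i + window])
        for i in range(0, max(len(ids) - window, 0) + 1, stride)
    ]
    scores = defaultdict(list)
    for window_scores in classifier(chunks, top_k=None):
        for entry in window_scores:
            scores[entry["label"]].append(entry["score"])
    return {label: sum(vals) / len(vals) for label, vals in scores.items()}

print(classify_long_text("Hello, world! " * 1000))
```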
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24392/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24392/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/24391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24391/events
|
https://github.com/huggingface/transformers/issues/24391
| 1,766,600,745 |
I_kwDOCUB6oc5pTDAp
| 24,391 |
Bug in "Gather all remaining tensors and put them back on the CPU"
|
{
"login": "jinmang2",
"id": 37775784,
"node_id": "MDQ6VXNlcjM3Nzc1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinmang2",
"html_url": "https://github.com/jinmang2",
"followers_url": "https://api.github.com/users/jinmang2/followers",
"following_url": "https://api.github.com/users/jinmang2/following{/other_user}",
"gists_url": "https://api.github.com/users/jinmang2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinmang2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinmang2/subscriptions",
"organizations_url": "https://api.github.com/users/jinmang2/orgs",
"repos_url": "https://api.github.com/users/jinmang2/repos",
"events_url": "https://api.github.com/users/jinmang2/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinmang2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@jinmang2 can you provide the example code you ran? Would be an excellent way for us to write a test-case around it :)",
"@muellerzr sure!\r\n\r\n### dialogue state tracking results (my own code)\r\n- colab link: https://colab.research.google.com/drive/1afT8O5OrUaTaZ07xvi_3AMW07P2nnRuC?usp=sharing\r\n- code link: https://github.com/jinmang2/KLUE-DST/blob/main/run.py\r\n- results\r\n\r\n```python\r\n...\r\neval_results = trainer.evaluation_loop(\r\n trainer.get_eval_dataloader(),\r\n description=\"Evaluation\",\r\n prediction_loss_only=False,\r\n ignore_keys=None,\r\n metric_key_prefix=\"eval\",\r\n)\r\nlen(trainer.eval_dataset), eval_results.predictions[0].shape\r\n```\r\n```\r\n(7224, (824, 9, 71))\r\n```\r\n- expected shape: `(7224, 9, 71)`\r\n\r\n### glue mnli results (huggingface's example code)\r\n- colab link: https://colab.research.google.com/drive/1Yfoh4-Pl5LqGUWBZZqbc3OGN1R3x3O_w?usp=sharing\r\n- code link: https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/examples/pytorch/text-classification/run_glue.py\r\n- results\r\n\r\n```python\r\n...\r\neval_results = trainer.evaluation_loop(\r\n trainer.get_eval_dataloader(),\r\n description=\"Evaluation\",\r\n prediction_loss_only=False,\r\n ignore_keys=None,\r\n metric_key_prefix=\"eval\",\r\n)\r\n# The total number of samples in the eval example is 9815.\r\n# However, it can be seen that only 3 samples of prediction used for evaluation remain.\r\nlen(trainer.eval_dataset), eval_results.predictions[0].shape\r\n```\r\n```\r\n(9815, (3,))\r\n```\r\n- expected shape: `(9815,)`",
"Since the `evaluate` method does not know how many eval samples have been evaluated (internally, only necessary values are loaded into the metrics dictionary), the `evaluation_loop` method directly receives `eval_results` and checks the eval samples.",
"Thanks @jinmang2, this indeed was a big from the integration and the original logic should have been maintained. A PR will be opened shortly with the solution (and also solves a failing test!) thanks again for your solution and thorough analysis",
"Thanks for fixing it! :-)"
] | 1,687 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @muellerzr @ArthurZucker
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Colab Link
- https://colab.research.google.com/drive/1afT8O5OrUaTaZ07xvi_3AMW07P2nnRuC?usp=sharing
### Expected behavior
### What is problem?
In the trainer's `evaluation_loop`, the Hugging Face trainer collects all tensors and passes them to the `compute_metrics` method all at once. If the tensors are too large, a CUDA OOM error occurs, so this is prevented by sending them to the CPU in advance (using the `nested_numpify` method) at intermediate steps via `eval_accumulation_steps`.
For the samples left over after the last accumulation flush (fewer than `eval_accumulation_steps` worth), metrics were calculated after an additional merge using the code below.
https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/trainer.py#L3304-L3318
However, since the code was changed as below in PR #24028, this problem has occurred.
https://github.com/huggingface/transformers/blob/ebd94b0f6f215f6bc0f70e61eba075eb9196f9ef/src/transformers/trainer.py#L3184-L3192
The code above doesn't merge the remaining tensors into the final container; it just assigns them, overwriting whatever was gathered before. In fact, in the example code I ran, even though `len(eval_dataset)` was `7224`, with `per_device_eval_batch_size=16` and `eval_accumulation_steps=100`, GPU-CPU communication was performed 4 times and only `824` eval samples remained.
Please check it out; I hope this can be corrected. Thank you!
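To illustrate the intended behaviour with a toy example (plain NumPy, not the actual Trainer code): each flushed chunk of host-side predictions should be concatenated onto the running container, not assigned over it.
```python
import numpy as np

def flush(all_preds, new_preds):
    # Merge the freshly gathered (and numpified) predictions into the running
    # container instead of overwriting it.
    if all_preds is None:
        return new_preds
    return np.concatenate((all_preds, new_preds), axis=0)

all_preds = None
for step_preds in (np.zeros((3, 9)), np.zeros((3, 9)), np.zeros((2, 9))):
    all_preds = flush(all_preds, step_preds)

print(all_preds.shape)  # (8, 9) -- every evaluated sample is kept, not just the last chunk
```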
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24391/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24391/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24390/events
|
https://github.com/huggingface/transformers/issues/24390
| 1,766,385,065 |
I_kwDOCUB6oc5pSOWp
| 24,390 |
AttributeError: 'AutoformerModel' object has no attribute 'embedder'
|
{
"login": "pourmatin",
"id": 19475339,
"node_id": "MDQ6VXNlcjE5NDc1MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/19475339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pourmatin",
"html_url": "https://github.com/pourmatin",
"followers_url": "https://api.github.com/users/pourmatin/followers",
"following_url": "https://api.github.com/users/pourmatin/following{/other_user}",
"gists_url": "https://api.github.com/users/pourmatin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pourmatin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pourmatin/subscriptions",
"organizations_url": "https://api.github.com/users/pourmatin/orgs",
"repos_url": "https://api.github.com/users/pourmatin/repos",
"events_url": "https://api.github.com/users/pourmatin/events{/privacy}",
"received_events_url": "https://api.github.com/users/pourmatin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @kashif ",
"While waiting our expert @kashif , the code in \r\n\r\n```python\r\n if config.num_static_categorical_features > 0:\r\n self.embedder = AutoformerFeatureEmbedder(\r\n cardinalities=config.cardinality, embedding_dims=config.embedding_dimension\r\n )\r\n```\r\ntogether\r\n```\r\n if static_categorical_features is not None:\r\n embedded_cat = self.embedder(static_categorical_features)\r\n```\r\n\r\nSince you pass `static_categorical_features` to the model's forward, you can check your config's `num_static_categorical_features ` attribute. Probably it is 0 and `self.embedder` is not created. In this case, I think we should not pass `static_categorical_features` to the model.\r\n",
"thanks @ydshieh having a look!",
"@pourmatin that is correct, if you do not specify any categorical features, then you should not pass the model a list of categorical features... I believe we had a check for this, let me confirm!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### System Info
Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Declare the model:
```py
config = AutoformerConfig()
config.prediction_length = 5
config.context_length = 55
config.lags_sequence = [1, 2, 3, 4, 5]
model = AutoformerModel(config)
```
2. Invoke the forward method of the model by calling it:
```py
outputs = model(
    past_values=batches["past_values"][:16],
    past_time_features=batches["past_time_features"][:16],
    past_observed_mask=batches["past_observed_mask"][:16],
    static_categorical_features=batches["static_categorical_features"][:16],
    future_values=batches["future_values"][:16],
    future_time_features=batches["future_time_features"][:16],
)
```
### Expected behavior
I'd expect the model to run the forward method successfully. Instead, I get the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/2n/p53ld4x51l1_yhdkj7q4y3t00000gr/T/ipykernel_49714/3104772270.py in <module>
----> 1 outputs = model(
2 past_values=batches["past_values"][:16],
3 past_time_features=batches["past_time_features"][:16],
4 past_observed_mask=batches["past_observed_mask"][:16],
5 static_categorical_features=batches["static_categorical_features"][:16],
~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/opt/anaconda3/lib/python3.9/site-packages/transformers/models/autoformer/modeling_autoformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1725 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1726
-> 1727 transformer_inputs, temporal_features, loc, scale, static_feat = self.create_network_inputs(
1728 past_values=past_values,
1729 past_time_features=past_time_features,
~/opt/anaconda3/lib/python3.9/site-packages/transformers/models/autoformer/modeling_autoformer.py in create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1637 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1638 if static_categorical_features is not None:
-> 1639 embedded_cat = self.embedder(static_categorical_features)
1640 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1641 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
1612 if name in modules:
1613 return modules[name]
-> 1614 raise AttributeError("'{}' object has no attribute '{}'".format(
1615 type(self).__name__, name))
1616
AttributeError: 'AutoformerModel' object has no attribute 'embedder'
```
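For context, a minimal sketch of the workaround discussed in the comments (the config values below are illustrative assumptions, not taken from the issue): either declare the static categorical features in the config so that `self.embedder` gets created, or keep the default config and simply drop `static_categorical_features` from the forward call.
```py
# Hedged sketch: AutoformerConfig defaults to num_static_categorical_features=0,
# in which case the model never builds `self.embedder`. Declaring the categorical
# feature in the config avoids the AttributeError.
from transformers import AutoformerConfig, AutoformerModel

config = AutoformerConfig(
    prediction_length=5,
    context_length=55,
    lags_sequence=[1, 2, 3, 4, 5],
    num_static_categorical_features=1,  # one categorical feature is passed
    cardinality=[366],                  # number of categories for that feature
    embedding_dimension=[2],            # embedding size for that feature
)
model = AutoformerModel(config)
# Alternatively, keep the default config and do not pass
# `static_categorical_features` to model(...) at all.
```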
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24390/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24389/events
|
https://github.com/huggingface/transformers/pull/24389
| 1,766,339,710 |
PR_kwDOCUB6oc5TfDPZ
| 24,389 |
[Trainer] Fix optimizer step on PyTorch TPU
|
{
"login": "cowanmeg",
"id": 6570496,
"node_id": "MDQ6VXNlcjY1NzA0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6570496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cowanmeg",
"html_url": "https://github.com/cowanmeg",
"followers_url": "https://api.github.com/users/cowanmeg/followers",
"following_url": "https://api.github.com/users/cowanmeg/following{/other_user}",
"gists_url": "https://api.github.com/users/cowanmeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cowanmeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cowanmeg/subscriptions",
"organizations_url": "https://api.github.com/users/cowanmeg/orgs",
"repos_url": "https://api.github.com/users/cowanmeg/repos",
"events_url": "https://api.github.com/users/cowanmeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/cowanmeg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr and @pacman100 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @cowanmeg for the fix!"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Update the optimizer step for TPUs to use `self.optimizer.step()` instead of `xm.optimizer_step(self.optimizer)`.
AcceleratedOptimizer properly calls `xm.optimizer_step` on the optimizer (https://github.com/huggingface/accelerate/blob/main/src/accelerate/optimizer.py#L129).
This fixes a bug in transformers/trainer.py when using Pytorch on TPUs:
```
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 471, in _fetch_gradients
    for param_group in optimizer.__getstate__()['param_groups']:
KeyError: 'param_groups'
```
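A minimal illustration of the change described above (a sketch of the relevant line in `transformers/trainer.py`, not the full diff):
```diff
- xm.optimizer_step(self.optimizer)
+ self.optimizer.step()  # AcceleratedOptimizer already routes this through xm.optimizer_step
```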
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24389/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24389",
"html_url": "https://github.com/huggingface/transformers/pull/24389",
"diff_url": "https://github.com/huggingface/transformers/pull/24389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24389.patch",
"merged_at": 1687346681000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24388/events
|
https://github.com/huggingface/transformers/pull/24388
| 1,766,329,118 |
PR_kwDOCUB6oc5TfBzv
| 24,388 |
[docs] Fix NLLB-MoE links
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
MEMBER
| null |
Fixes the broken links raised in #24382.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24388/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24388",
"html_url": "https://github.com/huggingface/transformers/pull/24388",
"diff_url": "https://github.com/huggingface/transformers/pull/24388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24388.patch",
"merged_at": 1687307660000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24387/events
|
https://github.com/huggingface/transformers/pull/24387
| 1,766,255,972 |
PR_kwDOCUB6oc5Tezp7
| 24,387 |
Update deprecated torch.ger
|
{
"login": "kit1980",
"id": 420184,
"node_id": "MDQ6VXNlcjQyMDE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/420184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kit1980",
"html_url": "https://github.com/kit1980",
"followers_url": "https://api.github.com/users/kit1980/followers",
"following_url": "https://api.github.com/users/kit1980/following{/other_user}",
"gists_url": "https://api.github.com/users/kit1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kit1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kit1980/subscriptions",
"organizations_url": "https://api.github.com/users/kit1980/orgs",
"repos_url": "https://api.github.com/users/kit1980/repos",
"events_url": "https://api.github.com/users/kit1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/kit1980/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sgugger please take a look.",
"_The documentation is not available anymore as the PR was closed or merged._",
"A test (I believe unrelated to the change) timed out https://app.circleci.com/pipelines/github/huggingface/transformers/66846/workflows/ae63fb76-7260-490e-8301-7c6cf986e693/jobs/833163",
"Yes it's unrelated, merging :-)"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
`torch.ger` was deprecated a long time ago and `torch.outer` is a direct replacement: https://pytorch.org/docs/stable/generated/torch.ger.html
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24387/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24387",
"html_url": "https://github.com/huggingface/transformers/pull/24387",
"diff_url": "https://github.com/huggingface/transformers/pull/24387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24387.patch",
"merged_at": 1687306873000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24386/events
|
https://github.com/huggingface/transformers/issues/24386
| 1,766,146,530 |
I_kwDOCUB6oc5pRUHi
| 24,386 |
Loading Trained RAG Model
|
{
"login": "YichiRockyZhang",
"id": 29335344,
"node_id": "MDQ6VXNlcjI5MzM1MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YichiRockyZhang",
"html_url": "https://github.com/YichiRockyZhang",
"followers_url": "https://api.github.com/users/YichiRockyZhang/followers",
"following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions",
"organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs",
"repos_url": "https://api.github.com/users/YichiRockyZhang/repos",
"events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @YichiRockyZhang \r\nThanks for the issue, looking at your environment (transformers == 4.13.0) I would probably give it a try with on of the newest version of transformers. It seems the config didn't saved properly the model identifier for some reason. Would it be possible to use a recent version of the lib for you? ",
"Hi @YichiRockyZhang\r\n\r\n\r\nIf @younesbelkada's above suggesion is still not working, it would help a lot if you can provide a short but a bit more complete code example that you:\r\n- **create/load the initialize model(s)**\r\n- **save it to the checkpoint (without the need of training/fine-tuning)**\r\n- _(you already provide this part)_ the way you try to load the saved checkpoint\r\n\r\nThis way, it's easier and fast for us to reproduce and look into the issue. Thank you in advance.",
"Hi @younesbelkada . Thanks for the response! This did help as running the finetuning script now results in a more sensible saved checkpoint.\r\n\r\n\r\n\r\nI can now load the model with the following:\r\n```py\r\npath = \"/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint31\"\r\n\r\nrag_tokenizer = RagTokenizer.from_pretrained(path)\r\nrag_retriever = RagRetriever.from_pretrained(\r\n path,\r\n use_dummy_dataset=False,\r\n indexed_dataset=ds,\r\n index_name=\"compressed\",\r\n)\r\n\r\nrag_model = RagTokenForGeneration.from_pretrained(path, retriever=rag_retriever)\r\n```\r\n\r\nHi @ydshieh ! Unfortunately, I believe my problem is specific to fine-tuning. I'm using the only fine-tuning script for this model that I can find (in huggingface documentation and even on the internet). The script uses pytorch lightning to train and save the model. The below snippet from [`finetune_rag.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag-end2end-retriever/finetune_rag.py) details how the models is saved.\r\n\r\n```py\r\n @pl.utilities.rank_zero_only\r\n def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:\r\n save_path = self.output_dir.joinpath(\"checkpoint{}\".format(self.step_count))\r\n self.model.config.save_step = self.step_count\r\n # self.model.save_pretrained(save_path)\r\n self.tokenizer.save_pretrained(save_path)\r\n\r\n if self.custom_config.end2end:\r\n modified_state_dict = self.model.state_dict()\r\n for key in self.model.state_dict().keys():\r\n if key.split(\".\")[1] == \"ctx_encoder\":\r\n del modified_state_dict[key]\r\n self.model.save_pretrained(save_directory=save_path, state_dict=modified_state_dict)\r\n\r\n save_path_dpr = os.path.join(self.dpr_ctx_check_dir, \"checkpoint{}\".format(self.step_count))\r\n self.model.rag.ctx_encoder.save_pretrained(save_path_dpr)\r\n self.context_tokenizer.save_pretrained(save_path_dpr)\r\n```\r\n\r\nI understand HF does not maintain these scripts, but for what it's worth, I think retrieval-augmented models are very important and should have a bit more support!",
"@YichiRockyZhang \r\n\r\nThanks for sharing more details. What I means is that you can still make a **self-complete** code snippet:\r\n\r\n- how you (or the script create the model)\r\n- then save that model using the logic in the method `on_save_checkpoint` you provided\r\n\r\nYou don't need to go through the training part in the script, just the create/save part. By ` self-complete`, it means we can just run it directly to see the failure you have. Of course y**ou will have to wrap up things in your own way** (not just showing us the definition of `on_save_checkpoint`). I hope this makes my previous comment a bit clear and look forward to see a reproducible code snippet 🤗 ",
"@ydshieh Hi, thank you for the quick responses! I've edited my above reply to reflect the fact that upgrading to transformers==4.30.2 seemed to have worked after making sure my data was ASCII encoded. Though it does seem that the fine-tuning script is only saving the whole model after the first epoch. I've adjusted the code to be \r\n\r\n```py\r\n @pl.utilities.rank_zero_only\r\n def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:\r\n save_path = self.output_dir.joinpath(\"checkpoint{}\".format(self.step_count))\r\n self.model.config.save_step = self.step_count\r\n # self.model.save_pretrained(save_path)\r\n self.tokenizer.save_pretrained(save_path)\r\n\r\n if self.custom_config.end2end:\r\n modified_state_dict = self.model.state_dict()\r\n for key in self.model.state_dict().keys():\r\n if key.split(\".\")[1] == \"ctx_encoder\":\r\n del modified_state_dict[key]\r\n self.model.save_pretrained(save_directory=save_path, state_dict=modified_state_dict)\r\n\r\n save_path_dpr = os.path.join(self.dpr_ctx_check_dir, \"checkpoint{}\".format(self.step_count))\r\n self.model.rag.ctx_encoder.save_pretrained(save_path_dpr)\r\n self.context_tokenizer.save_pretrained(save_path_dpr)\r\n else: #NEW\r\n state_dict = self.model.state_dict()\r\n self.model.save_pretrained(save_directory=save_path, state_dict=state_dict)\r\n```\r\n\r\nI will update this thread in the morning once fine-tuning is finished. If my fix doesn't work out, I'll try to put together a more minimal and self-complete script for debugging purposes! 🤗",
"Nice and good luck :-) !",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### System Info
Python 3.9.16
Transformers 4.13.0
WSL
### Who can help?
@ArthurZucker @younesbelkada @shamanez
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
After finetuning RAG, I'm left with the following directory, and I'm not sure how to load the resulting checkpoint.

I should note the checkpoint is ~6 GB while the [original huggingface checkpoint](https://huggingface.co/facebook/rag-token-base/tree/main) is 2 GB. I suspect this is because I used the [`finetune_rag_ray_end2end.sh`](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag-end2end-retriever) script, so it includes all 3 models (reader, retriever, generator).
Below are my attempts to load the checkpoint
**Attempt 1**
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
rag_tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
rag_retriever = RagRetriever.from_pretrained(
"facebook/rag-token-base",
use_dummy_dataset=False,
indexed_dataset=ds,
index_name="embeddings",
)
rag_model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=rag_retriever)
checkpoint_path = "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/val_avg_em=0.0026-step_count=601.0.ckpt"
rag_model.load_state_dict(torch.load(checkpoint_path))
```
The program runs forever with the following traceback when I interrupt it:
```
Some weights of RagTokenForGeneration were not initialized from the model checkpoint at facebook/rag-token-base and are newly initialized: ['rag.generator.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/services.py:238: UserWarning: Not all Ray Dashboard dependencies were found. To use the dashboard please install Ray using `pip install ray[default]`. To disable this message, set RAY_DISABLE_IMPORT_WARNING env var to '1'.
warnings.warn(warning_message)
^CTraceback (most recent call last):
File "/nfshomes/yzhang42/rag/notebooks/rag_eval.py", line 37, in <module>
rag_model.load_state_dict(torch.load(checkpoint_path))
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/torch/serialization.py", line 712, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/torch/serialization.py", line 1049, in _load
result = unpickler.load()
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/actor.py", line 1005, in _deserialization_helper
return worker.core_worker.deserialize_and_register_actor_handle(
File "python/ray/_raylet.pyx", line 1594, in ray._raylet.CoreWorker.deserialize_and_register_actor_handle
File "python/ray/_raylet.pyx", line 1563, in ray._raylet.CoreWorker.make_actor_handle
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/function_manager.py", line 402, in load_actor_class
actor_class = self._load_actor_class_from_gcs(
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/function_manager.py", line 487, in _load_actor_class_from_gcs
time.sleep(0.001)
KeyboardInterrupt
```
**Attempt 2**
```py
from transformers import AutoConfig, AutoModel, PretrainedConfig, RagTokenizer, RagRetriever, BartForConditionalGeneration, RagTokenForGeneration, RagSequenceForGeneration, RagConfig
from transformers import BartModel
qe_config = PretrainedConfig(
name_or_path=\
"/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/generator_tokenizer/tokenizer_config.json")
gen_config = PretrainedConfig(
name_or_path=\
"/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/question_encoder_tokenizer/tokenizer_config.json")
RagConfig.from_question_encoder_generator_configs(
question_encoder_config=qe_config,
generator_config=gen_config
)
```
Gives the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 11
4 qe_config = PretrainedConfig(
5 name_or_path=\
6 "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/generator_tokenizer/tokenizer_config.json")
7 gen_config = PretrainedConfig(
8 name_or_path=\
9 "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/question_encoder_tokenizer/tokenizer_config.json")
---> 11 RagConfig.from_question_encoder_generator_configs(
12 question_encoder_config=qe_config,
13 generator_config=gen_config
14 )
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/configuration_rag.py:183, in RagConfig.from_question_encoder_generator_configs(cls, question_encoder_config, generator_config, **kwargs)
172 @classmethod
173 def from_question_encoder_generator_configs(
174 cls, question_encoder_config: PretrainedConfig, generator_config: PretrainedConfig, **kwargs
175 ) -> PretrainedConfig:
176 r"""
177 Instantiate a :class:`~transformers.EncoderDecoderConfig` (or a derived class) from a pre-trained encoder model
178 configuration and decoder model configuration.
(...)
181 :class:`EncoderDecoderConfig`: An instance of a configuration object
182 """
--> 183 return cls(question_encoder=question_encoder_config.to_dict(), generator=generator_config.to_dict(), **kwargs)
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/configuration_rag.py:140, in RagConfig.__init__(self, vocab_size, is_encoder_decoder, prefix, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, title_sep, doc_sep, n_docs, max_combined_length, retrieval_vector_size, retrieval_batch_size, dataset, dataset_split, index_name, index_path, passages_path, use_dummy_dataset, reduce_loss, label_smoothing, do_deduplication, exclude_bos_score, do_marginalize, output_retrieved, use_cache, forced_eos_token_id, **kwargs)
136 decoder_model_type = decoder_config.pop("model_type")
138 from ..auto.configuration_auto import AutoConfig
--> 140 self.question_encoder = AutoConfig.for_model(question_encoder_model_type, **question_encoder_config)
141 self.generator = AutoConfig.for_model(decoder_model_type, **decoder_config)
143 self.reduce_loss = reduce_loss
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:492, in AutoConfig.for_model(cls, model_type, *args, **kwargs)
490 config_class = CONFIG_MAPPING[model_type]
491 return config_class(*args, **kwargs)
--> 492 raise ValueError(
493 f"Unrecognized model identifier: {model_type}. Should contain one of {', '.join(CONFIG_MAPPING.keys())}"
494 )
ValueError: Unrecognized model identifier: . Should contain one of imagegpt, qdqbert, vision-encoder-decoder, trocr, fnet, segformer, vision-text-dual-encoder, perceiver, gptj, layoutlmv2, beit, rembert, visual_bert, canine, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text_2, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron-bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, speech-encoder-decoder, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas, splinter, sew-d, sew, unispeech-sat, unispeech
```
### Expected behavior
I'm not sure what expected behavior is supposed to be.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24386/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24385/events
|
https://github.com/huggingface/transformers/issues/24385
| 1,766,135,653 |
I_kwDOCUB6oc5pRRdl
| 24,385 |
How to unwrap after auto_wrap in FSDP?
|
{
"login": "ZN1010",
"id": 13196992,
"node_id": "MDQ6VXNlcjEzMTk2OTky",
"avatar_url": "https://avatars.githubusercontent.com/u/13196992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZN1010",
"html_url": "https://github.com/ZN1010",
"followers_url": "https://api.github.com/users/ZN1010/followers",
"following_url": "https://api.github.com/users/ZN1010/following{/other_user}",
"gists_url": "https://api.github.com/users/ZN1010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZN1010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZN1010/subscriptions",
"organizations_url": "https://api.github.com/users/ZN1010/orgs",
"repos_url": "https://api.github.com/users/ZN1010/repos",
"events_url": "https://api.github.com/users/ZN1010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZN1010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is more a question for the PyTorch forums as it's purely related to FSDP. Still cc-ing @pacman100 in case he has any idea.",
"Hello, as Sylvain mentioned this question is for the PyTorch forums. `summon_full_params` usage can be found in these tests: https://github.com/pytorch/pytorch/blob/main/test/distributed/checkpoint/test_fsdp_optim_state.py#L56-L59\r\n\r\nI am not sure if it contains the information related to the gradients of a given parameter. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
I am currently fine-tuning an LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that the weights are (auto) wrapped into entries like “_fsdp_wrapped_module._flat_param” during training. I need to map these wrapped weights back to the original LLaMA architecture, e.g. “self_attn.v_proj”. Any code examples?
I guess “summon_full_params()” might be the function I am looking for, but I am not sure if that is correct. I also have difficulty using this function. Thanks a lot for any help!
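As a starting point, here is a minimal sketch (not from the issue; `model` and the `v_proj` filtering are assumptions) of how `summon_full_params` from the public `torch.distributed.fsdp` API can be used to see the original module names again. Whether gradients are gathered as well depends on the PyTorch version (recent releases accept `with_grads=True`), so this should be checked against the installed version.
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def snapshot_full_params(model: torch.nn.Module):
    # Inside the context manager the flattened shards are temporarily gathered
    # back into full parameters, so named_parameters() yields the original
    # names (e.g. "...self_attn.v_proj.weight") instead of
    # "_fsdp_wrapped_module._flat_param".
    with FSDP.summon_full_params(model, writeback=False):
        return {
            name: param.detach().cpu().clone()
            for name, param in model.named_parameters()
            if "v_proj" in name  # keep only the weights of interest
        }
```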
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24385/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24384/events
|
https://github.com/huggingface/transformers/pull/24384
| 1,766,129,169 |
PR_kwDOCUB6oc5TeeWV
| 24,384 |
Refactor hyperparameter search backends
|
{
"login": "alexmojaki",
"id": 3627481,
"node_id": "MDQ6VXNlcjM2Mjc0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3627481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmojaki",
"html_url": "https://github.com/alexmojaki",
"followers_url": "https://api.github.com/users/alexmojaki/followers",
"following_url": "https://api.github.com/users/alexmojaki/following{/other_user}",
"gists_url": "https://api.github.com/users/alexmojaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexmojaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexmojaki/subscriptions",
"organizations_url": "https://api.github.com/users/alexmojaki/orgs",
"repos_url": "https://api.github.com/users/alexmojaki/repos",
"events_url": "https://api.github.com/users/alexmojaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexmojaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> Using abstract classes like this is not really the way the Transformers library is designed\r\n\r\nThat's fine, I was very unsure which approach to take. `abc` offers additional safety and IDE assistance as the standard way to ensure that all abstract methods are implemented, but it's probably overkill here and I also didn't like how heavy it was. I've pushed a much simpler strategy.\r\n\r\n> I recommended to just complete the error message to include wandb.\r\n\r\nThe point of this is that it's difficult to see all the missing bits. The current code isn't just missing wandb in the error message, it's also missing from `default_hp_space` (fixed in this PR) and the docstring (not enforced in this PR, although it could be, I just didn't want to jump there just yet).",
"> I also don't see how you would have benefits for IDE as you show in the PR description.\r\n\r\nSorry for the confusion, that's not part of this PR to keep the scope focused, but if this is merged I can follow it up with another which adds constructors to each backend class which accept the precise kwargs that the backend `run` supports.",
"Opened https://github.com/huggingface/huggingface_hub/issues/1526 in regards to the unrelated test failure.",
"Thanks a lot for your contribution!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24384). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
Fixes https://github.com/huggingface/transformers/issues/24379
The goal here is to clearly group the essential info/functionality about each backend together to make reading/changing things easier. For example, if another backend integration is added, it should be less likely for something to be forgotten, as apparently happened with wandb.
@sgugger sorry I didn't get a full confirmation to go ahead with this; it just seemed easier to show what I meant with code rather than continue explaining in the issue. There are many other ways this could be done and I can change the approach, but I hope that the general direction at least is clear from this PR.
I also think this would help move towards improving the user-facing API since, as mentioned in https://github.com/huggingface/transformers/issues/24278#issuecomment-1599189018 (cc @hugocool), the kwargs have no type hints and are not very easy to use. So maybe instead of:
```python
best_run = trainer.hyperparameter_search(
direction="maximize",
backend="ray",
# this is just **kwargs, not so clear what's possible...
storage_path="...",
callbacks=...,
)
```
one could write:
```python
best_run = trainer.hyperparameter_search(
direction="maximize",
backend=RayTuneBackend(
# now more assistance is possible
storage_path="...",
callbacks=...,
),
)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24384/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24384",
"html_url": "https://github.com/huggingface/transformers/pull/24384",
"diff_url": "https://github.com/huggingface/transformers/pull/24384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24384.patch",
"merged_at": 1687458505000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24383/events
|
https://github.com/huggingface/transformers/issues/24383
| 1,766,078,398 |
I_kwDOCUB6oc5pRDe-
| 24,383 |
How BERT 512 limit works?
|
{
"login": "Bisht9887",
"id": 26176160,
"node_id": "MDQ6VXNlcjI2MTc2MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/26176160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bisht9887",
"html_url": "https://github.com/Bisht9887",
"followers_url": "https://api.github.com/users/Bisht9887/followers",
"following_url": "https://api.github.com/users/Bisht9887/following{/other_user}",
"gists_url": "https://api.github.com/users/Bisht9887/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bisht9887/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bisht9887/subscriptions",
"organizations_url": "https://api.github.com/users/Bisht9887/orgs",
"repos_url": "https://api.github.com/users/Bisht9887/repos",
"events_url": "https://api.github.com/users/Bisht9887/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bisht9887/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions. You did not pass your input the model, just the tokenizer.",
"sure! I will take care of that. I am little new to this. Thanks for info"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
I passed a long text of 3000 tokens and it did not give me any error. Does BERT not have a 512-token limit? Why is it not giving any error? This is the code I used; you can pass any input with more than 512 tokens.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Long input text with more than 512 tokens
text = "This is a very long text with more than 512 tokens..."
tokens = tokenizer.tokenize(text)
print(len(tokens))
```
### Expected behavior
An error when processing more than 512 tokens.
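For illustration (not part of the original report): the 512-token limit comes from the model's position embeddings, not from `tokenize()`, so an error only appears once the encoded input is passed to the model. A hedged sketch of encoding with explicit truncation:
```py
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

text = "This is a very long text... " * 500  # well over 512 tokens

# Without truncation the encoded input exceeds 512 tokens and the model
# forward pass fails; truncation keeps it within the limit.
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 512, 768)
```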
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24383/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24381/events
|
https://github.com/huggingface/transformers/pull/24381
| 1,765,835,534 |
PR_kwDOCUB6oc5TdgQ1
| 24,381 |
fixing layer indexing error when pipeline parallel > 1
|
{
"login": "xshaun",
"id": 8446322,
"node_id": "MDQ6VXNlcjg0NDYzMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8446322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xshaun",
"html_url": "https://github.com/xshaun",
"followers_url": "https://api.github.com/users/xshaun/followers",
"following_url": "https://api.github.com/users/xshaun/following{/other_user}",
"gists_url": "https://api.github.com/users/xshaun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xshaun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xshaun/subscriptions",
"organizations_url": "https://api.github.com/users/xshaun/orgs",
"repos_url": "https://api.github.com/users/xshaun/repos",
"events_url": "https://api.github.com/users/xshaun/events{/privacy}",
"received_events_url": "https://api.github.com/users/xshaun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24381). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
When applying pipeline parallelism, the index of the layers converted from transformers to Megatron is wrong because the offset is not applied.
For example, with 4 layers and a pipeline parallel size of 2, the result should look like `layers.0 + layers.1` and `layers.2 + layers.3`, but currently it is `layers.0 + layers.1` and `layers.0 + layers.1`, because the code should use `pp_layer_id`, which is calculated as `layer + offset`, instead of `layer`, which is only the index of the range loop.
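A minimal sketch of the indexing described above (loop variable names and helpers such as `convert_layer` and `hf_layers` are hypothetical placeholders based on the description, not the actual diff):
```py
# For each pipeline-parallel rank, layers must be taken from the full layer
# list at `layer + offset`, not at the bare loop index `layer`.
for pp_rank in range(pp_size):
    offset = pp_rank * num_layers_per_rank
    for layer in range(num_layers_per_rank):
        pp_layer_id = layer + offset            # index into the transformers layer list
        convert_layer(hf_layers[pp_layer_id])   # previously hf_layers[layer] was used by mistake
```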
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24381/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24381",
"html_url": "https://github.com/huggingface/transformers/pull/24381",
"diff_url": "https://github.com/huggingface/transformers/pull/24381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24381.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24380/events
|
https://github.com/huggingface/transformers/issues/24380
| 1,765,798,908 |
I_kwDOCUB6oc5pP_P8
| 24,380 |
WavLM error when running forward
|
{
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think there's a small typo with your codesnippet:\r\n```diff\r\n- input = fe(audio, return_tensor=\"pt\")\r\n+ input = fe(audio, return_tensors=\"pt\")\r\n```\r\n\r\nE.g. running the following works for me:\r\n```python\r\nfrom transformers import AutoModel, AutoFeatureExtractor\r\nimport torch\r\nimport numpy as np\r\n\r\nmodel = AutoModel.from_pretrained(\"microsoft/wavlm-base-plus\")\r\nfe = AutoFeatureExtractor.from_pretrained(\"microsoft/wavlm-base-plus\")\r\n\r\naudio = np.random.randn(16000) # random 1 second input audio\r\n\r\ninput = fe(audio, sampling_rate=16000, return_tensors=\"pt\")\r\n\r\nwith torch.no_grad():\r\n model(input_values=input[\"input_values\"])\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @MorenoLaQuatra - did the above suggestion fix your issue? Feel free to close this thread if so",
"Well, actually my problem is more linked with the hidden_states when using output_hidden_states=True (the typo was my fault when reporting the snippet here on GitHub). However, I cannot reproduce it at the moment, so I will close for now.\r\n\r\nThanks @sanchit-gandhi !",
"Thanks for clarifying @MorenoLaQuatra! Feel free to open the issue again with a code repro if you find the model isn't working and we can take a deeper dive into it"
] | 1,687 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
### System Info
**Relevant Libraries**
transformers==4.26.1
torchaudio==2.0.2
torch==2.0.1
OS: Ubuntu 20.04
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
from transformers import AutoModel, AutoFeatureExtractor
model = AutoModel.from_pretrained("microsoft/wavlm-base-plus")
fe = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
audio_path = "..."
audio, sr = torchaudio.load(audio_path)
input = fe(audio, return_tensor="pt")
model(input_values=input["input_values"])
```
---
When I try running the previous code, I got the following error:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "PATH_TO_MY_ENV_SITE_PACKAGES/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "PATH_TO_MY_ENV_SITE_PACKAGES/transformers/models/wavlm/modeling_wavlm.py", line 1229, in forward
extract_features = self.feature_extractor(input_values)
File "PATH_TO_MY_ENV_SITE_PACKAGES/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "PATH_TO_MY_ENV_SITE_PACKAGES/transformers/models/wavlm/modeling_wavlm.py", line 346, in forward
hidden_states = input_values[:, None]
TypeError: list indices must be integers or slices, not tuple
```
### Expected behavior
Get the output with `last_hidden_state` and the other fields. This error does not happen with HuBERT or Wav2Vec2.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24380/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24380/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24379/events
|
https://github.com/huggingface/transformers/issues/24379
| 1,765,783,928 |
I_kwDOCUB6oc5pP7l4
| 24,379 |
`Trainer.hyperparameter_search` doesn't document `wandb` or offer it as a default backend
|
{
"login": "alexmojaki",
"id": 3627481,
"node_id": "MDQ6VXNlcjM2Mjc0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3627481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmojaki",
"html_url": "https://github.com/alexmojaki",
"followers_url": "https://api.github.com/users/alexmojaki/followers",
"following_url": "https://api.github.com/users/alexmojaki/following{/other_user}",
"gists_url": "https://api.github.com/users/alexmojaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexmojaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexmojaki/subscriptions",
"organizations_url": "https://api.github.com/users/alexmojaki/orgs",
"repos_url": "https://api.github.com/users/alexmojaki/repos",
"events_url": "https://api.github.com/users/alexmojaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexmojaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Those are all integrations maintained by the authors of those libraries, we do not maintain them ourselves. It might be a bug, but it's up to the wandb folks to fix it in this case :-) ",
"Even the glue code in `trainer.py` that ties the various backends together?\r\n\r\nWould you accept a PR to refactor this stuff? For example this code:\r\n\r\n```python\r\n if backend is None:\r\n backend = default_hp_search_backend()\r\n if backend is None:\r\n raise RuntimeError(\r\n \"At least one of optuna or ray should be installed. \"\r\n \"To install optuna run `pip install optuna`. \"\r\n \"To install ray run `pip install ray[tune]`. \"\r\n \"To install sigopt run `pip install sigopt`.\"\r\n )\r\n backend = HPSearchBackend(backend)\r\n if backend == HPSearchBackend.OPTUNA and not is_optuna_available():\r\n raise RuntimeError(\"You picked the optuna backend, but it is not installed. Use `pip install optuna`.\")\r\n if backend == HPSearchBackend.RAY and not is_ray_tune_available():\r\n raise RuntimeError(\r\n \"You picked the Ray Tune backend, but it is not installed. Use `pip install 'ray[tune]'`.\"\r\n )\r\n if backend == HPSearchBackend.SIGOPT and not is_sigopt_available():\r\n raise RuntimeError(\"You picked the sigopt backend, but it is not installed. Use `pip install sigopt`.\")\r\n if backend == HPSearchBackend.WANDB and not is_wandb_available():\r\n raise RuntimeError(\"You picked the wandb backend, but it is not installed. Use `pip install wandb`.\")\r\n```\r\n\r\ncontains a lot of repetition that I'd be happy to clean up, and it's easy to see how the wandb integration author missed a place to add a reference to wandb.",
"The first bit with the runtime error is fine (though missing wandb). For the rest, it should be done in each integration which normally error very fast if the corresponding lib is not installed."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
`Trainer.hyperparameter_search` seems to prioritise the `optuna/ray/sigopt` backends, while `wandb` almost seems like a second-class citizen in the code. Specifically, the docstring explicitly mentions the first three backends multiple times in different contexts but not `wandb`, and `default_hp_search_backend` won't return `wandb` even if it's available. Is this intentional or accidental?
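For reference, a hedged example (argument values are illustrative, and it assumes a `trainer` created with `model_init` and a suitable `hp_space`) of selecting wandb explicitly, since it is not returned by `default_hp_search_backend`:
```python
# wandb must be requested explicitly via the `backend` argument; it is not
# picked up automatically even when the library is installed.
best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="wandb",
    n_trials=10,
)
```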
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24379/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24382/events
|
https://github.com/huggingface/transformers/issues/24382
| 1,765,922,167 |
I_kwDOCUB6oc5pQdV3
| 24,382 |
Some links in NLLB page are broken.
|
{
"login": "ranggihwang",
"id": 50730045,
"node_id": "MDQ6VXNlcjUwNzMwMDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/50730045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranggihwang",
"html_url": "https://github.com/ranggihwang",
"followers_url": "https://api.github.com/users/ranggihwang/followers",
"following_url": "https://api.github.com/users/ranggihwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ranggihwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranggihwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranggihwang/subscriptions",
"organizations_url": "https://api.github.com/users/ranggihwang/orgs",
"repos_url": "https://api.github.com/users/ranggihwang/repos",
"events_url": "https://api.github.com/users/ranggihwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranggihwang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @stevhliu ",
"Doc in version 4.30.0 still has problems, not only the doc section but other links :( @stevhliu ",
"The fix is on main, so it will only be reflected on the main version of the documentation. And if you have found other links to fix, please do tell us or open directly PRs to fix them :-)"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
Hi, this is just a quick notification about broken links.
https://huggingface.co/docs/transformers/v4.30.0/model_doc/nllb-moe
On this page, two links in the "documentation resources" are broken.
Thank you.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24382/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24378/events
|
https://github.com/huggingface/transformers/pull/24378
| 1,765,743,865 |
PR_kwDOCUB6oc5TdNIy
| 24,378 |
Skip a tapas (tokenization) test in past CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24378). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Same as in #24251, where 1 test (from the tokenization test file) was missed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24378/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24378",
"html_url": "https://github.com/huggingface/transformers/pull/24378",
"diff_url": "https://github.com/huggingface/transformers/pull/24378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24378.patch",
"merged_at": 1687278945000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24377/events
|
https://github.com/huggingface/transformers/pull/24377
| 1,765,709,257 |
PR_kwDOCUB6oc5TdFyF
| 24,377 |
Better test name and enable pipeline test for `pix2struct`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
In #24364, the `pix2struct` test file didn't get its `pipeline_model_mapping` updated because the heuristic for finding the right test class (picking the shortest test class name) didn't work well.
Let's give the test class a better, shorter, clearer name, `Pix2StructModelTest` instead of `Pix2StructTextImageModelTest` (even though we don't really have a base model).
This lets the `add_pipeline_model_mapping_to_test.py` script work for `pix2struct`, so the pipeline tests get run.
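For illustration, here is a minimal sketch of the shortest-name heuristic mentioned above; the helper name is hypothetical and not the script's actual API.
```python
# Hypothetical helper, not the script's real API: among the test classes found in a
# model's test file, pick the one with the shortest name as the "base" test class to
# attach `pipeline_model_mapping` to.
def pick_base_test_class(test_class_names):
    # e.g. ["Pix2StructTextImageModelTest", "Pix2StructModelTest"] -> "Pix2StructModelTest"
    return min(test_class_names, key=len)
```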
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24377/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24377",
"html_url": "https://github.com/huggingface/transformers/pull/24377",
"diff_url": "https://github.com/huggingface/transformers/pull/24377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24377.patch",
"merged_at": 1687278571000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24376/events
|
https://github.com/huggingface/transformers/pull/24376
| 1,765,674,035 |
PR_kwDOCUB6oc5Tc-8s
| 24,376 |
Migrate doc files to Markdown.
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'd add a :warning: emoji as a prefix for the disclaimer, but other than that it looks good to me!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
The new UI in GitHub makes MDX pretty hard to read for diffs, so this PR migrates the doc files from mdx to md. This shouldn't break anything in the doc-builder.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24376/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24376",
"html_url": "https://github.com/huggingface/transformers/pull/24376",
"diff_url": "https://github.com/huggingface/transformers/pull/24376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24376.patch",
"merged_at": 1687298868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24375/events
|
https://github.com/huggingface/transformers/pull/24375
| 1,765,667,213 |
PR_kwDOCUB6oc5Tc9es
| 24,375 |
TF LLaMA Port
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24375). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
MEMBER
| null |
This is an autoconversion of the LLaMA code to TF by GPT-4. As always, expect things to be broken until I finish debugging it!
TODO list:
- [ ] Get tests to pass
- [ ] No `MainLayer` - we shouldn't need it! Make sure weight naming can still be controlled.
- [ ] Explore full `float16` weights
- [ ] Explore passing `DTensor` layouts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24375/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24375",
"html_url": "https://github.com/huggingface/transformers/pull/24375",
"diff_url": "https://github.com/huggingface/transformers/pull/24375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24375.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24374
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24374/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24374/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24374/events
|
https://github.com/huggingface/transformers/pull/24374
| 1,765,665,393 |
PR_kwDOCUB6oc5Tc9HC
| 24,374 |
Rename test to be more accurate
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24374). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Tiny fix, but this integration test actually tests Finnish-to-English translation, so let's name it accordingly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24374/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24374",
"html_url": "https://github.com/huggingface/transformers/pull/24374",
"diff_url": "https://github.com/huggingface/transformers/pull/24374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24374.patch",
"merged_at": 1687276495000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24373
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24373/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24373/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24373/events
|
https://github.com/huggingface/transformers/pull/24373
| 1,765,632,291 |
PR_kwDOCUB6oc5Tc2eu
| 24,373 |
Add a check in `ImageToTextPipeline._forward`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Inside `ImageToTextPipeline.preprocess`, we have
```python
if self.model.config.model_type == "git" and prompt is None:
model_inputs["input_ids"] = None
```
So `input_ids` may end up as a list of `None` values (for the GIT model), and `_forward` fails.
This PR adds a check and converts that case to a single `None` value to avoid the failure.
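A minimal sketch of the kind of check this implies (illustrative only, not the exact patch):
```python
# Illustrative only: if `input_ids` was expanded into a list that only contains `None`
# (the GIT case above), collapse it back to a single `None` before calling the model.
model_inputs = {"input_ids": [None, None]}  # placeholder for what `_forward` receives
if isinstance(model_inputs["input_ids"], list) and all(
    x is None for x in model_inputs["input_ids"]
):
    model_inputs["input_ids"] = None
```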
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24373/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24373",
"html_url": "https://github.com/huggingface/transformers/pull/24373",
"diff_url": "https://github.com/huggingface/transformers/pull/24373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24373.patch",
"merged_at": 1687277254000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24372
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24372/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24372/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24372/events
|
https://github.com/huggingface/transformers/issues/24372
| 1,765,495,469 |
I_kwDOCUB6oc5pO1Kt
| 24,372 |
`resize_token_embeddings` breaks `gpt2` generation
|
{
"login": "vwxyzjn",
"id": 5555347,
"node_id": "MDQ6VXNlcjU1NTUzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5555347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vwxyzjn",
"html_url": "https://github.com/vwxyzjn",
"followers_url": "https://api.github.com/users/vwxyzjn/followers",
"following_url": "https://api.github.com/users/vwxyzjn/following{/other_user}",
"gists_url": "https://api.github.com/users/vwxyzjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vwxyzjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vwxyzjn/subscriptions",
"organizations_url": "https://api.github.com/users/vwxyzjn/orgs",
"repos_url": "https://api.github.com/users/vwxyzjn/repos",
"events_url": "https://api.github.com/users/vwxyzjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/vwxyzjn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You are adding a random line in the model. Without fine-tuning it there is no reason for it to continue working."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
```
- `transformers` version: 4.30.1
- Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.2.5
- Python version: 3.8.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
Maybe @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
device = "cpu"
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
pad_token_id=tokenizer.pad_token_id,
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
###
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
device = "cpu"
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
pad_token_id=tokenizer.pad_token_id,
)
print("Output2:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
###
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
device = "cpu"
pretrained_model.resize_token_embeddings(len(tokenizer))
# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
)
print("Output3:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
```
```
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.
I'm not sure if I'll ever be able to walk with my
Output2:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.
I'm not sure if I'll ever be able to walk with my
Output3:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog[PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD]
```
### Expected behavior
According to https://stackoverflow.com/a/69194717/6611317,
```
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
pretrained_model.resize_token_embeddings(len(tokenizer))
```
this should work as expected and not completely break generation.
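For reference, one workaround I am assuming would help (not something the library does automatically) is to initialize the newly added `[PAD]` row instead of leaving it random, continuing the snippet above:
```python
# Assumed workaround, not an official fix: after resizing, set the newly added [PAD]
# embedding row to the mean of the pre-existing rows so generation is not dominated
# by a randomly initialized vector. Uses `pretrained_model` from the snippet above.
embeddings = pretrained_model.get_input_embeddings().weight.data
embeddings[-1] = embeddings[:-1].mean(dim=0)
```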
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24372/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24371
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24371/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24371/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24371/events
|
https://github.com/huggingface/transformers/issues/24371
| 1,765,441,743 |
I_kwDOCUB6oc5pOoDP
| 24,371 |
Future compatibility with LangChain
|
{
"login": "LucasMartinCalderon",
"id": 25382998,
"node_id": "MDQ6VXNlcjI1MzgyOTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25382998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucasMartinCalderon",
"html_url": "https://github.com/LucasMartinCalderon",
"followers_url": "https://api.github.com/users/LucasMartinCalderon/followers",
"following_url": "https://api.github.com/users/LucasMartinCalderon/following{/other_user}",
"gists_url": "https://api.github.com/users/LucasMartinCalderon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LucasMartinCalderon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LucasMartinCalderon/subscriptions",
"organizations_url": "https://api.github.com/users/LucasMartinCalderon/orgs",
"repos_url": "https://api.github.com/users/LucasMartinCalderon/repos",
"events_url": "https://api.github.com/users/LucasMartinCalderon/events{/privacy}",
"received_events_url": "https://api.github.com/users/LucasMartinCalderon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
Is there a specific timeline for future LLM agent compatibility with LangChain?
What other compatibility solutions with LangChain currently exist?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24371/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24369
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24369/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24369/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24369/events
|
https://github.com/huggingface/transformers/issues/24369
| 1,765,063,421 |
I_kwDOCUB6oc5pNLr9
| 24,369 |
Additional option for text generation when setting num_beam_groups
|
{
"login": "hukuda222",
"id": 21185928,
"node_id": "MDQ6VXNlcjIxMTg1OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21185928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hukuda222",
"html_url": "https://github.com/hukuda222",
"followers_url": "https://api.github.com/users/hukuda222/followers",
"following_url": "https://api.github.com/users/hukuda222/following{/other_user}",
"gists_url": "https://api.github.com/users/hukuda222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hukuda222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hukuda222/subscriptions",
"organizations_url": "https://api.github.com/users/hukuda222/orgs",
"repos_url": "https://api.github.com/users/hukuda222/repos",
"events_url": "https://api.github.com/users/hukuda222/events{/privacy}",
"received_events_url": "https://api.github.com/users/hukuda222/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hey @hukuda222 \r\n\r\n`generate` is already a configuration behemoth, and we would be adding one more flag. By default, we are reluctant to add more flags unless the benefits are large OR there is demand for the option. As such, I'm going to propose the same as I do in similar issues ([e.g.](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997))!\r\n\r\nIf this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, I'll greenlight the suggestion, and let's add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)\r\n\r\n@hukuda222 does that sound good to you?",
"@gante Sounds good. Thank you for your quick response.",
"@gante \r\nSorry for the delay. I thought the output I presented earlier might be a bug in the current code. Diverse beam search is a method that generates `num_beams//num_beam_groups` sentences for each group independently. However, the current code uses one BeamHypotheses shared by all groups. Therefore, group A will generate two sentences before group B outputs a sentence.\r\n\r\nhttps://github.com/huggingface/transformers/blob/ad78d9597b224443e9fe65a94acc8c0bc48cd039/src/transformers/generation/beam_search.py#L178-L186\r\n\r\nThis is a problem that can be solved by creating as many BeamHypotheses as there are groups. I would like to address this inconvenience in the form of a modification to the diverse beam search implementation, rather than adding an option. If you don't mind, could you give me your opinion?",
"Hey @hukuda222 👋 \r\n\r\nI've had a deeper look at group beam search, and it doesn't not seem to be working properly. For instance, the snippet below produces the same sequence on all beams, and that should not happen (each beam should generate different continuations).\r\n\r\nI don't have the bandwidth to fix it immediately, so if you're able to contribute we'd deeply appreciate 🙌 Since it is broken (and thus no retrocompatibility needs to be respected), feel free to also change the behavior of `num_return_sequences` in group beam search to prioritize returning from different beams, which makes more sense.\r\n\r\n___________________________________\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\ninputs = tokenizer([\"The full name of Donald is Donald\"], return_tensors=\"pt\")\r\n\r\noutputs = model.generate(**inputs, num_beams=4, num_beam_groups=4, num_return_sequences=4)\r\nprint(\"\\n\".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)))\r\n# Outputs the following sequence 4 times. Each beam should return different sequences.\r\n# The full name of Donald is Donald J. Trump Jr. The full name of Donald is Donald J\r\n```",
"@gante \r\nThanks for doing the research. I will send PR as soon as I can fix it."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### Feature request
I propose adding `num_return_sequences_per_groups` as an argument to the `generate` function of `transformers.GenerationMixin`. Setting it will output `num_return_sequences_per_groups` sentences per group. A use case is as follows:
code:
```python
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=3,
num_beams=12,
diversity_penalty=1.0,
num_return_sequences_per_groups=2,
)
for output in outputs:
print(tokenizer.decode(output, skip_special_tokens=True))
```
output:
```
A flock of birds flying over the ocean.
A flock of birds flying over a beach.
Birds flying over the water in the sun.
Birds flying the water near a mountain.
Several birds are flying over a body of water.
Several birds flying over a body of water.
```
The example refers to https://arxiv.org/abs/1610.02424 .
### Motivation
As shown below, the outputs may differ very little when `num_beam_groups` and `num_beams` are set to the same value.
code:
```python
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration."
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=2,
num_beams=2,
diversity_penalty=1000000.0,
num_return_sequences=2,
)
for output in outputs:
print(tokenizer.decode(output, skip_special_tokens=True))
```
output:
```
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences..
```
This problem occurs because the beam search implementation searches over `num_beams * 2` candidates. Such output is undesirable.
This example is only for clarity: even in the general case, the current implementation does not guarantee diversity because the outputs are ordered by score. Therefore, I would like this option to make a diverse set of outputs possible.
### Your contribution
If it looks good, I will implement it.
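For clarity, a rough sketch of the selection rule I have in mind; the names and data shapes here are placeholders, not the actual beam-search internals.
```python
# Placeholder sketch: instead of returning the overall top-scoring hypotheses, return
# the top `num_return_sequences_per_groups` hypotheses from each beam group separately.
def select_per_group(hypotheses_by_group, num_return_sequences_per_groups):
    selected = []
    for group in hypotheses_by_group:  # each group is a list of (score, token_ids) pairs
        best = sorted(group, key=lambda h: h[0], reverse=True)
        selected.extend(best[:num_return_sequences_per_groups])
    return selected
```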
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24369/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24369/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24368
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24368/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24368/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24368/events
|
https://github.com/huggingface/transformers/pull/24368
| 1,764,971,867 |
PR_kwDOCUB6oc5Tan3Q
| 24,368 |
[Tokenizer doc] Clarification about `add_prefix_space`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This blog post on constrained decoding also uses `add_prefix_space` in `__call__` https://huggingface.co/blog/constrained-beam-search\r\n",
"The blog is not hosted on `transformers` but on `blog`, will open a PR for that too later on, thanks for the catch 😉 "
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Addresses #17391; updates the documentation that suggested using `add_prefix_space` when calling the tokenizer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24368/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24368",
"html_url": "https://github.com/huggingface/transformers/pull/24368",
"diff_url": "https://github.com/huggingface/transformers/pull/24368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24368.patch",
"merged_at": 1687278120000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24367
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24367/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24367/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24367/events
|
https://github.com/huggingface/transformers/pull/24367
| 1,764,933,368 |
PR_kwDOCUB6oc5Tafjx
| 24,367 |
[Whisper Docs] Nits
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Addresses #24342, where it is mentioned that the documentation is counter-intuitive. Indeed, after a lot of changes, the default value for the `bos_token` that we use is different, so no official models (hosted on the Hub) use `bos_token = "<startoftranscript>"`
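A quick way to see this on an official checkpoint (the exact value printed depends on the checkpoint's tokenizer config):
```python
from transformers import WhisperTokenizer

# The BOS token configured on the Hub checkpoints is not "<startoftranscript>".
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
print(tokenizer.bos_token)
```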
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24367/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24367",
"html_url": "https://github.com/huggingface/transformers/pull/24367",
"diff_url": "https://github.com/huggingface/transformers/pull/24367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24367.patch",
"merged_at": 1687281532000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24366
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24366/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24366/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24366/events
|
https://github.com/huggingface/transformers/issues/24366
| 1,764,835,743 |
I_kwDOCUB6oc5pMUGf
| 24,366 |
Format of the documentation (documentation format and readability issues)
|
{
"login": "KKIverson",
"id": 49222488,
"node_id": "MDQ6VXNlcjQ5MjIyNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/49222488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KKIverson",
"html_url": "https://github.com/KKIverson",
"followers_url": "https://api.github.com/users/KKIverson/followers",
"following_url": "https://api.github.com/users/KKIverson/following{/other_user}",
"gists_url": "https://api.github.com/users/KKIverson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KKIverson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KKIverson/subscriptions",
"organizations_url": "https://api.github.com/users/KKIverson/orgs",
"repos_url": "https://api.github.com/users/KKIverson/repos",
"events_url": "https://api.github.com/users/KKIverson/events{/privacy}",
"received_events_url": "https://api.github.com/users/KKIverson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"sorry,我的浏览器兼容性问题。\r\nMy bad!!!!"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
The Hugging Face documentation formatting really looks messy and does not read smoothly; I strongly suggest the team improve it:
For example: https://huggingface.co/docs/transformers/installation
1. There are almost no separators between paragraphs, no headings, and the font size, line spacing, and so on are poor;
2. Hyperlinks look almost the same as normal text, so they are easy to click by mistake;
3. The code blocks are not very readable either; they look like they were written in an interactive IDE, which makes them inconvenient for beginners to copy.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I hope this can be improved.
### Expected behavior
I hope this can be improved.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24366/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24365
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24365/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24365/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24365/events
|
https://github.com/huggingface/transformers/issues/24365
| 1,764,824,885 |
I_kwDOCUB6oc5pMRc1
| 24,365 |
ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`.
|
{
"login": "ErHimani",
"id": 62006705,
"node_id": "MDQ6VXNlcjYyMDA2NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/62006705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErHimani",
"html_url": "https://github.com/ErHimani",
"followers_url": "https://api.github.com/users/ErHimani/followers",
"following_url": "https://api.github.com/users/ErHimani/following{/other_user}",
"gists_url": "https://api.github.com/users/ErHimani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErHimani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErHimani/subscriptions",
"organizations_url": "https://api.github.com/users/ErHimani/orgs",
"repos_url": "https://api.github.com/users/ErHimani/repos",
"events_url": "https://api.github.com/users/ErHimani/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErHimani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ErHimani, thanks for raising an issue. \r\n\r\nCould you follow the issue template and provide: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet so that was can reproduce the error\r\n\r\nWithout these, we're unable to help.",
"hii @amyeroberts I am using the following command:\r\n`python examples\\tensorflow\\contrastive-image-text\\run_clip.py --output_dir .\\clip-roberta-finetuned --vision_model_name_or_path openai/clip-vit-base-patch32 --text_model_name_or_path roberta-base --train_file descriptions.json --image_column image_path --caption_column text --remove_unused_columns=False --do_train --per_device_train_batch_size=\"64\" --per_device_eval_batch_size=\"64\" --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1`",
"@ErHimani Thanks for providing this. In order for us to be able to reproduce, we'll need `descriptions.json`, or an example sample from the dataset to be able to reproduce. We also require the running environment information, as noted above. ",
"@amyeroberts Please find Link of the custom dataset\r\ndescription.json:[https://drive.google.com/file/d/14FGJwXRsxns679-ILGlLcBRpqe8UUmGu/view?usp=sharing](url)\r\nImages:[https://drive.google.com/drive/folders/1yr8zapcCPdxlN-5ZSczOIiIeIyS3K_Vt?usp=sharing](url)",
"@ErHimani Apologies for the delay in getting back to this issue. The links to the datasets are no longer working. Is the issue still persisting with the latest transformers release? If so, would you be willing to re-share the example dataset? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,701 | 1,701 |
NONE
| null |
### System Info
I am trying to train the CLIP model with my custom dataset, but I am facing the above issue.
My current version of TensorFlow is 2.12.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

### Expected behavior
Please provide guidance on how I can use the code to train on a custom dataset.
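As a debugging step (following the suggestion in the error message itself), running eagerly should surface a full Python traceback for the failing step:
```python
import tensorflow as tf

# Run functions eagerly so the failing train step raises a normal Python traceback
# instead of the opaque "Empty logs" error.
tf.config.run_functions_eagerly(True)
```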
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24365/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24364
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24364/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24364/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24364/events
|
https://github.com/huggingface/transformers/pull/24364
| 1,764,014,081 |
PR_kwDOCUB6oc5TXcTT
| 24,364 |
Update tiny models for pipeline testing.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Update tiny models for pipeline testing.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24364/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24364",
"html_url": "https://github.com/huggingface/transformers/pull/24364",
"diff_url": "https://github.com/huggingface/transformers/pull/24364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24364.patch",
"merged_at": 1687264990000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24363
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24363/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24363/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24363/events
|
https://github.com/huggingface/transformers/pull/24363
| 1,763,940,101 |
PR_kwDOCUB6oc5TXM8I
| 24,363 |
[modelcard] add audio classification to task list
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds audio classification to the model card task list, thus enabling model cards to be created for this task (required for https://github.com/huggingface/audio-transformers-course/pull/46)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24363/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24363",
"html_url": "https://github.com/huggingface/transformers/pull/24363",
"diff_url": "https://github.com/huggingface/transformers/pull/24363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24363.patch",
"merged_at": 1687266078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24362
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24362/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24362/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24362/events
|
https://github.com/huggingface/transformers/issues/24362
| 1,763,889,516 |
I_kwDOCUB6oc5pItFs
| 24,362 |
For loop support in python interpreter of Transformer agent
|
{
"login": "dcy0577",
"id": 50020414,
"node_id": "MDQ6VXNlcjUwMDIwNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/50020414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcy0577",
"html_url": "https://github.com/dcy0577",
"followers_url": "https://api.github.com/users/dcy0577/followers",
"following_url": "https://api.github.com/users/dcy0577/following{/other_user}",
"gists_url": "https://api.github.com/users/dcy0577/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcy0577/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcy0577/subscriptions",
"organizations_url": "https://api.github.com/users/dcy0577/orgs",
"repos_url": "https://api.github.com/users/dcy0577/repos",
"events_url": "https://api.github.com/users/dcy0577/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcy0577/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger ",
"Would you like to open a PR for this?",
"I would like to, but I don't know how to implement it yet. Do you have any suggestions?",
"Should be added with the PR mentioned above :-)"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### Feature request
Hello, I would like to add for-loop support in https://github.com/huggingface/transformers/blame/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/tools/python_interpreter.py
Any ideas about how to implement this?
### Motivation
For loops are quite common in generated code, and they usually do not cause infinite loops.
### Your contribution
An additional `elif` branch for `ast.For` before the `else` would be nice: https://github.com/huggingface/transformers/blame/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/tools/python_interpreter.py#L132
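A rough sketch of what that branch could do; the `evaluate` callable and `state` dict stand in for the interpreter's internals and are assumptions here, not its actual API.
```python
import ast

# Assumed shape of the interpreter: `evaluate(node, tools, state)` evaluates a sub-node
# and `state` holds variable bindings. This only handles the simple `for <name> in <iter>` case.
def evaluate_for(node: ast.For, evaluate, tools: dict, state: dict):
    result = None
    for value in evaluate(node.iter, tools, state):
        state[node.target.id] = value  # bind the loop variable
        for statement in node.body:
            result = evaluate(statement, tools, state)
    return result
```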
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24362/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24361
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24361/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24361/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24361/events
|
https://github.com/huggingface/transformers/issues/24361
| 1,763,790,540 |
I_kwDOCUB6oc5pIU7M
| 24,361 |
Accelerate preprocessing crashing due to non-tensor input
|
{
"login": "ElleLeonne",
"id": 87243032,
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElleLeonne",
"html_url": "https://github.com/ElleLeonne",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ElleLeonne, \r\n\r\nSo that we can best help, could you share a minimal code snippet so that we can reproduce the error and information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?",
"Out of town today, but I'll have something shortly.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm also getting this error when trying to use data that isn't a tensor. @ElleLeonne did you ever find a solution to this?",
"@irowberryFS If you could provide a reproducer for this issue, the full error traceback and information about the running environment then we'll be able to look into it. ",
"@amyeroberts I couldn't reproduce it using a small training script. For the small script I am using PyTorch Geometric datasets, and their DataLoaders and Accelerator worked fine. Unfortunately all the other threads about this issue don't contain any snippets of training code. I believe it has to do with the `IterableDataset` as described in this issue #26548 . I have temporarily bypassed the issue by not wrapping the DataLoader in the `accelerator.prepare()` call. Here is my dataset processing code. \r\n\r\n```\r\ndataset = load_dataset(\"parquet\", data_dir=\"../../mobTrain/\", streaming=True)['train']\r\ndataset = dataset.shuffle(seed=42)\r\n\r\ndef convert_mobs(row):\r\n row['mob1'] = process_mob(row['mob1'])\r\n row['mob2'] = process_mob(row['mob2'])\r\n return row\r\n\r\ndataset = dataset.map(convert_mobs, remove_columns=['...', '...'])\r\ndl = DataLoader(dataset, batch_size)\r\n```\r\nHere is the traceback\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/username/projects/GraphBlocking/train_model.py\", line 121, in <module>\r\n main()\r\n File \"/home/username/projects/GraphBlocking/train_model.py\", line 118, in main\r\n train()\r\n File \"/home/username/projects/GraphBlocking/train_model.py\", line 93, in train\r\n for batch in tepoch:\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/tqdm/std.py\", line 1178, in __iter__\r\n for obj in iterable:\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/accelerate/data_loader.py\", line 639, in __iter__\r\n next_batch, next_batch_info = self._fetch_batches(main_iterator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/accelerate/data_loader.py\", line 602, in _fetch_batches\r\n batch = concatenate(batches, dim=0)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 530, in concatenate\r\n return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 530, in <dictcomp>\r\n return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/username/projects/venv/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 532, in concatenate\r\n raise TypeError(f\"Can only concatenate tensors but got {type(data[0])}\")\r\nTypeError: Can only concatenate tensors but got <class 'torch_geometric.data.batch.HeteroDataBatch'>\r\n```\r\nThe `DataLoader` is a PyG wrapper of the PyTorch `DataLoader`. I access the data in the typical way of `for batch in dataloader:` Again, I believe the issue is using Accelerate with an `IterableDataset`",
"@irowberryFS Have you tried any the suggestions in [this comment](https://github.com/huggingface/transformers/issues/26548#issuecomment-1885798533) or the rest of the thread you linked to? ",
"@amyeroberts yes. I set `dispatch_batches=False` and it fixed the issue. ",
"@irowberryFS Thanks for reporting back! "
] | 1,687 | 1,705 | 1,690 |
NONE
| null |
### System Info
I believe a recent update has caused Accelerate to try to concatenate all tensor data in the input dictionary.
This is a problem because my inputs contain non-tensor data, which is intermittent and not always provided in the batch.
Rather than skipping over this information, Accelerate instead tries to `torch.cat` the data, which results in crashes.
```
Traceback (most recent call last):
File "/home/lily/Desktop/Project/finetune_dynamic.py", line 304, in <module>
fire.Fire(train)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/lily/Desktop/Emme (copy)/finetune_dynamic.py", line 295, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/transformers/trainer.py", line 1779, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/data_loader.py", line 553, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/data_loader.py", line 521, in _fetch_batches
batch = concatenate(batches, dim=0)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 413, in concatenate
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 413, in <dictcomp>
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in concatenate
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 84, in honor_type
return type(obj)(generator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in <genexpr>
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in concatenate
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 84, in honor_type
return type(obj)(generator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in <genexpr>
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 415, in concatenate
raise TypeError(f"Can only concatenate tensors but got {type(data[0])}")
TypeError: Can only concatenate tensors but got <class 'str'>
```
I'd like to know if there's a way to turn off this feature in Accelerate. I can handle batching my own data (and have been doing so up until now); it's only now that this has become a problem.
Issue raised on accelerate as well:
https://github.com/huggingface/accelerate/issues/1611
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduce:
1: Create a model that accepts non-tensor inputs as a nested list
2: Feed the input to the model via the Hugging Face Trainer
3: Observe the crash
### Expected behavior
Accelerate should ignore the data that it can't process, and pass it to the model as normal.
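As a reference, below is a minimal, hypothetical sketch of the `dispatch_batches=False` workaround mentioned in the comments above, written against the Accelerate API available around the time of this issue; the toy dataset and field names are placeholders, not code from this report.
```python
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator

# Toy batches that mix tensors and strings, like the inputs described above.
data = [{"input_ids": torch.ones(4, dtype=torch.long), "meta": f"sample-{i}"} for i in range(8)]

def collate(batch):
    return {
        "input_ids": torch.stack([b["input_ids"] for b in batch]),
        "meta": [b["meta"] for b in batch],  # non-tensor field, kept as a list of strings
    }

loader = DataLoader(data, batch_size=2, collate_fn=collate)

# dispatch_batches=False keeps the per-process batches exactly as the DataLoader
# built them, so Accelerate does not try to torch.cat the non-tensor "meta" field.
accelerator = Accelerator(dispatch_batches=False)
loader = accelerator.prepare(loader)

for batch in loader:
    print(batch["meta"])
```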
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24361/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24360
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24360/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24360/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24360/events
|
https://github.com/huggingface/transformers/pull/24360
| 1,763,767,703 |
PR_kwDOCUB6oc5TWnxS
| 24,360 |
TensorFlow CI fixes
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
MEMBER
| null |
I made a lot of changes to the TF tests, and this exposed a few issues. This PR fixes all the exposed issues, so hopefully after this the only remaining CI issues should be related to generation or the `SharedEmbeddings` refactor.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24360/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24360",
"html_url": "https://github.com/huggingface/transformers/pull/24360",
"diff_url": "https://github.com/huggingface/transformers/pull/24360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24360.patch",
"merged_at": 1687262362000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24359
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24359/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24359/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24359/events
|
https://github.com/huggingface/transformers/issues/24359
| 1,763,753,112 |
I_kwDOCUB6oc5pILyY
| 24,359 |
ValueError: Found `optimizer` configured in the DeepSpeed config, but no `scheduler`. Please configure a scheduler in the DeepSpeed config.
|
{
"login": "luohao123",
"id": 49749220,
"node_id": "MDQ6VXNlcjQ5NzQ5MjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/49749220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luohao123",
"html_url": "https://github.com/luohao123",
"followers_url": "https://api.github.com/users/luohao123/followers",
"following_url": "https://api.github.com/users/luohao123/following{/other_user}",
"gists_url": "https://api.github.com/users/luohao123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luohao123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luohao123/subscriptions",
"organizations_url": "https://api.github.com/users/luohao123/orgs",
"repos_url": "https://api.github.com/users/luohao123/repos",
"events_url": "https://api.github.com/users/luohao123/events{/privacy}",
"received_events_url": "https://api.github.com/users/luohao123/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @luohao123, \r\n\r\nSo that we can help you, could you follow the issue template and provide a minimal code snippet to reproduce the error and the running environment: run `transformers-cli env` in the terminal and copy-paste the output? \r\n\r\ncc @pacman100 ",
"**TLDR;** if you're in a rush, downgrading to version `<4.30` (4.29.2) worked for me\r\n\r\n**I've had the same issue 👇**\r\nI believe the previous behaviour allowed you to not include any DeepSpeed configuration `scheduler` key and the one specified in your `TrainerArguments` would be used. Now it seems you have to include the corresponding scheduler between DeepSpeed and Hugging Face `Trainer`.\r\n\r\ni.e.\r\n\r\n| DeepSpeed scheduler | Trainer scheduler | Resulting scheduler |\r\n| ----------- | ----------- | ----------- |\r\n| WarmupLR | constant_with_warmup | constant_with_warmup |\r\n| WarmupDecayLR | linear | linear |\r\n\r\nwhereas before you could just ignore the first column and leave it blank to get the same result\r\n\r\n| DeepSpeed scheduler | Trainer scheduler | Resulting scheduler |\r\n| ----------- | ----------- | ----------- |\r\n| | constant_with_warmup | constant_with_warmup |\r\n| | linear | linear |\r\n\r\npersonally, I found it handier before where I only had to specify the scheduler in one place rather than tracking this over a DeepSpeed config and a Trainer config which are generally separate objects.",
"Hello, the supported combinations now are:\r\n1. Trainer optimizer + Trainer scheduler - Don't specify these in the DS config and use trainer args\r\n2. DeepSpeed optimizer + DeeepSpeed Scheduler - Specify both in DeepSpeed config and no need to use/specify them via Trainer args (@jackapbutler, please note this as you happen to be doing both)\r\n3. Trainer optimizer + DeepSpeed Scheduler - Don't specify optimizer in DS config; only set the scheduler there. Don't specify the scheduler via Trainer args.\r\n\r\n@luohao123, the case you want is DeepSpeed Optimizer + Trainer Scheduler which isn't supported now. The suggested approach in your case would be to use `Trainer optimizer + Trainer scheduler` (Settting 1. above). \r\n\r\nHope this helps. \r\n\r\n",
"@pacman100 I actually got some errors when specifci via trainingargs with cosine scheduler while not specific in deepspeed config:\r\n\r\n```\r\n│ ❱ 485 │ │ self.initialize_optimizer_states() │\r\n│ 486 │ │ see_memory_usage(\"After initializing optimizer states\", force=True) │\r\n│ 487 │ │ │\r\n│ 488 │ │ if dist.get_rank() == 0: │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:620 in │\r\n│ initialize_optimizer_states │\r\n│ │\r\n│ 617 │ │ if isinstance(self.optimizer, torch.optim.Adagrad): │\r\n│ 618 │ │ │ self.optimizer = torch.optim.Adagrad(self.single_partition_of_fp32_groups, * │\r\n│ 619 │ │ else: │\r\n│ ❱ 620 │ │ │ self.optimizer.step() │\r\n│ 621 │ │ │\r\n│ 622 │ │ if not self.cpu_offload: │\r\n│ 623 │ │ │ for group in self.single_partition_of_fp32_groups: │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:69 in wrapper │\r\n│ │\r\n│ 66 │ │ │ │ instance = instance_ref() │\r\n│ 67 │ │ │ │ instance._step_count += 1 │\r\n│ 68 │ │ │ │ wrapped = func.__get__(instance, cls) │\r\n│ ❱ 69 │ │ │ │ return wrapped(*args, **kwargs) │\r\n│ 70 │ │ │ │\r\n│ 71 │ │ │ # Note that the returned function here is no longer a bound method, │\r\n│ 72 │ │ │ # so attributes like `__func__` and `__self__` no longer exist. │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:280 in wrapper │\r\n│ │\r\n│ 277 │ │ │ │ │ │ │ raise RuntimeError(f\"{func} must return None or a tuple of ( │\r\n│ 278 │ │ │ │ │ │ │ │ │ │ │ f\"but got {result}.\") │\r\n│ 279 │ │ │ │ │\r\n│ ❱ 280 │ │ │ │ out = func(*args, **kwargs) │\r\n│ 281 │ │ │ │ self._optimizer_step_code() │\r\n│ 282 │ │ │ │ │\r\n│ 283 │ │ │ │ # call optimizer step post hooks │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:33 in _use_grad │\r\n│ │\r\n│ 30 │ │ prev_grad = torch.is_grad_enabled() │\r\n│ 31 │ │ try: │\r\n│ 32 │ │ │ torch.set_grad_enabled(self.defaults['differentiable']) │\r\n│ ❱ 33 │ │ │ ret = func(self, *args, **kwargs) │\r\n│ 34 │ │ finally: │\r\n│ 35 │ │ │ torch.set_grad_enabled(prev_grad) │\r\n│ 36 │ │ return ret │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:171 in step │\r\n│ │\r\n│ 168 │ │ │ │ state_steps, │\r\n│ 169 │ │ │ ) │\r\n│ 170 │ │ │ │\r\n│ ❱ 171 │ │ │ adamw( │\r\n│ 172 │ │ │ │ params_with_grad, │\r\n│ 173 │ │ │ │ grads, │\r\n│ 174 │ │ │ │ exp_avgs, │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:321 in adamw │\r\n│ │\r\n│ 318 │ else: │\r\n│ 319 │ │ func = _single_tensor_adamw │\r\n│ 320 │ │\r\n│ ❱ 321 │ func( │\r\n│ 322 │ │ params, │\r\n│ 323 │ │ grads, │\r\n│ 324 │ │ exp_avgs, │\r\n│ │\r\n│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:564 in _multi_tensor_adamw │\r\n│ │\r\n│ 561 │ │ │ │ torch._foreach_div_(max_exp_avg_sq_sqrt, bias_correction2_sqrt) │\r\n│ 562 │ │ │ │ denom = torch._foreach_add(max_exp_avg_sq_sqrt, eps) │\r\n│ 563 │ │ │ else: │\r\n│ ❱ 564 │ │ │ │ exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) │\r\n│ 565 │ │ │ │ torch._foreach_div_(exp_avg_sq_sqrt, bias_correction2_sqrt) │\r\n│ 566 │ │ │ │ denom = torch._foreach_add(exp_avg_sq_sqrt, eps) │\r\n│ 567 │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider 
passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\nWhich is not right, A100, can u take a llok?\r\n\r\nthis is my ds config:\r\n\r\n```\r\n{\r\n \"zero_allow_untested_optimizer\": true,\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"opt_level\": \"O2\",\r\n \"initial_scale_power\": 16,\r\n \"loss_scale_window\": 1000,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1,\r\n \"loss_scale\": 0\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 5e8,\r\n \"overlap_comm\": false,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 5e8,\r\n \"contiguous_gradients\": true\r\n },\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"gradient_accumulation_steps\": \"auto\"\r\n}\r\n\r\n```\r\n\r\nthis is my training args:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=2,3 deepspeed --master_port 61000 train_full.py \\\r\n --data_path ./data/train_data.json \\\r\n --model_name_or_path ./checkpoints/baichuan-7B/ \\\r\n --per_device_train_batch_size 4 --output_dir out/bc_full \\\r\n --bf16 --num_train_epochs 3 \\\r\n --per_device_eval_batch_size 4 \\\r\n --gradient_accumulation_steps 16 \\\r\n --learning_rate 2e-5 --weight_decay 0. \\\r\n --warmup_ratio 0.03 --lr_scheduler_type \"cosine\" \\\r\n --model_max_length 1024 \\\r\n --logging_steps 50 \\\r\n --lazy_preprocess True \\\r\n --deepspeed configs/ds_s2_fschat.json\r\n```\r\n\r\nwhat did wrong???",
"Hello @luohao123, please provide minimal reproducible example for further deep dive. Things work fine for me with official example:\r\n\r\nds config:\r\n\r\n```\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": true\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 2000,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n\r\nCommand:\r\n```\r\ncd transformers\r\nexport TASK_NAME=mrpc\r\nCUDA_VISIBLE_DEVICES=2,3 deepspeed ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ds_config_zero2.json --lr_scheduler_type \"cosine\"\r\n```\r\n\r\noutput logs:\r\n```\r\n[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_enabled ................. True\r\n[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_force_ds_cpu_optimizer .. True\r\n[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_optimization_stage ...... 2\r\n[2023-06-22 09:47:48,765] [INFO] [config.py:950:print_user_config] json = {\r\n \"fp16\": {\r\n \"enabled\": false, \r\n \"loss_scale\": 0, \r\n \"loss_scale_window\": 1000, \r\n \"initial_scale_power\": 16, \r\n \"hysteresis\": 2, \r\n \"min_loss_scale\": 1\r\n }, \r\n \"bf16\": {\r\n \"enabled\": false\r\n }, \r\n \"zero_optimization\": {\r\n \"stage\": 2, \r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": true\r\n }, \r\n \"allgather_partitions\": true, \r\n \"allgather_bucket_size\": 2.000000e+08, \r\n \"overlap_comm\": true, \r\n \"reduce_scatter\": true, \r\n \"reduce_bucket_size\": 2.000000e+08, \r\n \"contiguous_gradients\": true\r\n }, \r\n \"gradient_accumulation_steps\": 1, \r\n \"gradient_clipping\": 1.0, \r\n \"steps_per_print\": inf, \r\n \"train_batch_size\": 32, \r\n \"train_micro_batch_size_per_gpu\": 16, \r\n \"wall_clock_breakdown\": false, \r\n \"zero_allow_untested_optimizer\": true\r\n}\r\nUsing /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.00022840499877929688 seconds\r\n[INFO|trainer.py:1680] 2023-06-22 09:47:48,766 >> ***** Running training *****\r\n[INFO|trainer.py:1681] 2023-06-22 09:47:48,766 >> Num examples = 3,668\r\n[INFO|trainer.py:1682] 2023-06-22 09:47:48,766 >> Num Epochs = 3\r\n[INFO|trainer.py:1683] 2023-06-22 09:47:48,766 >> Instantaneous batch size per device = 16\r\n[INFO|trainer.py:1684] 2023-06-22 09:47:48,766 >> Total train batch size (w. 
parallel, distributed & accumulation) = 32\r\n[INFO|trainer.py:1685] 2023-06-22 09:47:48,766 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1686] 2023-06-22 09:47:48,766 >> Total optimization steps = 345\r\n[INFO|trainer.py:1687] 2023-06-22 09:47:48,766 >> Number of trainable parameters = 108,311,810\r\n[INFO|integrations.py:727] 2023-06-22 09:47:48,767 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin\r\nwandb: Tracking run with wandb version 0.15.4\r\nwandb: Run data is saved locally in /home/sourab/transformers/wandb/run-20230622_094749-h2mion2e\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run rose-vortex-320\r\nwandb: ⭐️ View project at https://wandb.ai/smangrul/huggingface\r\nwandb: 🚀 View run at https://wandb.ai/smangrul/huggingface/runs/h2mion2e\r\n 0%| | 0/345 [00:00<?, ?it/s]/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:1829: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)\r\n overflow_gpu = get_accelerator().ByteTensor([overflow])\r\n/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:1829: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)\r\n overflow_gpu = get_accelerator().ByteTensor([overflow])\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 345/345 [00:57<00:00, 6.13it/s][INFO|trainer.py:1924] 2023-06-22 09:48:49,820 >> \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 61.0539, 'train_samples_per_second': 180.234, 'train_steps_per_second': 5.651, 'train_loss': 0.4465487715126812, 'epoch': 3.0}\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 345/345 [00:57<00:00, 6.03it/s]\r\n[INFO|trainer.py:2832] 2023-06-22 09:48:49,823 >> Saving model checkpoint to /tmp/mrpc/\r\n[INFO|configuration_utils.py:458] 2023-06-22 09:48:49,824 >> Configuration saved in /tmp/mrpc/config.json\r\n[INFO|modeling_utils.py:1845] 2023-06-22 09:48:50,616 >> Model weights saved in /tmp/mrpc/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2215] 2023-06-22 09:48:50,617 >> tokenizer config file saved in /tmp/mrpc/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2222] 2023-06-22 09:48:50,617 >> Special tokens file saved in /tmp/mrpc/special_tokens_map.json\r\n***** train metrics *****\r\n epoch = 3.0\r\n train_loss = 0.4465\r\n train_runtime = 0:01:01.05\r\n train_samples = 3668\r\n train_samples_per_second = 180.234\r\n train_steps_per_second = 5.651\r\n06/22/2023 09:48:50 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:769] 2023-06-22 09:48:50,645 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. 
If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3106] 2023-06-22 09:48:50,646 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3108] 2023-06-22 09:48:50,646 >> Num examples = 408\r\n[INFO|trainer.py:3111] 2023-06-22 09:48:50,646 >> Batch size = 8\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 52.94it/s]\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_accuracy = 0.8431\r\n eval_combined_score = 0.8664\r\n eval_f1 = 0.8897\r\n eval_loss = 0.3868\r\n eval_runtime = 0:00:00.51\r\n eval_samples = 408\r\n eval_samples_per_second = 797.59\r\n eval_steps_per_second = 50.827\r\nwandb: Waiting for W&B process to finish... (success).\r\n[2023-06-22 09:48:52,926] [INFO] [launch.py:347:main] Process 3002010 exits successfully.\r\nwandb: \r\nwandb: Run history:\r\nwandb: eval/accuracy ▁\r\nwandb: eval/combined_score ▁\r\nwandb: eval/f1 ▁\r\nwandb: eval/loss ▁\r\nwandb: eval/runtime ▁\r\nwandb: eval/samples_per_second ▁\r\nwandb: eval/steps_per_second ▁\r\nwandb: train/epoch ▁▁\r\nwandb: train/global_step ▁▁\r\nwandb: train/total_flos ▁\r\nwandb: train/train_loss ▁\r\nwandb: train/train_runtime ▁\r\nwandb: train/train_samples_per_second ▁\r\nwandb: train/train_steps_per_second ▁\r\nwandb: \r\nwandb: Run summary:\r\nwandb: eval/accuracy 0.84314\r\nwandb: eval/combined_score 0.8664\r\nwandb: eval/f1 0.88966\r\nwandb: eval/loss 0.38684\r\nwandb: eval/runtime 0.5115\r\nwandb: eval/samples_per_second 797.59\r\nwandb: eval/steps_per_second 50.827\r\nwandb: train/epoch 3.0\r\nwandb: train/global_step 345\r\nwandb: train/total_flos 726186493739008.0\r\nwandb: train/train_loss 0.44655\r\nwandb: train/train_runtime 61.0539\r\nwandb: train/train_samples_per_second 180.234\r\nwandb: train/train_steps_per_second 5.651\r\nwandb: \r\nwandb: 🚀 View run rose-vortex-320 at: https://wandb.ai/smangrul/huggingface/runs/h2mion2e\r\nwandb: Synced 6 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)\r\nwandb: Find logs at: ./wandb/run-20230622_094749-h2mion2e/logs\r\n[2023-06-22 09:49:01,927] [INFO] [launch.py:347:main] Process 3002009 exits successfully.\r\n```",
"@pacman100 thank u, let me try your config and have a test again, I notice your config are not exactly as mine.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hello, the supported combinations now are:\r\n> \r\n> 1. Trainer optimizer + Trainer scheduler - Don't specify these in the DS config and use trainer args\r\n> 2. DeepSpeed optimizer + DeeepSpeed Scheduler - Specify both in DeepSpeed config and no need to use/specify them via Trainer args (@jackapbutler, please note this as you happen to be doing both)\r\n> 3. Trainer optimizer + DeepSpeed Scheduler - Don't specify optimizer in DS config; only set the scheduler there. Don't specify the scheduler via Trainer args.\r\n> \r\n> @luohao123, the case you want is DeepSpeed Optimizer + Trainer Scheduler which isn't supported now. The suggested approach in your case would be to use `Trainer optimizer + Trainer scheduler` (Settting 1. above).\r\n> \r\n> Hope this helps.\r\n\r\nHi, I want to know if I use setting 1, will the optimizer utilize DeepSpeed's cpuAdam? ",
"> Hi, I want to know if I use setting 1, will the optimizer utilize DeepSpeed's cpuAdam?\r\n\r\nYes, by default `zero_force_ds_cpu_optimizer` is set to True if not explicitly specified in the ds_config. As such, it will leverage the DeepSpeed's cpuAdam when offloading as it is strongly recommended by DeepSpeed team",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm trying to use DeepSpeed optimizer + Trainer scheduler because DeepSpeed has the most best optimizer (fused Adam) and Trainer has the best scheduler for my use case (cosine). DeepSpeed does not support cosine. Why was `DeepSpeed optimizer + Trainer scheduler` deprecate without any warning? I think this is a mistake and that you should reconsider @pacman100.",
"Hello @michaelroyzen, the PRs https://github.com/huggingface/transformers/pull/25863 and https://github.com/huggingface/accelerate/pull/1909 should bring back the support for `DeepSpeed optimizer + Trainer scheduler`. Could you try it out and let us know.",
"Seems to work well so far @pacman100. Thanks!",
"Hi @pacman100 ,\r\nIs PR #25863 part of the latest transformers version?\r\n\r\nI still observe the following error, despite using the `ds_config_z3_ds_optim_hf_scheduler.json`.\r\n ValueError: Found `optimizer` configured in the DeepSpeed config, but no `scheduler`. Please configure a scheduler in the DeepSpeed config. ",
"Hello @awasthiabhijeet, it should be part of the latest release, could you recheck it?",
"Thanks, @pacman100 :)\r\nYes, hf scheduler + ds optimizer combination is working well with the latest release!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,698 | 1,698 |
NONE
| null |
ValueError: Found `optimizer` configured in the DeepSpeed config, but no `scheduler`. Please configure a scheduler in the DeepSpeed config.
I am using `--warmup_ratio 0.03 --lr_scheduler_type "cosine" \`
here, and I couldn't find a DeepSpeed scheduler equivalent to cosine. What should I set?
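A minimal sketch of the "Trainer optimizer + Trainer scheduler" setup suggested later in this thread: keep the `optimizer` and `scheduler` sections out of the DeepSpeed JSON and let the Trainer arguments drive the cosine schedule. The paths and hyperparameters below are placeholders.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    weight_decay=0.0,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",          # cosine schedule comes from the Trainer, not DeepSpeed
    deepspeed="configs/ds_config.json",  # ds_config without "optimizer"/"scheduler" keys
)
```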
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24359/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24358
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24358/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24358/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24358/events
|
https://github.com/huggingface/transformers/pull/24358
| 1,763,612,699 |
PR_kwDOCUB6oc5TWGJQ
| 24,358 |
Fix the order in `GPTNeo`'s docstring
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Et voilà :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"@qgallouedec To get the setup_and_quality tests passing, you'll need to run `make style` at the top level of the repo and push any changes to this branch. "
] | 1,687 | 1,701 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24358/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24358",
"html_url": "https://github.com/huggingface/transformers/pull/24358",
"diff_url": "https://github.com/huggingface/transformers/pull/24358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24358.patch",
"merged_at": 1687197576000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24357
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24357/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24357/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24357/events
|
https://github.com/huggingface/transformers/pull/24357
| 1,763,568,939 |
PR_kwDOCUB6oc5TV8lw
| 24,357 |
Make `AutoFormer` work with previous torch version
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Without `import torch.utils.checkpoint` (which we have in other files, like `Bart`), we get an error with torch 1.13
(running `RUN_SLOW=1 python3 -m pytest -v tests/models/autoformer/test_modeling_autoformer.py::AutoformerModelTest::test_training_gradient_checkpointing`)
```bash
> layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(encoder_layer),
hidden_states,
attention_mask,
(head_mask[idx] if head_mask is not None else None),
)
E AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
Let's make it work with previous torch version(s) ❤️ .
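For context, here is a standalone illustration of why the explicit import matters (this is not the Autoformer code itself; `block` is just a stand-in function):
```python
import torch
import torch.utils.checkpoint  # without this line, some torch versions raise AttributeError


def block(x):
    return x * 2


x = torch.randn(2, 3, requires_grad=True)
# Gradient checkpointing: `block` is re-run during backward instead of storing activations.
out = torch.utils.checkpoint.checkpoint(block, x)
out.sum().backward()
print(x.grad.shape)
```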
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24357/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24357",
"html_url": "https://github.com/huggingface/transformers/pull/24357",
"diff_url": "https://github.com/huggingface/transformers/pull/24357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24357.patch",
"merged_at": 1687183326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24356
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24356/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24356/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24356/events
|
https://github.com/huggingface/transformers/issues/24356
| 1,763,555,047 |
I_kwDOCUB6oc5pHbbn
| 24,356 |
Deepspeed OOM When training 7B model on V100 16GB (2)
|
{
"login": "shahules786",
"id": 25312635,
"node_id": "MDQ6VXNlcjI1MzEyNjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25312635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shahules786",
"html_url": "https://github.com/shahules786",
"followers_url": "https://api.github.com/users/shahules786/followers",
"following_url": "https://api.github.com/users/shahules786/following{/other_user}",
"gists_url": "https://api.github.com/users/shahules786/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shahules786/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shahules786/subscriptions",
"organizations_url": "https://api.github.com/users/shahules786/orgs",
"repos_url": "https://api.github.com/users/shahules786/repos",
"events_url": "https://api.github.com/users/shahules786/events{/privacy}",
"received_events_url": "https://api.github.com/users/shahules786/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Did you solved? "
] | 1,687 | 1,697 | 1,687 |
NONE
| null |
### System Info
- Python version 3.8
- transformers: installed from source (latest)
**Describe the bug**
OOM when training a 7B model on 2x V100 16GB with ZeRO stage 2 and CPU offloading, even though the memory estimation showed a far lower per-GPU memory requirement.
```
-- memory estimation--
DEVICES ['Tesla V100-PCIE-16GB', 'Tesla V100-PCIE-16GB']
-------------ZERO 2------------
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 6650M total params.
per CPU | per GPU | Options
148.66GB | 12.39GB | offload_optimizer=cpu
74.33GB | 74.33GB | offload_optimizer=none
```
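For reference, the estimate above matches the output format of DeepSpeed's live memory estimator; a hedged sketch of how such an estimate can be reproduced (the checkpoint name is a placeholder, and the import path assumes a reasonably recent DeepSpeed release):
```python
from transformers import AutoModelForCausalLM
from deepspeed.runtime.zero.stage_1_and_2 import estimate_zero2_model_states_mem_needs_all_live

model = AutoModelForCausalLM.from_pretrained("my-7b-model")  # placeholder model name
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)
```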
**Screenshots**
nvidia-smi during run
<img width="648" alt="Screenshot 2023-06-19 at 5 30 41 PM" src="https://github.com/microsoft/DeepSpeed/assets/25312635/47a512c6-b509-49c2-b111-8f7e9dac8532">
RAM usage
<img width="723" alt="Screenshot 2023-06-19 at 5 55 08 PM" src="https://github.com/microsoft/DeepSpeed/assets/25312635/e7e97e07-7203-42a7-8b89-468c2de35546">
Can see free RAM available.
**System info (please complete the following information):**
- OS: CentOS Linux
- GPU count and types : V100 16B X 2 single node
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py)
Run `deepspeed funtuner/trainer.py` with `export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner"` set
Please change the log_dir to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4); you may also want to set log_wandb=False
Use the `dev-train` branch
### Expected behavior
The run should complete without an OOM error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24356/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24354
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24354/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24354/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24354/events
|
https://github.com/huggingface/transformers/issues/24354
| 1,763,502,622 |
I_kwDOCUB6oc5pHOoe
| 24,354 |
PEFT Models are not resuming from checkpoint as expected.
|
{
"login": "techthiyanes",
"id": 25921035,
"node_id": "MDQ6VXNlcjI1OTIxMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25921035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techthiyanes",
"html_url": "https://github.com/techthiyanes",
"followers_url": "https://api.github.com/users/techthiyanes/followers",
"following_url": "https://api.github.com/users/techthiyanes/following{/other_user}",
"gists_url": "https://api.github.com/users/techthiyanes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techthiyanes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techthiyanes/subscriptions",
"organizations_url": "https://api.github.com/users/techthiyanes/orgs",
"repos_url": "https://api.github.com/users/techthiyanes/repos",
"events_url": "https://api.github.com/users/techthiyanes/events{/privacy}",
"received_events_url": "https://api.github.com/users/techthiyanes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @techthiyanes \r\nThank you very much for double checking, here are the snippets that I have ran and they work fine on my end using the branh you have mentioned:\r\n\r\n<details><summary>Without `resume_from_checkpoint`</summary>\r\n\r\n```python\r\n\r\nimport os\r\nfrom transformers import TrainingArguments\r\nfrom datasets import load_dataset\r\nfrom trl import SFTTrainer\r\nfrom peft import LoraConfig\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\noutput_dir = \"test\"\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=output_dir,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n max_steps=5,\r\n save_steps=1,\r\n save_strategy='steps'\r\n)\r\n\r\npeft_config = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\n\r\ntrainer = SFTTrainer(\r\n \"EleutherAI/gpt-neo-125m\",\r\n train_dataset=dataset,\r\n args=training_args,\r\n dataset_text_field=\"text\",\r\n peft_config=peft_config\r\n)\r\ntrainer.train()\r\ntrainer.save_model(os.path.join(output_dir, \"checkpoint-1\"))\r\ntrainer.train(resume_from_checkpoint=True)\r\n```\r\n\r\n</details>\r\n\r\n<details><summary>With `resume_from_checkpoint`</summary>\r\n\r\n```python\r\n\r\nimport os\r\nfrom transformers import TrainingArguments\r\nfrom datasets import load_dataset\r\nfrom trl import SFTTrainer\r\nfrom peft import LoraConfig\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\noutput_dir = \"test\"\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=output_dir,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n max_steps=5,\r\n save_steps=1,\r\n save_strategy='steps'\r\n)\r\n\r\npeft_config = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\n\r\ntrainer = SFTTrainer(\r\n \"EleutherAI/gpt-neo-125m\",\r\n train_dataset=dataset,\r\n args=training_args,\r\n dataset_text_field=\"text\",\r\n peft_config=peft_config\r\n)\r\ntrainer.train()\r\ntrainer.save_model(os.path.join(output_dir, \"checkpoint-1\"))\r\ntrainer.train()\r\n```\r\n\r\n</details>\r\n\r\nCan you elaborate more on:\r\n\r\n> For resuming from checkpoint i have updated num of epochs much higher than previous one.\r\nwhile passing as trainer.train(resume from checkpoint=True) then it is showing as can't find a valid checkpoint.\r\nAlso while passing as trainer.train(resume from checkpoint = path of saved model)then it is showing as can't find a valid checkpoint.\r\n\r\nThanks! ",
"> ```python\r\n> ```python\r\n> trainer.train(resume_from_checkpoint=True)\r\n> ```\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> ```\r\n\r\nSo far I'm able to replicate the issue.\r\n\r\nSteps I have followed:\r\n\r\nLibaries Installed:\r\n! pip install datasets peft evaluate\r\n!pip install git+https://github.com/huggingface/transformers\r\n\r\nClone PEFT resume from chekpoint branch:\r\n!git clone https://github.com/llohann-speranca/transformers.git -b fix-resume-checkpoint-for-peftmodel\r\n\r\nReplace this folder where the transformers library installed:\r\n!cp -r /content/transformers /usr/local/lib/python3.10/dist-packages/transformers\r\n\r\nRestart the run time.\r\n\r\nThen below code snippet:\r\n\r\nimport os\r\nfrom transformers import TrainingArguments\r\nfrom datasets import load_dataset\r\nfrom trl import SFTTrainer\r\nfrom peft import LoraConfig\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\noutput_dir = \"test\"\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=output_dir,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n max_steps=5,\r\n save_steps=1,\r\n save_strategy='steps'\r\n)\r\n\r\npeft_config = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\n\r\ntrainer = SFTTrainer(\r\n \"EleutherAI/gpt-neo-125m\",\r\n train_dataset=dataset,\r\n args=training_args,\r\n dataset_text_field=\"text\",\r\n peft_config=peft_config\r\n)\r\ntrainer.train()\r\ntrainer.save_model(os.path.join(output_dir, \"checkpoint-1\"))\r\ntrainer.train(resume_from_checkpoint=True)\r\n\r\n\r\n\r\n@younesbelkada @@llohann-speranca\r\n\r\nI guess you would have run the snippet via already from modified trainer code that resides internally.\r\n\r\nCould you please try running the code that is downloaded from git on specific branch?\r\n\r\nThanks a lot on your effort on validating this.\r\n",
"Hi @techthiyanes \r\nCan you try to install `transformers` with the following command ?\r\n```bash\r\npip install git+https://github.com/llohann-speranca/transformers.git@fix-resume-checkpoint-for-peftmodel\r\n```\r\nThe line 1991 of your traceback doesn't match with the line 1991 of the fork: https://github.com/llohann-speranca/transformers/blob/e01a4aa77073b847b9451c92c2df718a67960df1/src/transformers/trainer.py#L1991 so I believe you did not installed correctly transformers from that branch",
"> ```shell\r\n> pip install git+https://github.com/llohann-speranca/transformers.git@fix-resume-checkpoint-for-peftmodel\r\n> ```\r\n\r\nThanks a lot on finding and fixing to help this issue.\r\nNow I am able to resume from checkpoint. It's working for classification and seq2seq models as well."
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
transformers : 4.30
### Who can help?
@llohann-speranca @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Please try below code snippet as per example:
```python
import os
from transformers import TrainingArguments
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
dataset = load_dataset("imdb", split="train")
output_dir = "test"
training_args = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
max_steps=5,
save_steps=1,
save_strategy='steps'
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
args=training_args,
dataset_text_field="text",
peft_config=peft_config
)
trainer.train()
trainer.save_model(os.path.join(output_dir, "checkpoint-1"))
trainer.train()
```
For the above code snippet, I pulled @llohann-speranca's resume-from-checkpoint branch and replaced the installed transformers package with it.
The initial trainer.train() call works without any issues.
As mentioned, I have overridden the model by using trainer.save_model(path of saved model).
For resuming from checkpoint, I set the number of epochs much higher than before.
When passing trainer.train(resume_from_checkpoint=True), it says it can't find a valid checkpoint.
Also, when passing trainer.train(resume_from_checkpoint=path of saved model), it says it can't find a valid checkpoint.
The same issue persists with transformers installed from source as well.
### Expected behavior
The model should be resumed from checkpoint.
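For completeness, a small usage sketch of the resume call once a fixed transformers build is installed; it continues the snippet above (so `trainer` and `output_dir` are assumed from there), and the checkpoint directory must have been written by the Trainer.
```python
import os

checkpoint_dir = os.path.join(output_dir, "checkpoint-1")  # assumes the snippet above ran
trainer.train(resume_from_checkpoint=checkpoint_dir)  # or resume_from_checkpoint=True
```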
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24354/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24353
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24353/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24353/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24353/events
|
https://github.com/huggingface/transformers/pull/24353
| 1,763,486,363 |
PR_kwDOCUB6oc5TVqWA
| 24,353 |
Fix ImageGPT doctest
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Running the doctests (properly this time :) ), the tests pass with the ignore statement on the for loop, and fail without (in the same way as on the CI). "
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
#24317 resolved the failing ImageGPT doctest, as `clusters` in the image processor were not stored as numpy arrays as expected. This was tested by running the code directly, but I didn't run it with
` pytest --doctest-modules src/transformers/models/imagegpt/modeling_imagegpt.py ` 🙃
The tests were failing because some code produces an output (e.g. the model architecture when calling `model.to`), but no "expected" output is provided. We don't want to check these outputs, so this PR adds controls to ignore them.
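As a generic illustration (not necessarily the exact control added by this PR), doctest directives can be used to avoid comparing output we don't care about; here the ELLIPSIS directive lets the expected output match anything, in the same spirit as ignoring the representation printed by `model.to(...)`:
```python
import doctest


def demo():
    """
    >>> values = [3, 1, 2]
    >>> sorted(values)  # doctest: +ELLIPSIS
    [...]
    """


doctest.run_docstring_examples(demo, globs={}, verbose=False)
```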
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24353/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24353",
"html_url": "https://github.com/huggingface/transformers/pull/24353",
"diff_url": "https://github.com/huggingface/transformers/pull/24353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24353.patch",
"merged_at": 1687184610000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24352
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24352/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24352/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24352/events
|
https://github.com/huggingface/transformers/pull/24352
| 1,763,429,594 |
PR_kwDOCUB6oc5TVd4f
| 24,352 |
Fix device issue in `SwitchTransformers`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge now. Don't hesitate to leave comments if any @ArthurZucker ."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Need a tiny fix after #24300.
Currently, we have a failure
```bash
self = <tests.models.switch_transformers.test_modeling_switch_transformers.SwitchTransformersEncoderOnlyModelTest testMethod=test_multi_gpu_data_parallel_forward>
@staticmethod
def forward(ctx, target_device, dim, *inputs):
> assert all(i.device.type != 'cpu' for i in inputs), (
'Gather function not implemented for CPU tensors'
)
E AssertionError: Gather function not implemented for CPU tensors
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/_functions.py:56: AssertionError
```
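As a hypothetical illustration of the kind of fix this failure implies (names are illustrative, not the literal patch): tensors created inside the forward pass need to live on the same device as the inputs, otherwise `nn.DataParallel` cannot gather the outputs across GPUs.
```python
import torch


def route(hidden_states: torch.Tensor) -> torch.Tensor:
    # Allocating on the CPU by default would trip the
    # "Gather function not implemented for CPU tensors" assertion under nn.DataParallel.
    router_logits = torch.zeros(
        hidden_states.shape[0], 8,
        device=hidden_states.device,  # follow the input's device
        dtype=hidden_states.dtype,
    )
    return router_logits
```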
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24352/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24352/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24352",
"html_url": "https://github.com/huggingface/transformers/pull/24352",
"diff_url": "https://github.com/huggingface/transformers/pull/24352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24352.patch",
"merged_at": 1687179965000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24351
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24351/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24351/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24351/events
|
https://github.com/huggingface/transformers/pull/24351
| 1,763,169,502 |
PR_kwDOCUB6oc5TUki9
| 24,351 |
pin `apex` to a specific commit (for DeepSpeed CI docker image)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
The docker image build for the DeepSpeed job in CI has been failing for about a week due to this [apex issue](https://github.com/NVIDIA/apex/issues/1679).
Let's pin to the previous commit until the above-mentioned issue is resolved on the `apex` side.
Currently, the DeepSpeed job fails because this build failure prevents us from using newer images that include some fixes on the `accelerate` side.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24351/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24351",
"html_url": "https://github.com/huggingface/transformers/pull/24351",
"diff_url": "https://github.com/huggingface/transformers/pull/24351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24351.patch",
"merged_at": 1687171733000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24350
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24350/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24350/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24350/events
|
https://github.com/huggingface/transformers/pull/24350
| 1,763,141,993 |
PR_kwDOCUB6oc5TUeqy
| 24,350 |
byebye Hub connection timeout
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
No more timeouts when connecting to the Hub in CI, and everyone is happy with ✅
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24350/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24350",
"html_url": "https://github.com/huggingface/transformers/pull/24350",
"diff_url": "https://github.com/huggingface/transformers/pull/24350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24350.patch",
"merged_at": 1687171820000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24349
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24349/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24349/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24349/events
|
https://github.com/huggingface/transformers/pull/24349
| 1,763,128,388 |
PR_kwDOCUB6oc5TUbu_
| 24,349 |
[GPTNeoX] Nit in config
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #23081: when the number of attention heads is not a divisor of the hidden size, the attention will not work. This most probably stems from the design of GPTNeoX's attention.
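As an illustration of that constraint, here is a minimal, hypothetical sketch of a divisibility check (`check_head_divisibility` is an invented helper for illustration, not the code added by this PR):
```python
# Hypothetical helper illustrating the constraint above: the per-head
# dimension must be an integer, so hidden_size has to be a multiple of
# num_attention_heads.
def check_head_divisibility(hidden_size: int, num_attention_heads: int) -> int:
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"hidden_size ({hidden_size}) must be divisible by "
            f"num_attention_heads ({num_attention_heads})"
        )
    return hidden_size // num_attention_heads  # size of each attention head


print(check_head_divisibility(768, 12))  # 64
# check_head_divisibility(768, 10)       # would raise ValueError
```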
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24349/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24349",
"html_url": "https://github.com/huggingface/transformers/pull/24349",
"diff_url": "https://github.com/huggingface/transformers/pull/24349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24349.patch",
"merged_at": 1687281559000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24348
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24348/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24348/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24348/events
|
https://github.com/huggingface/transformers/pull/24348
| 1,763,101,314 |
PR_kwDOCUB6oc5TUV3x
| 24,348 |
Add mul choice train script
|
{
"login": "HDThang",
"id": 132823983,
"node_id": "U_kgDOB-q7rw",
"avatar_url": "https://avatars.githubusercontent.com/u/132823983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HDThang",
"html_url": "https://github.com/HDThang",
"followers_url": "https://api.github.com/users/HDThang/followers",
"following_url": "https://api.github.com/users/HDThang/following{/other_user}",
"gists_url": "https://api.github.com/users/HDThang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HDThang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HDThang/subscriptions",
"organizations_url": "https://api.github.com/users/HDThang/orgs",
"repos_url": "https://api.github.com/users/HDThang/repos",
"events_url": "https://api.github.com/users/HDThang/events{/privacy}",
"received_events_url": "https://api.github.com/users/HDThang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,687 | 1,687 | 1,687 |
NONE
| null |
- Modify the train script for all completed tasks
- Add common libraries for the environments in env.yaml
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24348/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24348",
"html_url": "https://github.com/huggingface/transformers/pull/24348",
"diff_url": "https://github.com/huggingface/transformers/pull/24348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24348.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24346
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24346/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24346/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24346/events
|
https://github.com/huggingface/transformers/pull/24346
| 1,762,923,238 |
PR_kwDOCUB6oc5TTvWs
| 24,346 |
Clean up disk space during docker image build for `transformers-pytorch-gpu`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
The PyTorch pipeline CI job started to fail due to
```bash
ImportError: accelerate>=0.20.3 is required for a normal functioning of this module, but found accelerate==0.20.2.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
```
**The root cause is that the Docker image for this job failed to build due to a disk space issue**
```bash
ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
```
As usual, let's save space!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24346/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24346/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24346",
"html_url": "https://github.com/huggingface/transformers/pull/24346",
"diff_url": "https://github.com/huggingface/transformers/pull/24346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24346.patch",
"merged_at": 1687172043000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24345
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24345/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24345/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24345/events
|
https://github.com/huggingface/transformers/issues/24345
| 1,762,688,184 |
I_kwDOCUB6oc5pEHy4
| 24,345 |
Trainer reports batch size different from argument on multiple GPUs with DP
|
{
"login": "cgbahk",
"id": 34672141,
"node_id": "MDQ6VXNlcjM0NjcyMTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34672141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cgbahk",
"html_url": "https://github.com/cgbahk",
"followers_url": "https://api.github.com/users/cgbahk/followers",
"following_url": "https://api.github.com/users/cgbahk/following{/other_user}",
"gists_url": "https://api.github.com/users/cgbahk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cgbahk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cgbahk/subscriptions",
"organizations_url": "https://api.github.com/users/cgbahk/orgs",
"repos_url": "https://api.github.com/users/cgbahk/repos",
"events_url": "https://api.github.com/users/cgbahk/events{/privacy}",
"received_events_url": "https://api.github.com/users/cgbahk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"How are you launching your training script? If it's just with python (no distributed), the `Trainer` will use `DataParallel` which requires your batch size to be mulitiplied by the number of GPUs to work properly. I'm guessing that's why you see the \"instanteneous batch size\" at 4x what you put.\r\n\r\nThis is the only case it will happen (if you launch in distributed mode, the batch size per device will show up correctly) and is a good mean to track whether you are using Distributed training properly (you shouldn't use DataParallel as per PyTorch documentation) so you should launch your script with `torchrun` or `accelerate launch`.",
"I just begin to try training with multiple GPUs :smile: And everybody gives warning on using DP, and recommends to use DDP over DP. Okay I'll try.\r\n\r\nBut that is out of this issue topic. So let's not talk about it anymore here.\r\n\r\n---\r\n\r\n> How are you launching your training script? If it's just with python (no distributed), the `Trainer` will use `DataParallel`\r\n\r\nYes this is the case I meant. This issue is about DP not DDP.\r\n\r\nI think in this communication, it is extremely important to use same terms for same concepts, especially about several 'batch' concepts.\r\n\r\nLet me use term\r\n- `batch size per update`: count of input that used for one model parameter update\r\n- `device`: in this case, let's fix this to GPU\r\n - And I think that is what term 'device' mean in training arg `per_device_train_batch_size` and log `Instanteneous batch size per device`.\r\n https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/training_args.py#L193-L194\r\n- `batch size per device`: count of input source for each device(i.e. GPU) for one model parameter update iteration\r\n - Depending on documentations and communications, ambiguous terms used, like \"mini-batch\"([:link:](https://github.com/huggingface/transformers/blob/v4.30.2/docs/source/en/perf_train_gpu_many.mdx#L89) [:link:](https://github.com/huggingface/transformers/blob/v4.30.2/docs/source/en/perf_train_gpu_many.mdx#L95)) or \"sub mini-batch\" [:link:](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/). So let's fix to this term for this issue communication.\r\n - I expect this is the same concept with training arg `per_device_train_batch_size` and log `Instanteneous batch size per device`\r\n\r\n\r\n> `DataParallel` which requires your batch size to be mulitiplied by the number of GPUs to work properly. I'm guessing that's why you see the \"instanteneous batch size\" at 4x what you put.\r\n\r\nIn your comment, 'batch size' seems to mean `batch size per update`. And yes, that is true, it should be 'GPU count' x `batch size per device`.\r\n\r\nBut the log `Instantaneous batch size per device` means `batch size per device`, not `batch size per update`. That is what I'm pointing out as a bug, which can lead user to misunderstanding.\r\n",
"(I will only use DDP, so this issue is not anymore important for me. But if I'm someone who cares about the project, like maintainer, I would leave this open before the bug fixed. Any maintainers can close this issue if they want so :smile:)",
"I made the PR linked above to clarify the logging a bit more. Let me know if it's better!"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.30.1
- Platform: Linux-4.18.0-240.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Not sure why "PyTorch version (GPU?)" is False. I think this is because no GPU was connected when I ran this at report time. During actual training, a GPU was connected.
I'm pretty sure my PyTorch environment has GPU support; for example, I can use the same conda environment for normal single-GPU training that makes use of the GPU.
```
$ conda list | grep pytorch
pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've observed that `Instantaneous batch size per device` in the trainer log is reported as `per_device_train_batch_size` x GPU count, reproducible in multiple cases.
I can't give full reproduction details, but I'm pretty sure the scenario below gives an idea of the situation.
For example, I tried to train with 2 GPUs in the DP sense (DP as described in [:link:](https://github.com/huggingface/transformers/blob/v4.30.1/docs/source/en/perf_train_gpu_many.mdx#data-parallelism)), with the following TrainingArguments:
```py
TrainingArguments(
auto_find_batch_size=False,
per_device_train_batch_size=1,
...
)
```
Then the training log looks like this. Note the `Instantaneous batch size per device` value; I expected 1 from `per_device_train_batch_size`.
```
***** Running training *****
Num examples = ...
Num Epochs = ...
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = ...
Gradient Accumulation steps = ...
Total optimization steps = ...
Number of trainable parameters = ...
...
```
(I've experienced some other logging bugs, e.g. around `Total train batch size`, especially with `auto_find_batch_size=True`, but let's focus only on the batch size mismatch in this issue.)
I could confirm that `Instantaneous batch size per device` being reported as `per_device_train_batch_size` x GPU count happens in other cases as well, e.g.
- 4 GPUs / `per_device_train_batch_size=128` -> `Instantaneous batch size per device = 512`
This may be
- correct actual behavior with incorrect logging, or
- an actual bug, or
- a misunderstanding of DP on my part, in which case please blame me :smile:
### Expected behavior
I expected
- `Instantaneous batch size per device` reported as `per_device_train_batch_size`
not
- `Instantaneous batch size per device` reported as `per_device_train_batch_size` x GPU count
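For reference, a small illustrative sketch of the arithmetic reported above; `dp_batch_sizes` is a hypothetical helper mirroring the observed behaviour, not the Trainer's actual logging code:
```python
# Hypothetical helper mirroring the behaviour described in this report:
# under DataParallel (DP) a single process drives all GPUs, so the batch it
# builds each step is per_device_train_batch_size * n_gpus, which is the
# number that showed up as "Instantaneous batch size per device" above.
def dp_batch_sizes(per_device_train_batch_size: int, n_gpus: int,
                   gradient_accumulation_steps: int = 1):
    instantaneous = per_device_train_batch_size * n_gpus
    # Total examples contributing to one optimizer update.
    per_update = instantaneous * gradient_accumulation_steps
    return instantaneous, per_update


print(dp_batch_sizes(1, 2))    # (2, 2)   -> matches the log above
print(dp_batch_sizes(128, 4))  # (512, 512)
```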
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24345/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24344
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24344/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24344/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24344/events
|
https://github.com/huggingface/transformers/pull/24344
| 1,762,551,554 |
PR_kwDOCUB6oc5TSe6k
| 24,344 |
docs: add BentoML to awesome-transformers
|
{
"login": "aarnphm",
"id": 29749331,
"node_id": "MDQ6VXNlcjI5NzQ5MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29749331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aarnphm",
"html_url": "https://github.com/aarnphm",
"followers_url": "https://api.github.com/users/aarnphm/followers",
"following_url": "https://api.github.com/users/aarnphm/following{/other_user}",
"gists_url": "https://api.github.com/users/aarnphm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aarnphm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aarnphm/subscriptions",
"organizations_url": "https://api.github.com/users/aarnphm/orgs",
"repos_url": "https://api.github.com/users/aarnphm/repos",
"events_url": "https://api.github.com/users/aarnphm/events{/privacy}",
"received_events_url": "https://api.github.com/users/aarnphm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @LysandreJik ",
"I have updated the docs to the bottom of the page.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24344). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Kindly asking to add BentoML to the list of awesome projects that have transformers support.
cc @parano
Signed-off-by: Aaron <[email protected]>
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24344/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24344",
"html_url": "https://github.com/huggingface/transformers/pull/24344",
"diff_url": "https://github.com/huggingface/transformers/pull/24344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24344.patch",
"merged_at": 1687191450000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24343
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24343/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24343/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24343/events
|
https://github.com/huggingface/transformers/issues/24343
| 1,762,545,472 |
I_kwDOCUB6oc5pDk9A
| 24,343 |
Enable non-causal mask (to enable MLM) for VisionEncoderDecoder models
|
{
"login": "metemadi",
"id": 4220153,
"node_id": "MDQ6VXNlcjQyMjAxNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4220153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/metemadi",
"html_url": "https://github.com/metemadi",
"followers_url": "https://api.github.com/users/metemadi/followers",
"following_url": "https://api.github.com/users/metemadi/following{/other_user}",
"gists_url": "https://api.github.com/users/metemadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/metemadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/metemadi/subscriptions",
"organizations_url": "https://api.github.com/users/metemadi/orgs",
"repos_url": "https://api.github.com/users/metemadi/repos",
"events_url": "https://api.github.com/users/metemadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/metemadi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false | null |
[] |
[
"Hi @metemadi, thanks for opening this issue! \r\n\r\nThis sounds like an interesting project! I believe there's a few places that would need to be adapted in order to enable this properly, such as not forcing `add_cross_attention` to the decoder config and not shifting tokens (cc @ydshieh). The VisionEncoderDecoder model is not intended to be compatible with all encoder-decoder pairs or use cases. This isn't something we'll add to the library at the moment, but feel free to share a fork branch with an implementation here if you'd like!",
"Thank you for the insanely fast reply - HuggingFace is amazing as always! This all makes sense. Thanks again.",
"Sorry, I forgot to reply:\r\n\r\nThere is however `class VisionTextDualEncoderModel`. One checkpoint on the Hub is [clip-italian](https://huggingface.co/clip-italian/clip-italian). If you look the config file, it uses `BertForMaskedLM` and `clip_vision_model`.\r\n\r\nIt might be helpful, but some slight modification might be necessary if the goal is to do what have been done in the paper you mentioned.\r\n"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### Feature request
Hello! The current (amazing!) VisionEncoderDecoder library supports text generation via a standard causal LM. Some recent work (linked [here](https://arxiv.org/abs/2306.07915)) has shown promise in having the text decoder be an MLM instead of a causal LM. I believe this is doable with the current VisionEncoderDecoder library by passing in [MASK] tokens for the decoder_input_ids and passing in the labels as usual, but this would still result in a causal mask. The code comment that makes me think this is as follows:
```
decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
be used by default.
```
Is there a way to turn off causal masking to predict multiple text tokens at once using a VisionEncoderDecoder model?
### Motivation
Masked language modeling on top of a Vision encoder appears to be a promising new approach for image captioning and pre-training of vision models according to [this recent work](https://arxiv.org/abs/2306.07915).
### Your contribution
Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24343/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24342
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24342/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24342/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24342/events
|
https://github.com/huggingface/transformers/issues/24342
| 1,762,232,559 |
I_kwDOCUB6oc5pCYjv
| 24,342 |
Wrong BOS token for pre-trained Whisper?
|
{
"login": "tonywu71",
"id": 28306721,
"node_id": "MDQ6VXNlcjI4MzA2NzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28306721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonywu71",
"html_url": "https://github.com/tonywu71",
"followers_url": "https://api.github.com/users/tonywu71/followers",
"following_url": "https://api.github.com/users/tonywu71/following{/other_user}",
"gists_url": "https://api.github.com/users/tonywu71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonywu71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonywu71/subscriptions",
"organizations_url": "https://api.github.com/users/tonywu71/orgs",
"repos_url": "https://api.github.com/users/tonywu71/repos",
"events_url": "https://api.github.com/users/tonywu71/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonywu71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi!\r\n\r\nThe first issue seems to be a feature of the whisper model. It has `<|endoftext|>` as token text for `bos`, `eos`, `pad` and `unk`. I see there are no dedicated tokens for `unk` and `pad`, so I think this is a feature of the model, and not a bug. If you look at the [original code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py), you can see that there is no dedicated token for `eos`, `bos`, `pad` or `unk`. This seems to indicate that these tokens are simply not used by the model.\r\n\r\nThe second issue is due to `add_special_tokens` being set to `True` by default. So this is not unexpected behavior.\r\n\r\n```python\r\ntokenizer.encode([\"<|startoftranscript|>\"], add_special_tokens=False)\r\n>>> [50258]\r\ntokenizer.decode([50258])\r\n>>> '<|startoftranscript|>'\r\n```\r\n\r\n\r\n",
"cc @ArthurZucker ",
"Hey, not entirely sure which part of the documentation you are referring to, but this is expected. The `bos_token` is not used to start a transcript. More details [here](https://huggingface.co/openai/whisper-base) about the starting tokens, and why we don't use this `bos`.",
"> Hi!\r\n> \r\n> The first issue seems to be a feature of the whisper model. It has `<|endoftext|>` as token text for `bos`, `eos`, `pad` and `unk`. I see there are no dedicated tokens for `unk` and `pad`, so I think this is a feature of the model, and not a bug. If you look at the [original code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py), you can see that there is no dedicated token for `eos`, `bos`, `pad` or `unk`. This seems to indicate that these tokens are simply not used by the model.\r\n> \r\n> The second issue is due to `add_special_tokens` being set to `True` by default. So this is not unexpected behavior.\r\n> \r\n> ```python\r\n> tokenizer.encode([\"<|startoftranscript|>\"], add_special_tokens=False)\r\n> >>> [50258]\r\n> tokenizer.decode([50258])\r\n> >>> '<|startoftranscript|>'\r\n> ```\r\n\r\nThanks, makes sense!",
"> Hey, not entirely sure which part of the documentation you are referring to, but this is expected. The `bos_token` is not used to start a transcript. More details [here](https://huggingface.co/openai/whisper-base) about the starting tokens, and why we don't use this `bos`.\r\n\r\nAccording to this [part](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer.bos_token), we have:\r\n\r\n> bos_token (str, optional, defaults to \"<|startoftranscript|>\") — The beginning of sequence token.\r\n\r\nWhich in my opinion is a bit confusing.\r\n\r\nBut I do understand your point and how I should handle the `<|startoftranscript|>` now. Thanks for the help!",
"I'll update the documentation to make it less confusing. The token used to store the ` \"<|startoftranscript|>\"` token is `decoder_start_token_id`. The `bos_token` is pretty much unused, which is why it was set to the same as `eos_token`. "
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import WhisperTokenizer
WhisperTokenizer.from_pretrained("openai/whisper-tiny").bos_token
>> '<|endoftext|>'
```
### Expected behavior
Dear Gandhi,
From the [documentation](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer) and from what I expect from the Whisper tokenizer, `processor.tokenizer.bos_token` should be equal to `"<|startoftranscript|>"` when using one of the official vanilla Whisper models. Currently, it is equal to `"<|endoftext|>"`. Is this intended behavior? What do you think?
On a different note, there is another weird behavior when encoding/decoding:
```python
tokenizer.encode(["<|startoftranscript|>"])
>> [50258, 50363, 50258, 50257]
processor.tokenizer.decode([50258, 50363, 50258, 50257])
>> '<|startoftranscript|><|notimestamps|><|startoftranscript|><|endoftext|>'
```
while I was expecting the last line to return `'<|startoftranscript|>'` only.
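A minimal sketch illustrating this behaviour (the extra ids appear to come from the tokenizer's default special tokens; `add_special_tokens=False` avoids them, with token ids as reported in this thread):
```python
# Sketch based on this thread: encode() adds the tokenizer's default special
# tokens unless told not to, which explains the extra ids observed above.
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

with_specials = tokenizer.encode("<|startoftranscript|>")
without_specials = tokenizer.encode("<|startoftranscript|>", add_special_tokens=False)

print(with_specials)                       # e.g. [50258, 50363, 50258, 50257], as in the report above
print(without_specials)                    # [50258]
print(tokenizer.decode(without_specials))  # '<|startoftranscript|>'
```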
Yours sincerely,
Tony
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24342/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24341
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24341/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24341/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24341/events
|
https://github.com/huggingface/transformers/issues/24341
| 1,762,223,297 |
I_kwDOCUB6oc5pCWTB
| 24,341 |
Colab Translation notebook link not found
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"I opened a PR to the notebooks repo here to fix this: https://github.com/huggingface/notebooks/pull/398\r\n\r\nThanks for warning us about the issue - we appreciate the help to keep our docs up to date!"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
Hello There!
First and foremost, congrats on the Transformers Translation [tutorial](https://huggingface.co/docs/transformers/tasks/translation). 👍
It serves as a spark for building English-to-many-languages translation models!
I'm following it along with TF, mostly reproducing it in a Jupyter notebook with TF for Mac with GPU enabled.
At the end of the [Train](https://huggingface.co/docs/transformers/tasks/translation) section, it shows:
_For a more in-depth example of how to finetune a model for translation, take a look at the corresponding PyTorch notebook or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)._
Inside the notebook, at cell [4], there is a message:
_**You** can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)._
The link is broken.
## Potential fix.
Maybe it could point to the Transformers [performance docs](https://huggingface.co/docs/transformers/performance) for a more general overview, or to some specific part of the [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) script shared by a team member [here](https://github.com/huggingface/transformers/issues/24254#issuecomment-1594830054) while helping with #24254? Please don't hesitate to share the link, as there could be a benefit in implementing it.
Thanks so much for the time dedicated to this
Keep up the amazing work in the Open!
### Who can help?
@Rocket
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Follow the tutorial in the docs. Go to the notebook at the end of the Train section.
2. Go to the TensorFlow notebook.
3. Click the link in cell [4]. It points to the /seq2seq examples.
### Expected behavior
The link should point to a fine-tuning script version of the notebook, or at least to the docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24341/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24340
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24340/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24340/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24340/events
|
https://github.com/huggingface/transformers/pull/24340
| 1,762,137,831 |
PR_kwDOCUB6oc5TRK2c
| 24,340 |
Fix TypeError: Object of type int64 is not JSON serializable
|
{
"login": "xiaoli",
"id": 458922,
"node_id": "MDQ6VXNlcjQ1ODkyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/458922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaoli",
"html_url": "https://github.com/xiaoli",
"followers_url": "https://api.github.com/users/xiaoli/followers",
"following_url": "https://api.github.com/users/xiaoli/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoli/subscriptions",
"organizations_url": "https://api.github.com/users/xiaoli/orgs",
"repos_url": "https://api.github.com/users/xiaoli/repos",
"events_url": "https://api.github.com/users/xiaoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaoli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
    "Hi @xiaoli, thanks for opening this PR. \r\n\r\nCould you provide some more information about when the error occurs? Does this happen when running with the values from [the example readme](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification#pytorch-version-no-trainer)?",
    "Hi @amyeroberts, it happened when executing [./run_no_trainer.sh](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_no_trainer.sh); everything works smoothly except for the last step of saving the results into a JSON file.\r\n\r\nI got this error:\r\n`TypeError: Object of type int64 is not JSON serializable`, so this commit is trying to fix that.\r\n\r\nThis happened on my Ubuntu 22.04 workstation.",
"```sh\r\n(transformers) ➜ token-classification git:(main) ./run_no_trainer.sh && echo $(date +%d.%m.%y-%H:%M:%S)\r\nThe following values were not passed to `accelerate launch` and had defaults used instead:\r\n\t`--num_processes` was set to a value of `0`\r\n\t`--num_machines` was set to a value of `1`\r\n\t`--mixed_precision` was set to a value of `'no'`\r\n\t`--dynamo_backend` was set to a value of `'no'`\r\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\n06/20/2023 10:54:40 - INFO - __main__ - Distributed environment: DistributedType.NO\r\nNum processes: 1\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: mps\r\n\r\nMixed precision type: no\r\n\r\nDownloading builder script: 100%|████████████████████████████████████████████| 9.57k/9.57k [00:00<00:00, 8.80MB/s]\r\nDownloading metadata: 100%|██████████████████████████████████████████████████| 3.73k/3.73k [00:00<00:00, 9.41MB/s]\r\nDownloading readme: 100%|████████████████████████████████████████████████████| 12.3k/12.3k [00:00<00:00, 16.9MB/s]\r\nDownloading and preparing dataset conll2003/conll2003 to /Users/xiaoliwang/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...\r\nDownloading data: 100%|████████████████████████████████████████████████████████| 983k/983k [00:00<00:00, 3.57MB/s]\r\nGenerating train split: 0%| | 0/14041 [00:00<?, ? examples/s]06/20/2023 10:54:47 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/train.txt\r\nGenerating validation split: 0%| | 0/3250 [00:00<?, ? examples/s]06/20/2023 10:54:48 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/valid.txt\r\nGenerating test split: 0%| | 0/3453 [00:00<?, ? examples/s]06/20/2023 10:54:48 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/test.txt\r\nDataset conll2003 downloaded and prepared to /Users/xiaoliwang/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98. 
Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1282.14it/s]\r\nloading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json\r\nModel config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\",\r\n \"2\": \"LABEL_2\",\r\n \"3\": \"LABEL_3\",\r\n \"4\": \"LABEL_4\",\r\n \"5\": \"LABEL_5\",\r\n \"6\": \"LABEL_6\",\r\n \"7\": \"LABEL_7\",\r\n \"8\": \"LABEL_8\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1,\r\n \"LABEL_2\": 2,\r\n \"LABEL_3\": 3,\r\n \"LABEL_4\": 4,\r\n \"LABEL_5\": 5,\r\n \"LABEL_6\": 6,\r\n \"LABEL_7\": 7,\r\n \"LABEL_8\": 8\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nloading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json\r\nModel config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nloading file vocab.txt from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/vocab.txt\r\nloading file tokenizer.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/tokenizer.json\r\nloading file added_tokens.json from cache at None\r\nloading file special_tokens_map.json from cache at None\r\nloading file tokenizer_config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/tokenizer_config.json\r\nloading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json\r\nModel config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n 
\"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nDownloading model.safetensors: 100%|███████████████████████████████████████████| 440M/440M [00:22<00:00, 19.8MB/s]\r\nloading weights file model.safetensors from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/model.safetensors\r\nSome weights of the model checkpoint at bert-base-uncased were not used when initializing BertForTokenClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight']\r\n- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BertForTokenClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n06/20/2023 10:55:15 - INFO - __main__ - Sample 622 of the training set: {'input_ids': [101, 2522, 6657, 15222, 6962, 1015, 19739, 20486, 2072, 1014, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 3, -100, -100, -100, 0, 3, -100, -100, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.\r\n06/20/2023 10:55:15 - INFO - __main__ - Sample 12142 of the training set: {'input_ids': [101, 2019, 26354, 4861, 2056, 2008, 9779, 9048, 2015, 1010, 2007, 2095, 1011, 2203, 2727, 7045, 1997, 2149, 1002, 2184, 1012, 1023, 2454, 1998, 10067, 1997, 1002, 2184, 1012, 1019, 2454, 1010, 2052, 2022, 3205, 2006, 1996, 5548, 4518, 3863, 1010, 2021, 2106, 2025, 2360, 2043, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 0, 3, 0, 0, 0, 3, -100, -100, 0, 0, 0, -100, -100, 0, 0, 0, 7, -100, 0, -100, -100, 0, 0, 0, 0, 0, 0, -100, -100, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.\r\n06/20/2023 10:55:15 - INFO - __main__ - Sample 4570 of the training set: {'input_ids': [101, 2117, 2679, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 0, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.\r\nDownloading builder script: 100%|████████████████████████████████████████████| 6.34k/6.34k [00:00<00:00, 9.02MB/s]\r\n06/20/2023 10:55:18 - INFO - __main__ - ***** Running training *****\r\n06/20/2023 10:55:18 - INFO - __main__ - Num examples = 14041\r\n06/20/2023 10:55:18 - INFO - __main__ - Num Epochs = 3\r\n06/20/2023 10:55:18 - INFO - __main__ - Instantaneous batch size per device = 8\r\n06/20/2023 10:55:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8\r\n06/20/2023 10:55:18 - INFO - __main__ - Gradient Accumulation steps = 1\r\n06/20/2023 10:55:18 - INFO - __main__ - Total optimization steps = 5268\r\n 33%|███████████████████████▋ | 1756/5268 [24:08<1:29:30, 1.53s/it]epoch 0: {'LOC_precision': 0.9499192245557351, 'LOC_recall': 0.9602612955906369, 'LOC_f1': 0.9550622631293991, 'LOC_number': 1837, 'MISC_precision': 0.8572972972972973, 'MISC_recall': 0.8600867678958786, 'MISC_f1': 0.858689767190038, 'MISC_number': 922, 'ORG_precision': 0.8539482879105521, 'ORG_recall': 0.9112602535421327, 'ORG_f1': 0.8816738816738816, 'ORG_number': 1341, 'PER_precision': 0.9776810016330975, 'PER_recall': 0.9766177270255574, 'PER_f1': 0.9771490750816105, 'PER_number': 1839, 'overall_precision': 0.9214876033057852, 'overall_recall': 0.9387102205758545, 'overall_f1': 0.9300191842522312, 'overall_accuracy': 0.9868336482091035}\r\n 67%|████████████████████████████████████████████████▋ | 3512/5268 [50:27<18:04, 1.62it/s]epoch 1: {'LOC_precision': 0.9637760702524698, 'LOC_recall': 0.9559063690800218, 'LOC_f1': 0.9598250888220825, 'LOC_number': 1837, 'MISC_precision': 0.8524251805985552, 'MISC_recall': 0.89587852494577, 'MISC_f1': 0.8736118455843469, 'MISC_number': 922, 'ORG_precision': 0.892675852066715, 'ORG_recall': 0.9179716629381058, 'ORG_f1': 0.9051470588235293, 'ORG_number': 1341, 'PER_precision': 0.9721925133689839, 'PER_recall': 0.9885807504078303, 'PER_f1': 0.9803181450525748, 'PER_number': 1839, 'overall_precision': 0.9322847682119205, 'overall_recall': 0.9481394174103385, 'overall_f1': 0.940145254194841, 'overall_accuracy': 0.9880217361665661}\r\n100%|███████████████████████████████████████████████████████████████████████| 5268/5268 [1:15:39<00:00, 1.44it/s]epoch 2: {'LOC_precision': 0.9538378958668814, 'LOC_recall': 0.9673380511703865, 'LOC_f1': 
0.9605405405405405, 'LOC_number': 1837, 'MISC_precision': 0.8783351120597652, 'MISC_recall': 0.8926247288503254, 'MISC_f1': 0.8854222700376547, 'MISC_number': 922, 'ORG_precision': 0.9074759437453738, 'ORG_recall': 0.9142431021625652, 'ORG_f1': 0.9108469539375927, 'ORG_number': 1341, 'PER_precision': 0.9751619870410367, 'PER_recall': 0.9820554649265906, 'PER_f1': 0.978596586290978, 'PER_number': 1839, 'overall_precision': 0.9381975678827253, 'overall_recall': 0.94830779592524, 'overall_f1': 0.9432255903533747, 'overall_accuracy': 0.9891513935687436}\r\nConfiguration saved in /tmp/test-ner/config.json\r\nModel weights saved in /tmp/test-ner/pytorch_model.bin\r\ntokenizer config file saved in /tmp/test-ner/tokenizer_config.json\r\nSpecial tokens file saved in /tmp/test-ner/special_tokens_map.json\r\nTraceback (most recent call last):\r\n File \"/Users/xiaoliwang/repo/research/huggingface/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py\", line 784, in <module>\r\n main()\r\n File \"/Users/xiaoliwang/repo/research/huggingface/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py\", line 780, in main\r\n json.dump(all_results, f)\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/__init__.py\", line 179, in dump\r\n for chunk in iterable:\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py\", line 432, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py\", line 406, in _iterencode_dict\r\n yield from chunks\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py\", line 439, in _iterencode\r\n o = _default(o)\r\n ^^^^^^^^^^^\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py\", line 180, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type int64 is not JSON serializable\r\n100%|███████████████████████████████████████████████████████████████████████| 5268/5268 [1:17:11<00:00, 1.14it/s]\r\nTraceback (most recent call last):\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n ^^^^^^\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n args.func(args)\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/launch.py\", line 969, in launch_command\r\n simple_launcher(args)\r\n File \"/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/launch.py\", line 625, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/Users/xiaoliwang/development/miniforge3/envs/transformers/bin/python3.11', 'run_ner_no_trainer.py', '--model_name_or_path', 'bert-base-uncased', '--dataset_name', 'conll2003', '--output_dir', '/tmp/test-ner', '--pad_to_max_length', '--task_name', 'ner', '--return_entity_level_metrics']' returned non-zero exit status 1.\r\n```\r\n\r\nI have reproduced this on my Macbook Air M1 with mps accleration enabled. The full error messages have been posted above here, same as on my Ubuntu workstation.",
"@amyeroberts Thanks for your comments!\r\n\r\nI think your idea is good, and I understand that your intention is obviously to avoid that `int` convertment of everything. \r\n\r\nBut according to this page https://docs.python.org/3/library/json.html\r\n```\r\nIf specified, default should be a function that gets called for objects that can’t otherwise be serialized. \r\nIt should return a JSON encodable version of the object or raise a [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError). \r\nIf not specified, [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) is raised.\r\n```\r\n\r\nFrom my understanding, this `default` parameter is just likely giving a new converter function, and in this case that function is a concise `int()`, yes, that's it. I think we don't need to write a new handler function to handling all different object types here, because we only cannot handle/serialize the `np.int64` here.\r\nSo in the future if we have something more than that, I could definitely to write a new hanlder to take good care of them, hence for the time being, I think `default=int` is a good enough solution :)",
"Hi @amyeroberts, I have changed that a little bit as you mentioned before :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"@xiaoli For the quality CI checks, you'll need to run `make style` at the top level of the repo and push any changes that are applied. Once this is done, CI should all be green and branch good to merge in 👍 ",
"> @xiaoli For the quality CI checks, you'll need to run `make style` at the top level of the repo and push any changes that are applied. Once this is done, CI should all be green and branch good to merge in 👍\r\n\r\n@amyeroberts Thanks for intructions, but I am afraid that so many files being changed after `make style` execution:\r\n\r\n```\r\n(transformers) ➜ transformers git:(main) ✗ git status\r\nOn branch main\r\nYour branch is ahead of 'origin/main' by 8 commits.\r\n (use \"git push\" to publish your local commits)\r\n\r\nChanges not staged for commit:\r\n (use \"git add <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n\tmodified: examples/research_projects/codeparrot/scripts/human_eval.py\r\n\tmodified: examples/research_projects/fsner/src/fsner/tokenizer_utils.py\r\n\tmodified: examples/research_projects/jax-projects/big_bird/prepare_natural_questions.py\r\n\tmodified: examples/research_projects/luke/run_luke_ner_no_trainer.py\r\n\tmodified: examples/research_projects/lxmert/modeling_frcnn.py\r\n\tmodified: examples/research_projects/visual_bert/modeling_frcnn.py\r\n\tmodified: src/transformers/generation/logits_process.py\r\n\tmodified: src/transformers/generation/tf_logits_process.py\r\n\tmodified: src/transformers/generation/tf_utils.py\r\n\tmodified: src/transformers/keras_callbacks.py\r\n\tmodified: src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py\r\n\tmodified: src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py\r\n\tmodified: src/transformers/models/deta/modeling_deta.py\r\n\tmodified: src/transformers/models/dpr/tokenization_dpr.py\r\n\tmodified: src/transformers/models/dpr/tokenization_dpr_fast.py\r\n\tmodified: src/transformers/models/pegasus/convert_pegasus_tf_to_pytorch.py\r\n\tmodified: src/transformers/models/sam/processing_sam.py\r\n\tmodified: tests/generation/test_framework_agnostic.py\r\n\tmodified: tests/models/codegen/test_modeling_codegen.py\r\n\tmodified: tests/models/data2vec/test_modeling_data2vec_audio.py\r\n\tmodified: tests/models/encodec/test_modeling_encodec.py\r\n\tmodified: tests/models/gpt2/test_modeling_gpt2.py\r\n\tmodified: tests/models/gptj/test_modeling_gptj.py\r\n\tmodified: tests/models/hubert/test_modeling_hubert.py\r\n\tmodified: tests/models/mctct/test_modeling_mctct.py\r\n\tmodified: tests/models/rwkv/test_modeling_rwkv.py\r\n\tmodified: tests/models/sew/test_modeling_sew.py\r\n\tmodified: tests/models/sew_d/test_modeling_sew_d.py\r\n\tmodified: tests/models/speecht5/test_modeling_speecht5.py\r\n\tmodified: tests/models/unispeech/test_modeling_unispeech.py\r\n\tmodified: tests/models/unispeech_sat/test_modeling_unispeech_sat.py\r\n\tmodified: tests/models/wav2vec2/test_modeling_flax_wav2vec2.py\r\n\tmodified: tests/models/wav2vec2/test_modeling_wav2vec2.py\r\n\tmodified: tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py\r\n\tmodified: tests/models/wavlm/test_modeling_wavlm.py\r\n\tmodified: tests/models/whisper/test_modeling_whisper.py\r\n\tmodified: tests/onnx/test_onnx.py\r\n\tmodified: tests/test_modeling_tf_common.py\r\n\tmodified: tests/test_tokenization_common.py\r\n\tmodified: tests/trainer/test_trainer_seq2seq.py\r\n\tmodified: utils/check_copies.py\r\n\tmodified: utils/create_dummy_models.py\r\n\tmodified: utils/tests_fetcher.py\r\n\r\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\r\n```",
"@amyeroberts `make style` changes are committed, thank you 😁"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixed the "TypeError: Object of type int64 is not JSON serializable" error raised when dumping `all_results` to JSON.
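A minimal sketch of the approach (values are illustrative; the call site is the `json.dump(all_results, f)` shown in the traceback in the comments above):
```python
import json

import numpy as np

# all_results can contain numpy scalars (e.g. np.int64 counts from the metrics),
# which the stdlib encoder rejects; default=int converts them on the fly.
all_results = {"eval_f1": 0.94, "LOC_number": np.int64(1837)}
with open("all_results.json", "w") as f:
    json.dump(all_results, f, default=int)
```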
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24340/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24340",
"html_url": "https://github.com/huggingface/transformers/pull/24340",
"diff_url": "https://github.com/huggingface/transformers/pull/24340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24340.patch",
"merged_at": 1687864549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24339
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24339/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24339/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24339/events
|
https://github.com/huggingface/transformers/issues/24339
| 1,762,099,274 |
I_kwDOCUB6oc5pB4BK
| 24,339 |
feat: `agent.run(return_agent_types=True)`
|
{
"login": "aarnphm",
"id": 29749331,
"node_id": "MDQ6VXNlcjI5NzQ5MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29749331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aarnphm",
"html_url": "https://github.com/aarnphm",
"followers_url": "https://api.github.com/users/aarnphm/followers",
"following_url": "https://api.github.com/users/aarnphm/following{/other_user}",
"gists_url": "https://api.github.com/users/aarnphm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aarnphm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aarnphm/subscriptions",
"organizations_url": "https://api.github.com/users/aarnphm/orgs",
"repos_url": "https://api.github.com/users/aarnphm/repos",
"events_url": "https://api.github.com/users/aarnphm/events{/privacy}",
"received_events_url": "https://api.github.com/users/aarnphm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @aarnphm, could you provide a code sample with the return you'd like to receive so that I can play with it and see if it makes sense to implement it? Thanks!",
"For example, I'm currently building [OpenLLM](https://github.com/bentoml/OpenLLM) and came across a use case where one can define an agent to generate an image and then caption it via a pipeline using [BentoML Runner](https://docs.bentoml.org/en/latest/concepts/runner.html#what-is-runner)\r\n\r\nOpenLLM also provides support for HuggingFace Agent where users will can switch between The inference endpoint or hosting their own starcoder.\r\n\r\nGiven the following segment to save a `captioning` pipeline\r\n\r\n```python\r\nimport bentoml, transformers\r\n\r\nbentoml.transformers.save_model(\"captioning\", pipeline('image-captioning'))\r\n```\r\n\r\nRunner is distributed by nature, and it can be defined in a service.py like so:\r\n\r\n```python\r\nimport bentoml\r\nimport transformers\r\n\r\ncaptioning_runner = bentoml.transformers.get(\"captioning\").to_runner()\r\n\r\nagent = transformers.HfAgent(\"http://283.23.22.1:3000/hf/agent\") # `openllm start starcoder`\r\n\r\nservice = bentoml.Service(\"agent-with-runners\", runners=[captioning_runner])\r\n\r\ndef preprocess(input_tensor: torch.Tensor) -> torch.Tensor:\r\n\t... \r\n\r\[email protected](input=bentoml.io.Text(), output=bentoml.io.Text())\r\nasync def transcribe_audio_to_french(prompt: str):\r\n\timage_output: ImageAgentType = agent.run(prompt, ..., return_agent_types=True)\r\n\t# then I do some preprocess with this tensor\r\n\tinput_for_pipeline = preprocess(image_output.to_raw())\r\n\treturn await async captioning_runner.async_run(input_for_pipeline)\r\n```\r\n\r\nYou can run this with `bentoml serve service.py:svc`\r\n\r\nThis can be one use case of the AgentType that can be useful here, where one can access the tensor directly without having to convert from PIL.Image output (which is currently what the agent.run returns if it returns an image if I understand it correctly)\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
### Feature request
Currently, `agent.run` on main will run the materializer from `AgentType` to return its corresponding type.
I think it would be a great addition to just return this `AgentType` directly for external libraries to build on top of!
```python
agent = transformers.HfAgent("inference-api-endpoint")
res: AgentType = agent.run(..., return_agent_types=True)
```
### Motivation
I'm currently playing around with the new agent API, and found that in cases where I don't want to return the decoded outputs immediately, it would be nice to get the `AgentType` and manage the materialization myself.
### Your contribution
I can help create a PR, but I know that the Agent API is still very experimental and unstable.
cc @LysandreJik on this
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24339/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24338
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24338/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24338/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24338/events
|
https://github.com/huggingface/transformers/pull/24338
| 1,762,016,390 |
PR_kwDOCUB6oc5TQx9A
| 24,338 |
Add SophiaG.
|
{
"login": "guilt",
"id": 195178,
"node_id": "MDQ6VXNlcjE5NTE3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/195178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guilt",
"html_url": "https://github.com/guilt",
"followers_url": "https://api.github.com/users/guilt/followers",
"following_url": "https://api.github.com/users/guilt/following{/other_user}",
"gists_url": "https://api.github.com/users/guilt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guilt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guilt/subscriptions",
"organizations_url": "https://api.github.com/users/guilt/orgs",
"repos_url": "https://api.github.com/users/guilt/repos",
"events_url": "https://api.github.com/users/guilt/events{/privacy}",
"received_events_url": "https://api.github.com/users/guilt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"[Semi-related] also linking to Paper page on HF: https://huggingface.co/papers/2305.14342",
"Thank you all for encouraging this. \r\n\r\nAs a first cut, I am working with the authors to see if this can be a PyPi package if authors agree. License is MIT, so hopefully we can get this out soon. [Sophia on PyPi](https://github.com/Liuhong99/Sophia/issues/29)",
"As Younes said before, we won't merge this PR but can leave it for anyone who want to try this out: Transformers is a library of models, not optimizers (the optimizers inside the library are actually deprecated). Once there is a package supporting this optimizer we can add support for the Trainer like we did for the `bitsandbytes` optimizers.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
# What does this PR do?
This is a scratch PR showing how to test Sophia with Transformers. It is in no way
production ready, and licensing certainly needs a look. But this is helpful if someone needs
to try this right away. I'm re-using **AdamW**'s `beta` values. Plus, if you look carefully,
there's an ugly hack where I'm using `eps` as `rho`.
This is code directly copy-pasta-ed from @Liuhong99's [Sophia](https://github.com/Liuhong99/Sophia);
I am putting it here so people can experiment with it and see how it compares to **AdamW**. If there
is sufficient interest in adding this and it can be licensed, I would be happy to work on it here. Anyone is free to take
this and turn it into something of value. Please close this as necessary too.
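Not part of the PR diff, but a rough sketch of how an externally packaged `SophiaG` could be wired into the `Trainer` without touching the library, assuming the optimizer keeps the interface from the Sophia repo (the import path, checkpoint and `train_dataset` are illustrative):
```python
from sophia import SophiaG  # hypothetical package import for the copy-pasted optimizer
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
optimizer = SophiaG(model.parameters(), lr=2e-4, rho=0.04, weight_decay=0.1)

# Passing the optimizer in directly keeps Trainer unchanged; None lets it build
# the default LR scheduler. train_dataset is assumed to exist elsewhere.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=train_dataset,
    optimizers=(optimizer, None),
)
trainer.train()
```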
## Before submitting
This PR does none of the above. It is too early to do this, but if there is sufficient interest I would be happy to go through this process.
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada
Common:
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24338/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24338",
"html_url": "https://github.com/huggingface/transformers/pull/24338",
"diff_url": "https://github.com/huggingface/transformers/pull/24338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24338.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24337
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24337/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24337/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24337/events
|
https://github.com/huggingface/transformers/issues/24337
| 1,762,014,491 |
I_kwDOCUB6oc5pBjUb
| 24,337 |
past_key_values is not working as expected for falcon-7b
|
{
"login": "orgadhadas",
"id": 27919205,
"node_id": "MDQ6VXNlcjI3OTE5MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/27919205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orgadhadas",
"html_url": "https://github.com/orgadhadas",
"followers_url": "https://api.github.com/users/orgadhadas/followers",
"following_url": "https://api.github.com/users/orgadhadas/following{/other_user}",
"gists_url": "https://api.github.com/users/orgadhadas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orgadhadas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orgadhadas/subscriptions",
"organizations_url": "https://api.github.com/users/orgadhadas/orgs",
"repos_url": "https://api.github.com/users/orgadhadas/repos",
"events_url": "https://api.github.com/users/orgadhadas/events{/privacy}",
"received_events_url": "https://api.github.com/users/orgadhadas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @orgadhadas \r\nThanks for the issue, I think the canonical way to use past key values is to set `use_cache=True` when calling `model.generate`. I think the remote code supports that argument as you can see here: https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/modelling_RW.py#L699 Can you share with us why you want to define a custom past key value mechanism?",
"I need to use the model.forward method, to get access to the hidden states computed during inference (for research). I hope this answers the question.",
"You have the `output_hidden_states` argument which should output the hidden states no? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,690 | 1,690 |
NONE
| null |
### System Info
Hello,
I've been trying to use past_key_values to speed up text generation, but it doesn't seem to work: instead of generating coherent text as it does when I'm not using past_key_values, it seems to generate the same token over and over again. I've been trying to search the web for usage guidelines and it seemed to me like I'm doing everything correctly, but maybe I'm missing something.
Thank you!
### Who can help?
@ArthurZucker @younesbelkada - I think you're the relevant people for this.
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
import pandas as pd
import pickle
from transformers import pipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True).to(device)
# WITH past_key_values
def get_answer_from_model1(model, tokenizer, q, max_new_tokens=50):
predicted_token_id = None
prompt = q
generated_text = ""
n_new_tokens = 0
past_key_values=None
while (predicted_token_id != tokenizer.eos_token_id) and (n_new_tokens < max_new_tokens):
if predicted_token_id is not None:
model_input = tokenizer(predicted_token, return_tensors='pt').to(device)
else:
model_input = tokenizer(prompt, return_tensors='pt').to(device)
with torch.no_grad():
model_output = model(model_input['input_ids'], past_key_values=past_key_values)
past_key_values = model_output['past_key_values']
logits = model_output['logits']
predicted_token_id = logits.argmax(-1)[0][-1]
predicted_token = tokenizer.decode(predicted_token_id)
if predicted_token_id != tokenizer.eos_token_id:
prompt += predicted_token
generated_text += predicted_token
n_new_tokens += 1
return generated_text
# WITHOUT past_key_values
def get_answer_from_model2(model, tokenizer, q, max_new_tokens=50):
predicted_token_id = None
prompt = q
generated_text = ""
n_new_tokens = 0
past_key_values=None
while (predicted_token_id != tokenizer.eos_token_id) and (n_new_tokens < max_new_tokens):
model_input = tokenizer(prompt, return_tensors='pt').to(device)
with torch.no_grad():
model_output = model(model_input['input_ids'], past_key_values=past_key_values)
logits = model_output['logits']
predicted_token_id = logits.argmax(-1)[0][-1]
predicted_token = tokenizer.decode(predicted_token_id)
if predicted_token_id != tokenizer.eos_token_id:
prompt += predicted_token
generated_text += predicted_token
n_new_tokens += 1
return generated_text
q="hello"
answer1 = get_answer_from_model1(model, tokenizer, q)
print(answer1)
answer2 = get_answer_from_model2(model, tokenizer, q)
print(answer2)
```
### Expected behavior
answer1 and answer2 should be the same
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24337/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24336
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24336/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24336/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24336/events
|
https://github.com/huggingface/transformers/pull/24336
| 1,761,997,298 |
PR_kwDOCUB6oc5TQuIN
| 24,336 |
Fix link to documentation in Install from Source
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@amyeroberts You are welcome! Thanks for creating Transformers library ! :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
Fix link to documentation _to install Transformers from Source_. Probably the title changed at some point from 'Installing' to 'Install' and the verbose message in utils broke.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the link to install from source in the verbose message inside _utils_.
Context : found during exploration of translation tutorial script and work related to #24254
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24336/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24336",
"html_url": "https://github.com/huggingface/transformers/pull/24336",
"diff_url": "https://github.com/huggingface/transformers/pull/24336.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24336.patch",
"merged_at": 1687191176000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24335
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24335/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24335/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24335/events
|
https://github.com/huggingface/transformers/pull/24335
| 1,761,991,673 |
PR_kwDOCUB6oc5TQtBl
| 24,335 |
[Wav2Vec2 - MMS] Correct directly loading adapters weights
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, I accidentally submitted the review without a saved comment. I realised in the `from_pretrained` call why you were using `pass`. I still think raising an exception would be good, as otherwise we can get silent behaviour. Would it be possible to reliably check if `load_adaptive_weights` should be implemented for a model? \r\n\r\np.s. ignoring the wandb diffs, as they're just from being out-of-date from main"
] | 1,687 | 1,687 | 1,687 |
MEMBER
| null |
# What does this PR do?
This PR corrects incorrect behavior when loading MMS with non-default adapter weights via `from_pretrained(...)`. The issue is explained well [here](https://github.com/huggingface/transformers/issues/24223#issuecomment-1595856093).
In a nutshell, we cannot load specific weights in the init because these loaded weights are later overwritten again in `from_pretrained`. To solve this I propose to add a new generic
```py
load_adaptive_weights()
```
call to `from_pretrained` that can be overridden by models that inherit from `PretrainedModel`. This both solves the issue #24223
and is also cleaner IMO since weights shouldn't be loaded when calling the `__init__` method of a model anyway. It was weird before that:
```py
model = Wav2Vec2ForCTC(config, target_lang="fra")
```
would try to load weights into the model.
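For context, a small sketch of the user-facing path this fix targets (checkpoint and language codes follow the MMS docs and are illustrative):
```python
from transformers import Wav2Vec2ForCTC

# Loading a non-default adapter directly in from_pretrained should now actually
# keep the "fra" adapter weights instead of having them overwritten afterwards.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/mms-1b-all", target_lang="fra", ignore_mismatched_sizes=True
)

# Swapping adapters later keeps working as before.
model.load_adapter("eng")
```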
cc @sgugger @sanchit-gandhi @amyeroberts wdyt about the design? Happy to add some more tests if ok for you
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24335/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24335",
"html_url": "https://github.com/huggingface/transformers/pull/24335",
"diff_url": "https://github.com/huggingface/transformers/pull/24335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24335.patch",
"merged_at": 1687282792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24334
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24334/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24334/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24334/events
|
https://github.com/huggingface/transformers/pull/24334
| 1,761,948,660 |
PR_kwDOCUB6oc5TQkd9
| 24,334 |
Generate: add SequenceBiasLogitsProcessor
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @amyeroberts \r\n\r\nA further request for approval, as I've introduced a pattern that I'd like to repeat (assuming you're okay with it).\r\n\r\nIn the latest commit, you'll see:\r\n1. An example in the logit processor class;\r\n2. The new generation config field's docstring redirecting to the corresponding logit processor docs;\r\n\r\nThis allows me to write a clear explanation and example (or examples) for each configuration option, without creating a monster in the generation config docstring. The user gets high-level info in the generation config docstring, and details in each processor.\r\n\r\nThe examples are more useful if they are relative to common use cases, in this case different `.generate()` parameterization. However, the example is sitting in the logit processor class, and does not make **direct** reference to the class. The alternative, to create an example using the class directly, is not very desirable either -- very few people use the logit processor classes directly. \r\n\r\nLMK if you have suggestions and/or if you agree 🤗 \r\n",
"@gante, the m4 eval code broke after this PR was merged:\r\n\r\n```\r\nstderr: File \"/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/launch.py\", line 143, in <module>\r\nstderr: main(args)\r\nstderr: File \"/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/launch.py\", line 97, in main\r\nstderr: score = evaluator(task, accelerator, model, args)\r\nstderr: File \"/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/evaluators/in_contexter.py\", line 262, in in_contexter\r\nstderr: metric = task.add_batch_metric(metric, **kwargs)\r\nstderr: File \"/mnt/nvme0/code/huggingface/m4-master-3/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py\", line 338, in add_batch_metric\r\nstderr: generated_tokens = self.generate_tokens(**kwargs)\r\nstderr: File \"/mnt/nvme0/code/huggingface/m4-master-3/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py\", line 314, in generate_tokens\r\nstderr: generated_tokens = unwrapped_model.generate(\r\nstderr: File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\nstderr: return func(*args, **kwargs)\r\nstderr: File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/utils.py\", line 1627, in generate\r\nstderr: return self.beam_search(\r\nstderr: File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/utils.py\", line 2951, in beam_search\r\nstderr: next_token_scores_processed = logits_processor(input_ids, next_token_scores)\r\nstderr: File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py\", line 92, in __call__\r\nstderr: scores = processor(input_ids, scores)\r\nstderr: File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py\", line 618, in __call__\r\nstderr: self._prepare_bias_variables(scores)\r\nstderr: File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py\", line 678, in _prepare_bias_variables\r\nstderr: raise ValueError(\r\nstderr: ValueError: Setting a bias on sequences that share a common token termination is not yet supported. Please open an issue if you see this error message (after checking that it doesn't already exist).\r\n```\r\n\r\nwhat should we change?",
"@stas00 interesting, I thought no relevant use case would hit this issue. I will open a PR with a fix!\r\n\r\n(meanwhile, the solutions are either to a) downgrade transformers; or b) remove this exception if you're using the `bad_words_ids` generate argument, which should be fine)",
"Thank you, Joao\r\n\r\nWe are going to port the m4-pretrained model into `transformers` shortly, so neither of these proposals is an option in the long run. But a PR with a fix is - it's not urgent urgent as we meanwhile can use the older transformers."
] | 1,687 | 1,689 | 1,687 |
MEMBER
| null |
# What does this PR do?
Closes #22168
As per [popular demand](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997), adds a logits processor that applies a bias to certain sequences -- `SequenceBiasLogitsProcessor`
This manipulation is a more general case of forbidding certain sequences -- `NoBadWordsLogitsProcessor` corresponds to applying an infinite negative bias. As such, this PR makes `NoBadWordsLogitsProcessor` a subclass of the new processor. In the refactoring process, I've rewritten this class to a) be more readable (clear variable naming, comments, docstrings); and b) be faster (through some vectorization).
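As an illustration (not taken from the PR diff), roughly how the new bias can be exercised end-to-end, assuming the `sequence_bias` argument is exposed through the generation config as proposed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt")

def tokens_as_tuple(word):
    # sequence_bias maps tuples of token ids to a float bias
    return tuple(tokenizer([word], add_special_tokens=False).input_ids[0])

# Negative values discourage a sequence; float("-inf") forbids it (the bad-words case).
sequence_bias = {tokens_as_tuple(" Trump"): -10.0}
outputs = model.generate(**inputs, max_new_tokens=4, sequence_bias=sequence_bias)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```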
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24334/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24334",
"html_url": "https://github.com/huggingface/transformers/pull/24334",
"diff_url": "https://github.com/huggingface/transformers/pull/24334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24334.patch",
"merged_at": 1687342481000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24333
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24333/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24333/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24333/events
|
https://github.com/huggingface/transformers/pull/24333
| 1,761,923,401 |
PR_kwDOCUB6oc5TQfgb
| 24,333 |
Fix `KerasMetricCallback`: pass `generate_kwargs` even if `use_xla_generation` is False
|
{
"login": "Kripner",
"id": 9218121,
"node_id": "MDQ6VXNlcjkyMTgxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9218121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kripner",
"html_url": "https://github.com/Kripner",
"followers_url": "https://api.github.com/users/Kripner/followers",
"following_url": "https://api.github.com/users/Kripner/following{/other_user}",
"gists_url": "https://api.github.com/users/Kripner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kripner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kripner/subscriptions",
"organizations_url": "https://api.github.com/users/Kripner/orgs",
"repos_url": "https://api.github.com/users/Kripner/repos",
"events_url": "https://api.github.com/users/Kripner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kripner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Currently, `KerasMetricCallback` ignores the `generate_kwargs` argument if `use_xla_generation` is set to `False` (which is the default). This means that when not using XLA, the user can't pass arguments like `max_new_tokens` to the `generate` method being called in `on_epoch_end`. It's also in contradiction with the docstring for `generate_kwargs`, which states:
> Keyword arguments to pass to `model.generate()` when generating. Has no effect if `predict_with_generate` is `False`.
This PR fixes the issue by passing `generate_kwargs` to `model.generate()` in the branch of execution where `use_xla_generation` is `False`.
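A short sketch of the call pattern this fix enables (the metric body, datasets and model are placeholders rather than part of the PR):
```python
from transformers.keras_callbacks import KerasMetricCallback

def compute_metrics(eval_predictions):
    predictions, labels = eval_predictions
    return {"n_eval_samples": len(predictions)}  # placeholder metric

metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,
    eval_dataset=tf_eval_dataset,           # an existing tf.data.Dataset
    predict_with_generate=True,
    use_xla_generation=False,               # default; generate_kwargs is now still honoured
    generate_kwargs={"max_new_tokens": 32},
)
model.fit(tf_train_dataset, callbacks=[metric_callback])  # a compiled Keras model
```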
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24333/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24333",
"html_url": "https://github.com/huggingface/transformers/pull/24333",
"diff_url": "https://github.com/huggingface/transformers/pull/24333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24333.patch",
"merged_at": 1687175485000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24332
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24332/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24332/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24332/events
|
https://github.com/huggingface/transformers/issues/24332
| 1,761,760,676 |
I_kwDOCUB6oc5pAlWk
| 24,332 |
Why it always raise error like this?
|
{
"login": "ElinLiu0",
"id": 75596885,
"node_id": "MDQ6VXNlcjc1NTk2ODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/75596885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElinLiu0",
"html_url": "https://github.com/ElinLiu0",
"followers_url": "https://api.github.com/users/ElinLiu0/followers",
"following_url": "https://api.github.com/users/ElinLiu0/following{/other_user}",
"gists_url": "https://api.github.com/users/ElinLiu0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ElinLiu0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElinLiu0/subscriptions",
"organizations_url": "https://api.github.com/users/ElinLiu0/orgs",
"repos_url": "https://api.github.com/users/ElinLiu0/repos",
"events_url": "https://api.github.com/users/ElinLiu0/events{/privacy}",
"received_events_url": "https://api.github.com/users/ElinLiu0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @RosterMouch, thanks for raising this issue. \r\n\r\nA few questions on our side to try and help dig into the issue: \r\n* Could you share which verion of `huggingface_hub` is beign run in your environment? \r\n* When you say \"I could promise that there is not a network connection problem at all\", could you share how this was tested?\r\n* Is this an error the consistently happens or sporadically? \r\n* Is this issue only ever seen with this checkpoint or with other checkpoints too? "
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
I can promise that there is no network connection problem at all, but it still raised this error:
```bash
Traceback (most recent call last):
File "test.py", line 3, in <module>
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 444, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 928, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/configuration_utils.py", line 574, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/configuration_utils.py", line 629, in _get_config_dict
resolved_config_file = cached_file(
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/utils/hub.py", line 452, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like nomic-ai/gpt4all-j is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
And here are the SDK versions below:
```
Python 3.8.10
transformers 4.29.2
```
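For reference, a minimal cache-only check (the checkpoint and revision are taken from the snippet above; `local_files_only=True` forces the cached copy and fails fast without touching the network, which helps separate cache problems from connectivity):
```python
from transformers import AutoConfig

# Succeeds only if the config is already in the local cache; a failure here points
# at a missing or incomplete cache rather than at the connection.
config = AutoConfig.from_pretrained(
    "nomic-ai/gpt4all-j", revision="v1.2-jazzy", local_files_only=True
)
```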
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Not only gpt4all-j, but also falcon, using the code they provided:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Expected behavior
Fix it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24332/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24331
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24331/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24331/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24331/events
|
https://github.com/huggingface/transformers/pull/24331
| 1,761,638,661 |
PR_kwDOCUB6oc5TPqms
| 24,331 |
style: add BitsAndBytesConfig __repr__ function
|
{
"login": "aarnphm",
"id": 29749331,
"node_id": "MDQ6VXNlcjI5NzQ5MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29749331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aarnphm",
"html_url": "https://github.com/aarnphm",
"followers_url": "https://api.github.com/users/aarnphm/followers",
"following_url": "https://api.github.com/users/aarnphm/following{/other_user}",
"gists_url": "https://api.github.com/users/aarnphm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aarnphm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aarnphm/subscriptions",
"organizations_url": "https://api.github.com/users/aarnphm/orgs",
"repos_url": "https://api.github.com/users/aarnphm/repos",
"events_url": "https://api.github.com/users/aarnphm/events{/privacy}",
"received_events_url": "https://api.github.com/users/aarnphm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @amyeroberts I have addressed all of the issue. PTAL when you are available. Thanks a bunch.",
"Can you rebase your branch on main? This should fix the failing test. Thanks!",
"done.",
"Thanks a lot for working on this @aarnphm ! Nice job! "
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Add a `__repr__` to `transformers.BitsAndBytesConfig` to make it nice to print.
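A quick sketch of what this enables (field values are illustrative):
```python
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# With the new __repr__, printing shows the config fields instead of a bare object address.
print(quantization_config)
```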
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24331/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24331",
"html_url": "https://github.com/huggingface/transformers/pull/24331",
"diff_url": "https://github.com/huggingface/transformers/pull/24331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24331.patch",
"merged_at": 1687278368000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24330
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24330/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24330/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24330/events
|
https://github.com/huggingface/transformers/issues/24330
| 1,761,327,681 |
I_kwDOCUB6oc5o-7pB
| 24,330 |
Resuming / retraining the peft model
|
{
"login": "adityaaryan77",
"id": 69278251,
"node_id": "MDQ6VXNlcjY5Mjc4MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/69278251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adityaaryan77",
"html_url": "https://github.com/adityaaryan77",
"followers_url": "https://api.github.com/users/adityaaryan77/followers",
"following_url": "https://api.github.com/users/adityaaryan77/following{/other_user}",
"gists_url": "https://api.github.com/users/adityaaryan77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adityaaryan77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adityaaryan77/subscriptions",
"organizations_url": "https://api.github.com/users/adityaaryan77/orgs",
"repos_url": "https://api.github.com/users/adityaaryan77/repos",
"events_url": "https://api.github.com/users/adityaaryan77/events{/privacy}",
"received_events_url": "https://api.github.com/users/adityaaryan77/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @adityaaryan77, thanks for raising an issue! \r\n\r\nIt seems like this is a case of catastrophic forgetting, rather than a bug per se in the model or transformers library. As such question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf you believe this behaviour is related to a bug in the code, could you produce: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A _minimal_ code reproducer which we could run to replicate i.e. with data ",
"`transformers-cli env `\r\n- `transformers` version: 4.30.2\r\n- Platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nSure here is the pastebin for the code snippet I am using : https://pastebin.pl/view/4e77a13d\r\n\r\nAnd for example for data set: Here is a small example\r\nFirst time fine tuning:\r\n[\r\n{\"question\":\"What is my cats name?\",\"answer\":\"Tom\"}\r\n]\r\nNow using generate with \"What is my cats name gives\" response as \"Tom\"\r\nNow saving this model and loading it with resume_from_checkpoint for further fine tuning with \r\n[\r\n{\"question\":\"What is my dogs name?\",\"answer\":\"Bob\"}\r\n]\r\nAnd asking \"What is my cats name?\" gives response as \"Bob\" or sometimes repeats the question\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,690 | 1,690 |
NONE
| null |
### System Info
Although resume_from_checkpoint now works after @llohann-speranca solved the issue, fine-tuning again with new data using train(resume_from_checkpoint) and then testing makes the model forget the old data, i.e. it won't remember the things in the old dataset.
Attaching the code below:
```python
import json
import os
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training,
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
MODEL_NAME = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="auto",
trust_remote_code=True,
quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["query_key_value"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
print_trainable_parameters(model)
data = load_dataset("json", data_files="../localGPT/output.json")
def generate_prompt(data_point):
return f"""
: {data_point["question"]}
: {data_point["answer"]}
""".strip()
def generate_and_tokenize_prompt(data_point):
full_prompt = generate_prompt(data_point)
tokenized_full_prompt = tokenizer(full_prompt, padding=True, truncation=True)
return tokenized_full_prompt
data = data["train"].shuffle().map(generate_and_tokenize_prompt)
OUTPUT_DIR = "outputs"
training_args = transformers.TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
warmup_ratio=0.05,
max_steps=80,
learning_rate=2e-4,
fp16=True,
logging_steps=1,
save_total_limit=3,
output_dir=OUTPUT_DIR,
optim="paged_adamw_8bit",
lr_scheduler_type="cosine",
)
trainer = transformers.Trainer(
model=model,
train_dataset=data,
args=training_args,
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False
trainer.train(resume_from_checkpoint=True)
trainer.save_model(os.path.join(OUTPUT_DIR, "checkpoint-2"))
PEFT_MODEL = OUTPUT_DIR+"/checkpoint-2"
config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
return_dict=True,
quantization_config=bnb_config,
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, PEFT_MODEL)
generation_config = model.generation_config
generation_config.max_new_tokens = 20
generation_config.temperature = 0
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
DEVICE = "cuda:0"
prompt = """
:What is my cat's name?
:
""".strip()
encoding = tokenizer(prompt, return_tensors="pt").to(DEVICE)
with torch.inference_mode():
outputs = model.generate(
input_ids=encoding.input_ids,
attention_mask=encoding.attention_mask,
generation_config=generation_config,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I trained my model on my cat's name in the first iteration and saved it to checkpoint-1, then retrained it on my dog's name. Although it now knows my dog's name, it forgets my cat's name.
### Expected behavior
The model should remember my cat's name.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24330/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24329
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24329/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24329/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24329/events
|
https://github.com/huggingface/transformers/pull/24329
| 1,761,220,653 |
PR_kwDOCUB6oc5TOPaM
| 24,329 |
[Doc Fix] Fix model name path in the transformers doc for AutoClasses
|
{
"login": "riteshghorse",
"id": 25881114,
"node_id": "MDQ6VXNlcjI1ODgxMTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/25881114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshghorse",
"html_url": "https://github.com/riteshghorse",
"followers_url": "https://api.github.com/users/riteshghorse/followers",
"following_url": "https://api.github.com/users/riteshghorse/following{/other_user}",
"gists_url": "https://api.github.com/users/riteshghorse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riteshghorse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riteshghorse/subscriptions",
"organizations_url": "https://api.github.com/users/riteshghorse/orgs",
"repos_url": "https://api.github.com/users/riteshghorse/repos",
"events_url": "https://api.github.com/users/riteshghorse/events{/privacy}",
"received_events_url": "https://api.github.com/users/riteshghorse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"R: @stevhliu",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
Fixes the model name path in the transformers doc for the AutoTokenizer step.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24329/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24329",
"html_url": "https://github.com/huggingface/transformers/pull/24329",
"diff_url": "https://github.com/huggingface/transformers/pull/24329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24329.patch",
"merged_at": 1687192015000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24328
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24328/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24328/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24328/events
|
https://github.com/huggingface/transformers/pull/24328
| 1,761,157,284 |
PR_kwDOCUB6oc5TOBMm
| 24,328 |
save full_osd in fsdp mode
|
{
"login": "amartino1",
"id": 77289753,
"node_id": "MDQ6VXNlcjc3Mjg5NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/77289753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amartino1",
"html_url": "https://github.com/amartino1",
"followers_url": "https://api.github.com/users/amartino1/followers",
"following_url": "https://api.github.com/users/amartino1/following{/other_user}",
"gists_url": "https://api.github.com/users/amartino1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amartino1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amartino1/subscriptions",
"organizations_url": "https://api.github.com/users/amartino1/orgs",
"repos_url": "https://api.github.com/users/amartino1/repos",
"events_url": "https://api.github.com/users/amartino1/events{/privacy}",
"received_events_url": "https://api.github.com/users/amartino1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24328). All of your documentation changes will be reflected on that endpoint.",
"Hello, PR #24446 addresses this issue. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,690 | 1,690 |
NONE
| null |
Fixes a bug in which the variable `full_osd` is referenced before definition when running in FSDP mode. The fix allows model files to be saved when running with FSDP.
Links to issue: https://github.com/huggingface/transformers/issues/24057
Fixes #24057
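For illustration only, the general shape of a "referenced before definition" bug and its fix; the variable name follows the issue, but the structure below is a simplified assumption, not the actual `Trainer` code:
```python
# Hedged sketch of the failure mode (not the real Trainer implementation).
def save_optimizer_state(is_main_process, gather_full_state, path):
    full_osd = None  # the fix: make sure the name is bound on every rank
    if is_main_process:
        full_osd = gather_full_state()
    # Without the assignment above, ranks that skip the branch would raise
    # UnboundLocalError: local variable 'full_osd' referenced before assignment.
    if full_osd is not None:
        print(f"would save {len(full_osd)} entries to {path}")

save_optimizer_state(True, lambda: {"state": {}, "param_groups": []}, "optimizer.pt")
save_optimizer_state(False, lambda: {}, "optimizer.pt")
```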
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24328/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24328/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24328",
"html_url": "https://github.com/huggingface/transformers/pull/24328",
"diff_url": "https://github.com/huggingface/transformers/pull/24328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24328.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24327
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24327/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24327/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24327/events
|
https://github.com/huggingface/transformers/issues/24327
| 1,761,141,342 |
I_kwDOCUB6oc5o-OJe
| 24,327 |
AutoModelForSequenceClassification.from_config doesn't support LlamaConfig
|
{
"login": "kungfu-eric",
"id": 87145506,
"node_id": "MDQ6VXNlcjg3MTQ1NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/87145506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kungfu-eric",
"html_url": "https://github.com/kungfu-eric",
"followers_url": "https://api.github.com/users/kungfu-eric/followers",
"following_url": "https://api.github.com/users/kungfu-eric/following{/other_user}",
"gists_url": "https://api.github.com/users/kungfu-eric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kungfu-eric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kungfu-eric/subscriptions",
"organizations_url": "https://api.github.com/users/kungfu-eric/orgs",
"repos_url": "https://api.github.com/users/kungfu-eric/repos",
"events_url": "https://api.github.com/users/kungfu-eric/events{/privacy}",
"received_events_url": "https://api.github.com/users/kungfu-eric/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @kungfu-eric, \r\n\r\nThis issue is arising because you need to pass an instance of the config, rather than the config class i.e.:\r\n\r\n```python\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers.models.llama.configuration_llama import LlamaConfig\r\n\r\nconfig = LlamaConfig()\r\nmodel = AutoModelForSequenceClassification.from_config(config)\r\n```",
"What ended up fixing the issues was updating to transformers-4.30.2 from 4.28.1. @amyeroberts ah i wrote the simple example wrong. I did define the config in the actual full code. Thank you though."
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
Calling:
```
from transformers import AutoModelForSequenceClassification
from transformers.models.llama.configuration_llama import LlamaConfig
config = LlamaConfig()
model = AutoModelForSequenceClassification.from_config(config)
```
gives:
```
Traceback (most recent call last)
  in <module>:1
  /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:413 in from_config
    410         model_class = _get_model_class(config, cls._model_mapping)
    411         return model_class._from_config(config, **kwargs)
    412
  > 413     raise ValueError(
    414         f"Unrecognized configuration class {config.__class__} for this kind of AutoM
    415         f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapp
    416     )
ValueError: Unrecognized configuration class <class 'type'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, CamembertConfig, CanineConfig, ConvBertConfig,
CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig,
FunnelConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTJConfig, IBertConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config,
LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LukeConfig, MarkupLMConfig, MBartConfig, MegaConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MvpConfig,
NezhaConfig, NystromformerConfig, OpenAIGPTConfig, OPTConfig, PerceiverConfig, PLBartConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig,
RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, SqueezeBertConfig, TapasConfig, TransfoXLConfig, XLMConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig,
XmodConfig, YosoConfig.
```
### Who can help?
@ArthurZucker
### Expected behavior
Model should be able to be loaded from config with randomly initialized weights, preferably with bfloat16 and load_8bit support.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24327/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24326
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24326/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24326/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24326/events
|
https://github.com/huggingface/transformers/pull/24326
| 1,761,123,868 |
PR_kwDOCUB6oc5TN6Cr
| 24,326 |
Adding ddp_broadcast_buffers argument to Trainer
|
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
In #22482, using the Trainer with `gpt2` and other similar models failed in naive distributed mode. Passing `ddp_broadcast_buffers=False` to PyTorch's DDP wrapper fixes the issue. This PR surfaces that argument to the Trainer user.
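A hedged usage example of the new flag; the argument name comes from this PR, while the surrounding training setup is purely illustrative:
```python
from transformers import TrainingArguments

# ddp_broadcast_buffers=False is forwarded to PyTorch's
# DistributedDataParallel(broadcast_buffers=False) when the Trainer wraps the model.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    ddp_broadcast_buffers=False,
)
```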
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24326/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24326",
"html_url": "https://github.com/huggingface/transformers/pull/24326",
"diff_url": "https://github.com/huggingface/transformers/pull/24326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24326.patch",
"merged_at": 1686942843000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24325
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24325/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24325/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24325/events
|
https://github.com/huggingface/transformers/pull/24325
| 1,760,968,723 |
PR_kwDOCUB6oc5TNZNQ
| 24,325 |
[Time-Series] Added link to the blog in Tips
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24325). All of your documentation changes will be reflected on that endpoint.",
"file names changed, opened a new one here\r\nhttps://github.com/huggingface/transformers/pull/24482"
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
@kashif @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24325/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24325",
"html_url": "https://github.com/huggingface/transformers/pull/24325",
"diff_url": "https://github.com/huggingface/transformers/pull/24325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24325.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24324
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24324/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24324/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24324/events
|
https://github.com/huggingface/transformers/pull/24324
| 1,760,961,588 |
PR_kwDOCUB6oc5TNXqU
| 24,324 |
Allow passing kwargs through to TFBertTokenizer
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ping @amyeroberts for core maintainer review now that the extra functionality is working fine (see issue #23798)",
"I was a bit wary about the kwargs thing too - `FastBertTokenizer` and `BertTokenizerLayer` actually have wildly different arguments, so depending on which one you're using the kwargs you need will be totally different. Still, I think for an advanced use case it's fine - we're just trying to enable some power user behaviours without forcing them to edit the library source, and I'd prefer something general like this over specifically exposing the options I think people need (because I didn't even realize in advance that the `preserve_unused` arg would be valuable!)\r\n\r\nAnyway, merging!"
] | 1,686 | 1,687 | 1,687 |
MEMBER
| null |
There are some kwargs like `preserve_unused_tokens` in the underlying TF tokenizer layers that might be useful to expose to users. This PR exposes them by passing through any unrecognized `kwargs` in the model `__init__` to the TF tokenizer layer.
Fixes #23798
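A hedged example of the behaviour this enables; `preserve_unused_tokens` is the kwarg named above, and whether a given kwarg is accepted depends on which underlying TF tokenizer layer ends up being used:
```python
from transformers import TFBertTokenizer

# Kwargs not consumed by TFBertTokenizer itself are passed through to the
# underlying TF tokenizer layer (per this PR).
tf_tokenizer = TFBertTokenizer.from_pretrained(
    "bert-base-uncased",
    preserve_unused_tokens=True,
)
```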
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24324/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24324",
"html_url": "https://github.com/huggingface/transformers/pull/24324",
"diff_url": "https://github.com/huggingface/transformers/pull/24324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24324.patch",
"merged_at": 1687261746000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24323
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24323/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24323/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24323/events
|
https://github.com/huggingface/transformers/issues/24323
| 1,760,948,980 |
I_kwDOCUB6oc5o9fL0
| 24,323 |
Protobuf 4 support (again)
|
{
"login": "dustyketchum",
"id": 3933135,
"node_id": "MDQ6VXNlcjM5MzMxMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3933135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dustyketchum",
"html_url": "https://github.com/dustyketchum",
"followers_url": "https://api.github.com/users/dustyketchum/followers",
"following_url": "https://api.github.com/users/dustyketchum/following{/other_user}",
"gists_url": "https://api.github.com/users/dustyketchum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dustyketchum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dustyketchum/subscriptions",
"organizations_url": "https://api.github.com/users/dustyketchum/orgs",
"repos_url": "https://api.github.com/users/dustyketchum/repos",
"events_url": "https://api.github.com/users/dustyketchum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dustyketchum/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @dustyketchum, \r\n\r\nThanks for raising this issue. Managing the different dependencies in the library can be quite complex. As noted in the linked issue, the blocker for upgrading protobuf support was third-party libraries support of protobuf 4 rather than our own. \r\n\r\nIf this is something you or someone else in the community believes is very important, please feel free to open a PR. Note that it's not just necessary for the CI to be green and protobuf 4 be supported, we must also remain backwards compatible with previous versions.\r\n\r\ncc @ydshieh ",
"BTW, what's blocking when you try to use python 3.10 without protobuf 4.x ?",
"I am not blocked; I should have been more precise.\r\n",
"Fixed in #24599"
] | 1,686 | 1,688 | 1,688 |
NONE
| null |
### Feature request
Looking at https://github.com/huggingface/transformers/issues/21677#issuecomment-1435072007, I notice there are now new versions of tensorflow and tensorboard that may help with protobuf 4.x compatibility. It would be awesome to get this upgraded, thanks!
### Motivation
easier upgrade path to python 3.10 and above
### Your contribution
nope, sorry.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24323/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24323/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24322
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24322/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24322/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24322/events
|
https://github.com/huggingface/transformers/pull/24322
| 1,760,689,621 |
PR_kwDOCUB6oc5TMb7t
| 24,322 |
Respect explicitly set framework parameter in pipeline
|
{
"login": "denis-ismailaj",
"id": 20902736,
"node_id": "MDQ6VXNlcjIwOTAyNzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20902736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/denis-ismailaj",
"html_url": "https://github.com/denis-ismailaj",
"followers_url": "https://api.github.com/users/denis-ismailaj/followers",
"following_url": "https://api.github.com/users/denis-ismailaj/following{/other_user}",
"gists_url": "https://api.github.com/users/denis-ismailaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/denis-ismailaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/denis-ismailaj/subscriptions",
"organizations_url": "https://api.github.com/users/denis-ismailaj/orgs",
"repos_url": "https://api.github.com/users/denis-ismailaj/repos",
"events_url": "https://api.github.com/users/denis-ismailaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/denis-ismailaj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Updated so that `infer_framework_load_model` is not called at all if `model` is already loaded and `framework` is defined.\r\n\r\nHowever, it may still be worth keeping the check inside `infer_framework_load_model` so that if `framework` is defined but the `model` is a `str` and we do need to call `infer_framework_load_model`, at least we don't need to then call `infer_framework` inside of it.",
"Ok, this code breaks some tests. Can you fix them ?",
"Noticed a \"typo\", it is fixed now.",
"@denis-ismailaj it's ready to merge ! "
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24321
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24322/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24322",
"html_url": "https://github.com/huggingface/transformers/pull/24322",
"diff_url": "https://github.com/huggingface/transformers/pull/24322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24322.patch",
"merged_at": 1687257833000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24321
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24321/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24321/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24321/events
|
https://github.com/huggingface/transformers/issues/24321
| 1,760,680,155 |
I_kwDOCUB6oc5o8djb
| 24,321 |
[pipeline] Explicitly set framework is ignored
|
{
"login": "denis-ismailaj",
"id": 20902736,
"node_id": "MDQ6VXNlcjIwOTAyNzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20902736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/denis-ismailaj",
"html_url": "https://github.com/denis-ismailaj",
"followers_url": "https://api.github.com/users/denis-ismailaj/followers",
"following_url": "https://api.github.com/users/denis-ismailaj/following{/other_user}",
"gists_url": "https://api.github.com/users/denis-ismailaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/denis-ismailaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/denis-ismailaj/subscriptions",
"organizations_url": "https://api.github.com/users/denis-ismailaj/orgs",
"repos_url": "https://api.github.com/users/denis-ismailaj/repos",
"events_url": "https://api.github.com/users/denis-ismailaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/denis-ismailaj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.2
Omitting the rest because they aren't really relevant. Can submit later if needed.
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
from transformers import WhisperProcessor, WhisperForConditionalGeneration
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
# Using this just to make sure an error is thrown if the pipeline tries to infer the framework from the module instead of using the specified framework.
class FakeWhisper:
def __getattr__(self, item):
return model.__getattr__(item)
pipe = pipeline(
"automatic-speech-recognition",
model=FakeWhisper(),
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
device=model.device,
framework="pt",
)
```
The above code raises this error:
```
TypeError: Could not infer framework from class <class '__main__.FakeWhisper'>.
```
### Expected behavior
When specifying the framework explicitly, there is no need to infer it from the module of the model class, as mentioned here:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L784-L796
But then, inside `infer_framework_load_model`, `infer_framework` is called regardless of the value of the `framework` parameter:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L280
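For illustration, the kind of guard that would respect an explicit framework; this is a simplified assumption, not necessarily how the fix inside `infer_framework_load_model` is written:
```python
# Hedged sketch: only inspect the model's class when no framework was given.
def resolve_framework(model, framework=None):
    if framework is not None:
        return framework
    module = type(model).__module__
    return "tf" if module.startswith(("tensorflow", "keras")) else "pt"
```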
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24321/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24320
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24320/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24320/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24320/events
|
https://github.com/huggingface/transformers/pull/24320
| 1,760,661,965 |
PR_kwDOCUB6oc5TMV2f
| 24,320 |
Add test for proper TF input signatures
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
This is a relatively simple PR intended to check that all TF models have proper input signatures that match their inputs. I was calling the function `self._prune_signature` in a few places to verify this, which felt a bit hacky. This test should let us get rid of `self._prune_signature` by enforcing valid signatures for all models.
Edit: I'm also slipping in a typo fix (fine-tine -> fine-tune) I saw while I was running the tests
Double-edit: I'm also slipping in a fix to an incorrect indentation in the `test_dataset_conversion` test that was causing some unnecessary repetition
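A hedged sketch of what such a check could look like; the `input_signature` property and `call` method names below are assumptions about the TF model API, and the actual test added in this PR may differ:
```python
import inspect

def check_input_signature(model):
    # Every key in the serving input signature should be a real argument of
    # the model's call(), so the signature never references unknown inputs.
    call_args = set(inspect.signature(model.call).parameters)
    signature_keys = set(model.input_signature.keys())
    extra = signature_keys - call_args
    assert not extra, f"input_signature has keys not accepted by call(): {extra}"
```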
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24320/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24320",
"html_url": "https://github.com/huggingface/transformers/pull/24320",
"diff_url": "https://github.com/huggingface/transformers/pull/24320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24320.patch",
"merged_at": 1686931393000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24319
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24319/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24319/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24319/events
|
https://github.com/huggingface/transformers/pull/24319
| 1,760,611,507 |
PR_kwDOCUB6oc5TMKrp
| 24,319 |
Fix ner average grouping with no groups
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"For bigger fixes I would add a test. This is small enough I think it's ok to skip. Let me know.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/24314
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24319/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24319",
"html_url": "https://github.com/huggingface/transformers/pull/24319",
"diff_url": "https://github.com/huggingface/transformers/pull/24319.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24319.patch",
"merged_at": 1686926599000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24318
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24318/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24318/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24318/events
|
https://github.com/huggingface/transformers/issues/24318
| 1,760,348,442 |
I_kwDOCUB6oc5o7Mka
| 24,318 |
Recursion error when creating AutoTokenizer
|
{
"login": "markovalexander",
"id": 22663468,
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markovalexander",
"html_url": "https://github.com/markovalexander",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Interesting, I can't even load the autotokenizer from remote, i.e., just loading `from_pretrained` using the remote identifier doesn't work and leads to a recursion error.",
"Hey! There seems to be something a bit strange with the `tokenizer_config.json`, since the `unk_token`, the `bos_token` as well as the `eos_token` are `\"\"`, which means that they are empty. This is the root cause of the issue. \r\nThe following works: \r\n```python \r\n>>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =\"<s>\")\r\n```\r\nWhile this does not work:\r\n```python \r\n>>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =\" <s>\")\r\n```\r\n`\"\"` is not part of the vocab, thus it cannot be used as an unknown token. You should update the tokenizer to have a `tok._unk_token = None`",
"There's something strange going on. I am facing the same problem and am unable to load a `LlamaTokenizer` on 4.30.2 that I have used previously (with 4.28.x). Here's my `tokenizer_config.json` in case that's relevant:\r\n\r\n```json\r\n{\r\n \"bos_token\": \"<s>\",\r\n \"clean_up_tokenization_spaces\": false,\r\n \"eos_token\": \"</s>\",\r\n \"model_max_length\": 1000000000000000019884624838656,\r\n \"tokenizer_class\": \"LlamaTokenizer\",\r\n \"unk_token\": \"<unk>\"\r\n}\r\n```",
"You are giving way too little informations, no traceback and this is not related to the mentioned issue. If you still have a problem, feel free to open a new issue, add a full reproducer and make sure you have a correctly converted tokenizer",
"> Hey! There seems to be something a bit strange with the `tokenizer_config.json`, since the `unk_token`, the `bos_token` as well as the `eos_token` are `\"\"`, which means that they are empty. This is the root cause of the issue. The following works:\r\n> \r\n> ```python\r\n> >>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =\"<s>\")\r\n> ```\r\n> \r\n> While this does not work:\r\n> \r\n> ```python\r\n> >>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =\" <s>\")\r\n> ```\r\n> \r\n> `\"\"` is not part of the vocab, thus it cannot be used as an unknown token. You should update the tokenizer to have a `tok._unk_token = None`\r\n\r\nI think it is not a solution. Why does simple save/load not work then? Maybe at least add that information to docs?",
"If you load using `from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(\"Yhyu13/chimera-inst-chat-13b-hf\", legacy=False, use_fast = False)`, you can load and save. \r\n\r\nThe reason why it does not work is because the tokenizer was saved using the `LlamaTokenizer` class. When doing the automatic conversion to a fast tokenizer, (`AutoTokenizer` automatically converts the slow to a fast tokenizer using the [`LlamaConverter` ](https://github.com/ArthurZucker/transformers/blob/1f2434777ecfc436aed40c282b074034f7232d6f/src/transformers/convert_slow_tokenizer.py#L1123). \r\n\r\nThe issue lies with `self.update_post_processor()`, which does not check if the `bos` and `eos` tokens are defined or if `add_bos_token` and `add_eos_token` are set to ` True`. \r\n\r\nHowever the configuration files are still wrong, the `eos` and `bos` and `unk` tokens from the slow tokenizer are going to be different:\r\n\r\n```python \r\n >>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(\"Yhyu13/chimera-inst-chat-13b-hf\", legacy=True, use_fast = False)\r\n>>> tok\r\nLlamaTokenizer(name_or_path='Yhyu13/chimera-inst-chat-13b-hf', vocab_size=32000, model_max_length=2048, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False)\r\n```\r\nthe values the `tokenizer_config` were not used because the `[special_tokens_map](https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf/blob/main/special_tokens_map.json)` was saved. \r\n\r\nTLDR; the tokenizer was not properly saved, priority is given to the `tokenizer_config.json` when loading the tokenizer, which is wrong in this case. "
] | 1,686 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### System Info
```
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1033-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1) Save `"Yhyu13/chimera-inst-chat-13b-hf"` tokenizer as `save_pretrained` to some folder
2) Try to create auto tokenizer from that folder
```bash
python -c "from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained('/local/path/to/tokenizer'); print(tok)"
```
A recursion error is raised when running `.from_pretrained` (last lines of the stack trace):
```
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1035, in unk_token
return str(self._unk_token)
RecursionError: maximum recursion depth exceeded while getting the str of an object
```
### Expected behavior
After running `pip install transformers==4.29` everything works fine:
```bash
❯ python -c "from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained('/local/path/to/tokenizer'); print(tok)"
LlamaTokenizerFast(name_or_path='/local/path/to/tokenizer', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False)
```
Working transformers-cli env:
```
- `transformers` version: 4.29.0
- Platform: Linux-5.15.0-1033-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24318/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24318/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24317
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24317/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24317/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24317/events
|
https://github.com/huggingface/transformers/pull/24317
| 1,760,332,952 |
PR_kwDOCUB6oc5TLMdO
| 24,317 |
Fix ImageGPT doc example
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
There was a bug in the example, where the clusters in the image processor were stored as lists, but the example assumed they were numpy arrays.
At the moment, clusters are stored as a list of lists, but [converted to a numpy array during processing](https://github.com/huggingface/transformers/blob/0b7b4429c78de68acaf72224eb6dae43616d820c/src/transformers/models/imagegpt/image_processing_imagegpt.py#L230). This PR converts to a numpy array when setting the class attribute and when new `clusters` are passed into `preprocess`.
This:
* Maintains backwards compatibility with old configurations
* Saved configs aren't changed (`clusters` is still converted to a list of lists when serializing); see this [dummy image processor](https://huggingface.co/amyeroberts/dummy_imagegpt_image_processor_np_clusters) created from this branch.
* Is more efficient - we're not converting the same list of lists every batch.
A potential breaking change is if users were accessing the `clusters` attribute and using it as a list of lists. As this issue was caught because users were using the clusters as a numpy array (as in the example), I expect the impact to be low.
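For reviewers, a rough sketch of the idea with simplified, hypothetical names (not the actual `ImageGPTImageProcessor` code):
```python
import numpy as np

class SketchImageProcessor:
    """Stand-in for the image processor, only showing how `clusters` is handled."""

    def __init__(self, clusters=None):
        # Convert once when the attribute is set, so callers (and the doc example)
        # can treat `self.clusters` as a numpy array.
        self.clusters = np.asarray(clusters) if clusters is not None else None

    def to_dict(self):
        # Serialize back to a list of lists so saved configs keep the old format.
        return {"clusters": self.clusters.tolist() if self.clusters is not None else None}

    def preprocess(self, image, clusters=None):
        # Also accept new clusters passed at call time.
        clusters = np.asarray(clusters) if clusters is not None else self.clusters
        # ... the real code would color-quantize `image` against `clusters` here ...
        return clusters
```
Keeping serialization as lists of lists means existing configs on the Hub round-trip unchanged.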
Fixes #24189
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24317/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24317",
"html_url": "https://github.com/huggingface/transformers/pull/24317",
"diff_url": "https://github.com/huggingface/transformers/pull/24317.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24317.patch",
"merged_at": 1686931282000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24316
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24316/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24316/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24316/events
|
https://github.com/huggingface/transformers/issues/24316
| 1,760,298,276 |
I_kwDOCUB6oc5o7AUk
| 24,316 |
[Tokenizer] `skip_special_tokens` not working as expected
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Okay! the fix PR should be merged soon. Note that the behaviour is gonna be deprecated! "
] | 1,686 | 1,695 | 1,695 |
COLLABORATOR
| null |
# Reporting a failing API design
This is mostly to help me record some of the biggest issues with the current API for adding tokens.
This is linked to #23909. Here is a simple snippet:
```python
>>> from transformers import AutoTokenizer, AddedToken
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base", use_fast=False)
>>> new_toks = [
...     AddedToken("[ABC]", normalized=False),
...     AddedToken("[DEF]", normalized=False),
...     AddedToken("GHI IHG", normalized=False),
... ]
>>> tokenizer.add_tokens(new_toks)
>>> tokenizer.add_tokens([AddedToken("[SAMPLE]", normalized=True)], special_tokens=True)
>>> print(tokenizer.added_tokens_encoder)
>>> print(tokenizer.all_special_ids)
```
This will show that the newly added token (`[SAMPLE]`) is not part of `all_special_ids`. However, `all_special_ids` is used when decoding to check whether a token should be skipped or not:
```python
for token in filtered_tokens:
if skip_special_tokens and token in self.all_special_ids:
continue
if token in self.added_tokens_encoder:
if current_sub_text:
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
current_sub_text = []
sub_texts.append(token)
else:
current_sub_text.append(token)
```
Thus
```python
>>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False)
>>> tokenizer.decode(encoded, skip_special_tokens = True)
"[ABC] [DEF][SAMPLE]"
```
However, the token is in `added_tokens_encoder` but not in `additional_special_tokens`.
Now imagine you want `spaces_between_special_tokens`? This will add spaces between all added tokens, and thus checks whether a token is part of `tokenizer.added_tokens_encoder`.
```python
>>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False)
>>> tokenizer.decode(encoded, spaces_between_special_tokens = True)
"[ABC] [DEF] [SAMPLE]"
>>> tokenizer.decode(encoded, spaces_between_special_tokens = False)
"[ABC][DEF][SAMPLE]"
```
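For contrast, a hedged sketch (not part of the original snippet) of the path that does work, where the token is registered through `additional_special_tokens` and therefore ends up in `all_special_ids`:
```python
from transformers import AutoTokenizer, AddedToken

tokenizer = AutoTokenizer.from_pretrained("t5-base", use_fast=False)
# Registering through `additional_special_tokens` also updates `all_special_ids`,
# so `skip_special_tokens=True` should now drop the token on decode.
tokenizer.add_special_tokens({"additional_special_tokens": [AddedToken("[SAMPLE]", normalized=True)]})
encoded = tokenizer.encode("hello [SAMPLE]", add_special_tokens=False)
print(tokenizer.decode(encoded, skip_special_tokens=True))  # expected: "hello"
```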
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24316/timeline
|
completed
| null | null |