| Column | Type |
|---|---|
| url | string (length 62–66) |
| repository_url | string (1 class) |
| labels_url | string (length 76–80) |
| comments_url | string (length 71–75) |
| events_url | string (length 69–73) |
| html_url | string (length 50–56) |
| id | int64 (377M–2.15B) |
| node_id | string (length 18–32) |
| number | int64 (1–29.2k) |
| title | string (length 1–487) |
| user | dict |
| labels | list |
| state | string (2 classes) |
| locked | bool (2 classes) |
| assignee | dict |
| assignees | list |
| comments | list |
| created_at | int64 (1.54k–1.71k) |
| updated_at | int64 (1.54k–1.71k) |
| closed_at | int64 (1.54k–1.71k), nullable (⌀) |
| author_association | string (4 classes) |
| active_lock_reason | string (2 classes) |
| body | string (length 0–234k), nullable (⌀) |
| reactions | dict |
| timeline_url | string (length 71–75) |
| state_reason | string (3 classes) |
| draft | bool (2 classes) |
| pull_request | dict |
https://api.github.com/repos/huggingface/transformers/issues/25523
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25523/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25523/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25523/events
|
https://github.com/huggingface/transformers/pull/25523
| 1,851,471,159 |
PR_kwDOCUB6oc5X-LhY
| 25,523 |
Use dynamic past key-values shape in TF-Whisper
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
TF-Whisper uses the static past key-values shape for some conditionals, which causes issues when compiling with past key-values whose shape is not a known constant at compile time.
This PR swaps it to the dynamic runtime shape for correct compilation.
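As a minimal illustration of the distinction (a sketch, not the actual TF-Whisper code), the static shape of a traced dimension can be `None` at compile time, while `tf.shape` returns the runtime shape:
```python
import tensorflow as tf

# Hypothetical traced function: the past sequence length is unknown at compile time.
@tf.function(input_signature=[tf.TensorSpec(shape=(1, None, 64), dtype=tf.float32)])
def past_length(past_key_values):
    static_len = past_key_values.shape[1]       # None at trace time; `static_len > 0` would raise TypeError
    dynamic_len = tf.shape(past_key_values)[1]  # runtime int32 tensor; safe to use in graph-mode conditionals
    return dynamic_len
```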
Fixes #25522
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25523/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25523",
"html_url": "https://github.com/huggingface/transformers/pull/25523",
"diff_url": "https://github.com/huggingface/transformers/pull/25523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25523.patch",
"merged_at": 1692118679000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25522
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25522/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25522/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25522/events
|
https://github.com/huggingface/transformers/issues/25522
| 1,851,370,246 |
I_kwDOCUB6oc5uWasG
| 25,522 |
TF Whisper export: Can't use caching?
|
{
"login": "DevinTDHa",
"id": 33089471,
"node_id": "MDQ6VXNlcjMzMDg5NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/33089471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevinTDHa",
"html_url": "https://github.com/DevinTDHa",
"followers_url": "https://api.github.com/users/DevinTDHa/followers",
"following_url": "https://api.github.com/users/DevinTDHa/following{/other_user}",
"gists_url": "https://api.github.com/users/DevinTDHa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevinTDHa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevinTDHa/subscriptions",
"organizations_url": "https://api.github.com/users/DevinTDHa/orgs",
"repos_url": "https://api.github.com/users/DevinTDHa/repos",
"events_url": "https://api.github.com/users/DevinTDHa/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevinTDHa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Good spot - this is actually a bug in the TF-Whisper code! I've opened a PR to fix it at #25523.",
"@DevinTDHa The fix has been merged! You can try it by installing from main with `pip install --upgrade https://github.com/huggingface/transformers.git`. It fixes the issue for me when I run your demo notebook, but please let me know if you encounter any other issues with TF-Whisper.\r\n\r\nThanks again for the bug report and the clean reproduction notebook!",
"@Rocketknight1 Thanks for the very quick fix, it works!"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanch @gante @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Colab Notebook:
https://colab.research.google.com/drive/1AZRoVs5aK9kM2nKNZ6D98ubJOCGOnQep?usp=sharing
### Expected behavior
Hi!
I am working on deploying a Whisper model and want to export it as a TF SavedModel, but I am running into issues.
The default export behavior for the SavedModel results in a serving that does not separate the encoder and the decoder. I wanted to do this manually, so I wrote a class (see the Colab notebook).
The custom class separates the decoder and encoder into different servings. Additionally, the tensors for caching (`past_key_values`) are split and ordered by decoder and encoder so they can be traced.
However, when trying to export the decoder/encoder states for caching, an error occurs.
```python
File "python3.8/site-packages/transformers/models/whisper/modeling_tf_whisper.py", line 99, in _make_causal_mask *
if past_key_values_length > 0:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
```
I am not sure, but I checked the code and I suspect that this error is due to the cache tensors having different sizes for the encoder and decoder. Since we pass the cache tensors as a tuple to the model, there are multiple sizes for these cache tensors (the generated sequence length for the decoder, `1500` for the encoder). I am not sure how else I could define the serving signatures to cover this case.
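For reference, here is a hypothetical sketch of a cache spec with dynamic sequence dimensions (the head count and head dimension are placeholder values, not taken from the notebook):
```python
import tensorflow as tf

# Whisper caches are shaped (batch, num_heads, seq_len, head_dim); the sizes below are placeholders.
num_heads, head_dim = 20, 64

# Using None for the batch and sequence dimensions lets one spec cover both the growing decoder
# cache and the fixed-length (1500-position) encoder cache when tracing the serving function.
cache_spec = tf.TensorSpec(shape=(None, num_heads, None, head_dim), dtype=tf.float32)
```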
Thanks in advance!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25522/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25521
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25521/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25521/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25521/events
|
https://github.com/huggingface/transformers/pull/25521
| 1,851,337,762 |
PR_kwDOCUB6oc5X9u3q
| 25,521 |
Document the test fetcher
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
This PR continues the work of properly documenting all of our scripts, this time covering the test fetcher.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25521/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25521",
"html_url": "https://github.com/huggingface/transformers/pull/25521",
"diff_url": "https://github.com/huggingface/transformers/pull/25521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25521.patch",
"merged_at": 1692188312000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25520
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25520/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25520/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25520/events
|
https://github.com/huggingface/transformers/pull/25520
| 1,851,205,799 |
PR_kwDOCUB6oc5X9S4e
| 25,520 |
[DINOv2] Add backbone class
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts I've addressed your comments, feel free to approve/merge :)"
] | 1,692 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR:
- adds `Dinov2Backbone` as it was requested at https://github.com/NielsRogge/Transformers-Tutorials/issues/343 and https://github.com/facebookresearch/dinov2/issues/153#issuecomment-1678465000.
- adds DINOv2 to the documentation tests.
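For illustration, a hedged usage sketch of the new backbone class (the checkpoint name is an assumption, not taken from this PR):
```python
import torch
from transformers import Dinov2Backbone

# Load the backbone and run a dummy image through it; features come back on `outputs.feature_maps`.
backbone = Dinov2Backbone.from_pretrained("facebook/dinov2-base")
pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
print([feature_map.shape for feature_map in outputs.feature_maps])
```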
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25520/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25520",
"html_url": "https://github.com/huggingface/transformers/pull/25520",
"diff_url": "https://github.com/huggingface/transformers/pull/25520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25520.patch",
"merged_at": 1693303528000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25519
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25519/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25519/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25519/events
|
https://github.com/huggingface/transformers/pull/25519
| 1,851,189,995 |
PR_kwDOCUB6oc5X9PeS
| 25,519 |
[TYPO] fix typo/format in quicktour.md
|
{
"login": "lishukan",
"id": 23066239,
"node_id": "MDQ6VXNlcjIzMDY2MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23066239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lishukan",
"html_url": "https://github.com/lishukan",
"followers_url": "https://api.github.com/users/lishukan/followers",
"following_url": "https://api.github.com/users/lishukan/following{/other_user}",
"gists_url": "https://api.github.com/users/lishukan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lishukan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lishukan/subscriptions",
"organizations_url": "https://api.github.com/users/lishukan/orgs",
"repos_url": "https://api.github.com/users/lishukan/repos",
"events_url": "https://api.github.com/users/lishukan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lishukan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25519). All of your documentation changes will be reflected on that endpoint.",
"@sgugger Hi, Dear sgugger , I met some problem on this PR.\r\nIt failed on the test [ check_circleci_user ]. what should i do can solve it ?\r\nI've spend lots of time to search document/question about this ,and the \r\nquestion link: [ https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization- ](https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization-) in test log was invalid. \r\nOnly i can do is just looking help from you,sorry about waste your precious time\r\n\r\n \r\n<img width=\"920\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/23066239/efec63e9-9c39-49c8-9b76-e1fbaa729a22\">\r\n\r\n",
"> I've pushed your branch to the main fork of Transformers, which re-triggers the tests. Normally there shouldn't be any problem since you are only touching doc files, but let's double-check!\r\n\r\nTanks a lot"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes some typo/format errors in quicktour.md; they are due to a lack of blank lines before code markup.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25519/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25519",
"html_url": "https://github.com/huggingface/transformers/pull/25519",
"diff_url": "https://github.com/huggingface/transformers/pull/25519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25519.patch",
"merged_at": 1692165803000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25518
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25518/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25518/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25518/events
|
https://github.com/huggingface/transformers/issues/25518
| 1,851,045,139 |
I_kwDOCUB6oc5uVLUT
| 25,518 |
Segmentation fault (core dumped) while loading gpt2-large
|
{
"login": "saurav935",
"id": 75733364,
"node_id": "MDQ6VXNlcjc1NzMzMzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/75733364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurav935",
"html_url": "https://github.com/saurav935",
"followers_url": "https://api.github.com/users/saurav935/followers",
"following_url": "https://api.github.com/users/saurav935/following{/other_user}",
"gists_url": "https://api.github.com/users/saurav935/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurav935/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurav935/subscriptions",
"organizations_url": "https://api.github.com/users/saurav935/orgs",
"repos_url": "https://api.github.com/users/saurav935/repos",
"events_url": "https://api.github.com/users/saurav935/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurav935/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Could you try isolating the bug to make sure this is actually an issue in transformers and not related to the `flower` library? (A minimal reproducer without other library 😉 )",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
I am currently working on a federated learning project and am trying to use the gpt2-large model, but while loading it I get a `Segmentation fault (core dumped)` error. How can I fix it?
FYI: I am using the Flower framework for federated learning, and my code is based on this example: https://github.com/adap/flower/blob/main/examples/quickstart-huggingface/client.py
@ArthurZucker and @younesbelkada
The error:
```
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2-large and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Found cached dataset imdb (/root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0)
100%|████████████████████████████████████████| 3/3 [00:00<00:00, 618.57it/s]
Loading cached shuffled indices for dataset at /root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0/cache-9c48ce5d173413c7.arrow
Loading cached shuffled indices for dataset at /root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0/cache-c1eaa46e94dfbfd3.arrow
Loading cached shuffled indices for dataset at /root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0/cache-a1b3692aa5b43ab2.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0/cache-7bcc7d0e79f05aec.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/imdb/plain_text/1.0.0/d613c88cf8fa3bab83b4ded3713f1f74830d1100e171db75bbddb80b3345c9c0/cache-8ae345f0fd332f9f.arrow
Parsed address is: ('localhost', 5040, None)
INFO flwr 2023-08-15 07:49:38,937 | grpc.py:114 | Opened insecure gRPC connection (no certificates were passed)
Segmentation fault (core dumped)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is my code for which I am getting the error:
```python
# Importing the necessary libraries
from collections import OrderedDict
import warnings
import flwr as fl
import torch
import numpy as np
import random
from torch.utils.data import DataLoader
from datasets import load_dataset
from evaluate import load as load_metric
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
import yaml
import os

warnings.filterwarnings("ignore", category=UserWarning)

# Using CPU
DEVICE = torch.device("cpu")
# CHECKPOINT = "bert-large-uncased"  # transformer model checkpoint
CHECKPOINT = "gpt2-large"
# For GPU usage
# DEVICE = "cuda:2"


def load_data():
    """Load IMDB data (training and eval)"""
    raw_datasets = load_dataset("imdb")
    raw_datasets = raw_datasets.shuffle(seed=42)

    # remove unnecessary data split
    del raw_datasets["unsupervised"]

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

    def tokenize_function(examples):
        return tokenizer(examples["text"], truncation=True)

    # random 100 samples
    population = random.sample(range(len(raw_datasets["train"])), 100)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    tokenized_datasets["train"] = tokenized_datasets["train"].select(population)
    tokenized_datasets["test"] = tokenized_datasets["test"].select(population)

    tokenized_datasets = tokenized_datasets.remove_columns("text")
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
    trainloader = DataLoader(
        tokenized_datasets["train"],
        shuffle=True,
        batch_size=32,
        collate_fn=data_collator,
    )
    testloader = DataLoader(
        tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
    )

    return trainloader, testloader


def train(net, trainloader, epochs):
    optimizer = AdamW(net.parameters(), lr=5e-5)
    net.train()
    for _ in range(epochs):
        for batch in trainloader:
            batch = {k: v.to(DEVICE) for k, v in batch.items()}
            outputs = net(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()


def test(net, testloader):
    metric = load_metric("accuracy")
    loss = 0
    net.eval()
    for batch in testloader:
        batch = {k: v.to(DEVICE) for k, v in batch.items()}
        with torch.no_grad():
            outputs = net(**batch)
        logits = outputs.logits
        loss += outputs.loss.item()
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    loss /= len(testloader.dataset)
    accuracy = metric.compute()["accuracy"]
    return loss, accuracy


def main():
    net = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2
    ).to(DEVICE)

    trainloader, testloader = load_data()

    # Flower client
    class IMDBClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            print("Training Started...")
            # train(net, trainloader, epochs=1)
            print("Training Finished.")
            return self.get_parameters(config={}), len(trainloader), {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net, testloader)
            return float(loss), len(testloader), {"accuracy": float(accuracy)}

    # Yaml file for importing the necessary configuration data
    CONFIG_PATH = "./"

    # Function to load yaml configuration file
    def load_config(config_name):
        with open(os.path.join(CONFIG_PATH, config_name)) as file:
            config = yaml.safe_load(file)
        return config

    config = load_config("client_config.yaml")

    fl.client.start_numpy_client(server_addresses=config["server_addresses"], client=IMDBClient())


if __name__ == "__main__":
    main()
```
### Expected behavior
To work successfully :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25518/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25517
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25517/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25517/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25517/events
|
https://github.com/huggingface/transformers/pull/25517
| 1,851,004,289 |
PR_kwDOCUB6oc5X8n4w
| 25,517 |
add __repr__ to the BitsAndBytesConfig class
|
{
"login": "ranchlai",
"id": 5043767,
"node_id": "MDQ6VXNlcjUwNDM3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5043767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranchlai",
"html_url": "https://github.com/ranchlai",
"followers_url": "https://api.github.com/users/ranchlai/followers",
"following_url": "https://api.github.com/users/ranchlai/following{/other_user}",
"gists_url": "https://api.github.com/users/ranchlai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranchlai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranchlai/subscriptions",
"organizations_url": "https://api.github.com/users/ranchlai/orgs",
"repos_url": "https://api.github.com/users/ranchlai/repos",
"events_url": "https://api.github.com/users/ranchlai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranchlai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25517). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds `__repr__` to the `BitsAndBytesConfig` class for better visualization and debugging.
before
```python
>>> print(bnb_config)
>>> BitsAndBytesConfig(quant_method=<QuantizationMethod.BITS_AND_BYTES: 'bitsandbytes'>)
```
after
```python
>>> print(bnb_config)
>>> BitsAndBytesConfig {
"bnb_4bit_compute_dtype": "bfloat16",
"bnb_4bit_quant_type": "nf4",
"bnb_4bit_use_double_quant": true,
"llm_int8_enable_fp32_cpu_offload": false,
"llm_int8_has_fp16_weight": false,
"llm_int8_skip_modules": null,
"llm_int8_threshold": 6.0,
"load_in_4bit": true,
"load_in_8bit": false,
"quant_method": "bitsandbytes"
}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25517/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25517",
"html_url": "https://github.com/huggingface/transformers/pull/25517",
"diff_url": "https://github.com/huggingface/transformers/pull/25517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25517.patch",
"merged_at": 1692090689000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25516
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25516/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25516/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25516/events
|
https://github.com/huggingface/transformers/issues/25516
| 1,850,950,117 |
I_kwDOCUB6oc5uU0Hl
| 25,516 |
Tokenizer behavior is not aligned when initialized from vocab_file and from_pretrained.
|
{
"login": "KawaiiNotHawaii",
"id": 36587375,
"node_id": "MDQ6VXNlcjM2NTg3Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36587375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KawaiiNotHawaii",
"html_url": "https://github.com/KawaiiNotHawaii",
"followers_url": "https://api.github.com/users/KawaiiNotHawaii/followers",
"following_url": "https://api.github.com/users/KawaiiNotHawaii/following{/other_user}",
"gists_url": "https://api.github.com/users/KawaiiNotHawaii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KawaiiNotHawaii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KawaiiNotHawaii/subscriptions",
"organizations_url": "https://api.github.com/users/KawaiiNotHawaii/orgs",
"repos_url": "https://api.github.com/users/KawaiiNotHawaii/repos",
"events_url": "https://api.github.com/users/KawaiiNotHawaii/events{/privacy}",
"received_events_url": "https://api.github.com/users/KawaiiNotHawaii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey! Thanks for reporting, this is a duplicate of #23930, and will be fixed by #23909 this week! ",
"Pr is just waiting for a final review! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,697 | 1,697 |
NONE
| null |
### System Info
transformers 4.31.0
python 3.10
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load the llama tokenizer using `transformers.AutoTokenizer.from_pretrained()` and specify the pad_token='<pad>'
2. The pad_token_id returns as 32000 (same as '<unk>'), and the total number of tokens is 32001
3. Load the llama tokenizer by specifying the vocab_file and pad_token='<pad>'
4. The pad_token_id returns as 0 (same as '<unk>'), and the total number of tokens is 32000
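A minimal sketch of the two loading paths described above (the checkpoint id and vocab path are assumptions; the printed values are the ones reported in the steps):
```python
from transformers import AutoTokenizer, LlamaTokenizer

# Path 1: load from the Hub and pass pad_token at init time.
tok_hub = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="<pad>")
print(len(tok_hub), tok_hub.pad_token_id)      # reported: 32001, 32000

# Path 2: build the tokenizer directly from a local vocab_file.
tok_local = LlamaTokenizer(vocab_file="tokenizer.model", pad_token="<pad>")
print(len(tok_local), tok_local.pad_token_id)  # reported: 32000, 0
```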
### Expected behavior
Two different loading methods should result in the same number of tokens.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25516/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25515
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25515/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25515/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25515/events
|
https://github.com/huggingface/transformers/issues/25515
| 1,850,844,174 |
I_kwDOCUB6oc5uUaQO
| 25,515 |
Cfg support
|
{
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This has already been added, check the main branch.\r\ncc @gante",
"This has already been added, check the main branch.\r\ncc @gante",
"Awesome, may I ask when would we see it in pypi?",
"@lucasjinreal this will be part of the next release, which should happen within a month or so. Meanwhile, feel free to install from `main` to access it :) (`pip install --upgrade git+https://github.com/huggingface/transformers.git`)\r\n\r\nCheck [the PR](https://github.com/huggingface/transformers/pull/24654) for more information about CFG. Example [here](https://github.com/huggingface/transformers/pull/24654/files#diff-d23b812af8462833ad280d968f3e6e2ee7558bacfc2716cdde44a07bead5e065R1267)\r\n\r\n",
"thank u all so much"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### Feature request
Will CFG (classifier-free guidance) support be added for LLMs?
### Motivation
For better alignment with the prompt by using a negative mask.
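For reference, a minimal usage sketch of the CFG support the comments above point to (PR #24654); the model id and prompts are placeholders, and this assumes a recent install from `main`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = tokenizer(["Today, a dragon flew over Paris,"], return_tensors="pt")
negative = tokenizer(["Dragons are boring."], return_tensors="pt")

# guidance_scale > 1 turns on classifier-free guidance; negative_prompt_ids supplies the negative side.
output = model.generate(
    **prompt,
    guidance_scale=1.5,
    negative_prompt_ids=negative["input_ids"],
    max_new_tokens=40,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```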
### Your contribution
None.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25515/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25514
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25514/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25514/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25514/events
|
https://github.com/huggingface/transformers/pull/25514
| 1,850,753,873 |
PR_kwDOCUB6oc5X7zJz
| 25,514 |
Check for case where `auxiliary_head` is `None` in `UperNetPreTrainedModel`
|
{
"login": "mmurray",
"id": 142318,
"node_id": "MDQ6VXNlcjE0MjMxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/142318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmurray",
"html_url": "https://github.com/mmurray",
"followers_url": "https://api.github.com/users/mmurray/followers",
"following_url": "https://api.github.com/users/mmurray/following{/other_user}",
"gists_url": "https://api.github.com/users/mmurray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmurray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmurray/subscriptions",
"organizations_url": "https://api.github.com/users/mmurray/orgs",
"repos_url": "https://api.github.com/users/mmurray/repos",
"events_url": "https://api.github.com/users/mmurray/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmurray/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
`UperNetConfig` has an option called [`use_auxiliary_head`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/upernet/configuration_upernet.py#L45). When `use_auxiliary_head` is `False`, the [`auxiliary_head` is set to `None`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/upernet/modeling_upernet.py#L359). However, [`UperNetPreTrainedModel` assumes that `auxiliary_head` is not `None`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/upernet/modeling_upernet.py#L314), resulting in an error when loading a pretrained upernet model with `use_auxiliary_head=False` (see issue #25513 ).
This pull request updates `UperNetPreTrainedModel` to appropriately handle the case where `auxiliary_head` is `None`.
Fixes #25513
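Illustratively, the guard described above looks roughly like this (a sketch, not the verbatim diff):
```python
def init_weights(self):
    """Sketch of the guarded weight initialization in UperNetPreTrainedModel."""
    self.backbone.init_weights()
    self.decode_head.init_weights()
    if self.auxiliary_head is not None:  # auxiliary_head is None when use_auxiliary_head=False
        self.auxiliary_head.init_weights()
```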
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25514/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25514/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25514",
"html_url": "https://github.com/huggingface/transformers/pull/25514",
"diff_url": "https://github.com/huggingface/transformers/pull/25514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25514.patch",
"merged_at": 1692081862000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25513
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25513/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25513/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25513/events
|
https://github.com/huggingface/transformers/issues/25513
| 1,850,745,315 |
I_kwDOCUB6oc5uUCHj
| 25,513 |
`UperNetPreTrainedModel` throws an `AttributeError` when `use_auxiliary_head=False`
|
{
"login": "mmurray",
"id": 142318,
"node_id": "MDQ6VXNlcjE0MjMxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/142318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmurray",
"html_url": "https://github.com/mmurray",
"followers_url": "https://api.github.com/users/mmurray/followers",
"following_url": "https://api.github.com/users/mmurray/following{/other_user}",
"gists_url": "https://api.github.com/users/mmurray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmurray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmurray/subscriptions",
"organizations_url": "https://api.github.com/users/mmurray/orgs",
"repos_url": "https://api.github.com/users/mmurray/repos",
"events_url": "https://api.github.com/users/mmurray/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmurray/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0 (also tried on 4.32.0.dev0)
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.29
- Python version: 3.8.11
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code snippet:
```python
from transformers import UperNetForSemanticSegmentation
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-base", use_auxiliary_head=False)
```
Resulting error / stack trace:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mike/.pyenv/versions/grip/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2700, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/mike/.pyenv/versions/grip/lib/python3.8/site-packages/transformers/models/upernet/modeling_upernet.py", line 362, in __init__
self.post_init()
File "/home/mike/.pyenv/versions/grip/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1108, in post_init
self.init_weights()
File "/home/mike/.pyenv/versions/grip/lib/python3.8/site-packages/transformers/models/upernet/modeling_upernet.py", line 314, in init_weights
self.auxiliary_head.init_weights()
AttributeError: 'NoneType' object has no attribute 'init_weights'
```
### Expected behavior
I expect that the model should initialize with no error (and with the auxiliary head unused internally).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25513/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25512
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25512/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25512/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25512/events
|
https://github.com/huggingface/transformers/pull/25512
| 1,850,706,422 |
PR_kwDOCUB6oc5X7o2B
| 25,512 |
Bump tornado from 6.3.2 to 6.3.3 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.3.2 to 6.3.3.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0
releases/v2.3.0
releases/v2.2.1
releases/v2.2.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/e4d698433b44f350d4908da9ca2cac475c92dfdc"><code>e4d6984</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3307">#3307</a> from bdarnell/branch6.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/6a9e6fbaf7830b3edae68805211f35f5954292ab"><code>6a9e6fb</code></a> ci: Don't test py312 in branch6.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/5c8a9a4fa792f8b18bd26bc7a8335e3bbe837852"><code>5c8a9a4</code></a> Set version to 6.3.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/7dfe8b597f2d179334d7b528f61e9449ac131273"><code>7dfe8b5</code></a> httpserver_test: Add ExpectLog to fix CI</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/217295b1dd30f556ea374d62007f6821688f00f0"><code>217295b</code></a> http1connection: Make content-length parsing more strict</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/e3aa6c5e2943242d8ab25448c2798365b3cb9945"><code>e3aa6c5</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3267">#3267</a> from bdarnell/branch6.3</li>
<li>See full diff in <a href="https://github.com/tornadoweb/tornado/compare/v6.3.2...v6.3.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25512/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25512",
"html_url": "https://github.com/huggingface/transformers/pull/25512",
"diff_url": "https://github.com/huggingface/transformers/pull/25512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25512.patch",
"merged_at": 1692082340000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25511
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25511/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25511/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25511/events
|
https://github.com/huggingface/transformers/pull/25511
| 1,850,705,618 |
PR_kwDOCUB6oc5X7oqa
| 25,511 |
Bump tornado from 6.3.2 to 6.3.3 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.3.2 to 6.3.3.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0
releases/v2.3.0
releases/v2.2.1
releases/v2.2.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/e4d698433b44f350d4908da9ca2cac475c92dfdc"><code>e4d6984</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3307">#3307</a> from bdarnell/branch6.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/6a9e6fbaf7830b3edae68805211f35f5954292ab"><code>6a9e6fb</code></a> ci: Don't test py312 in branch6.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/5c8a9a4fa792f8b18bd26bc7a8335e3bbe837852"><code>5c8a9a4</code></a> Set version to 6.3.3</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/7dfe8b597f2d179334d7b528f61e9449ac131273"><code>7dfe8b5</code></a> httpserver_test: Add ExpectLog to fix CI</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/217295b1dd30f556ea374d62007f6821688f00f0"><code>217295b</code></a> http1connection: Make content-length parsing more strict</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/e3aa6c5e2943242d8ab25448c2798365b3cb9945"><code>e3aa6c5</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3267">#3267</a> from bdarnell/branch6.3</li>
<li>See full diff in <a href="https://github.com/tornadoweb/tornado/compare/v6.3.2...v6.3.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25511/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25511",
"html_url": "https://github.com/huggingface/transformers/pull/25511",
"diff_url": "https://github.com/huggingface/transformers/pull/25511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25511.patch",
"merged_at": 1692082351000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25510
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25510/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25510/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25510/events
|
https://github.com/huggingface/transformers/pull/25510
| 1,850,507,676 |
PR_kwDOCUB6oc5X68sT
| 25,510 |
[DOCS] MusicGen Docs Update
|
{
"login": "xNul",
"id": 894305,
"node_id": "MDQ6VXNlcjg5NDMwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/894305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xNul",
"html_url": "https://github.com/xNul",
"followers_url": "https://api.github.com/users/xNul/followers",
"following_url": "https://api.github.com/users/xNul/following{/other_user}",
"gists_url": "https://api.github.com/users/xNul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xNul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xNul/subscriptions",
"organizations_url": "https://api.github.com/users/xNul/orgs",
"repos_url": "https://api.github.com/users/xNul/repos",
"events_url": "https://api.github.com/users/xNul/events{/privacy}",
"received_events_url": "https://api.github.com/users/xNul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25510). All of your documentation changes will be reflected on that endpoint.",
"@sanchit-gandhi all done?"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds a note about token and generation limitations to the MusicGen docs so users know how to use the MusicGen model.
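For context, a rough sketch of the kind of usage the note is about; the checkpoint name and token budget below are illustrative assumptions for this example, not something introduced by this PR:
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Illustrative sketch only: checkpoint and token budget are assumptions.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["lo-fi beat with a calm piano melody"],
    padding=True,
    return_tensors="pt",
)

# MusicGen produces roughly 50 audio tokens per second of audio, so
# max_new_tokens effectively caps the clip length (256 tokens is ~5 seconds).
audio_values = model.generate(**inputs, max_new_tokens=256)
```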
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25510/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25510",
"html_url": "https://github.com/huggingface/transformers/pull/25510",
"diff_url": "https://github.com/huggingface/transformers/pull/25510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25510.patch",
"merged_at": 1692685366000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25509
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25509/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25509/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25509/events
|
https://github.com/huggingface/transformers/pull/25509
| 1,850,494,680 |
PR_kwDOCUB6oc5X65zX
| 25,509 |
Add GitForCausalLM model in VQA pipeline
|
{
"login": "jpizarrom",
"id": 111236,
"node_id": "MDQ6VXNlcjExMTIzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/111236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpizarrom",
"html_url": "https://github.com/jpizarrom",
"followers_url": "https://api.github.com/users/jpizarrom/followers",
"following_url": "https://api.github.com/users/jpizarrom/following{/other_user}",
"gists_url": "https://api.github.com/users/jpizarrom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpizarrom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpizarrom/subscriptions",
"organizations_url": "https://api.github.com/users/jpizarrom/orgs",
"repos_url": "https://api.github.com/users/jpizarrom/repos",
"events_url": "https://api.github.com/users/jpizarrom/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpizarrom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Narsil ",
"Hi @Narsil, this PR is ready for review. Could you please take a look? Thanks :) ",
"> Can we have a single flag to switch behavior ?\r\ndo you mean to try use the same flag in preprocess as in _forward/postprocess?\r\n\r\nthe model_type was used in preprocess because there could other models that can generate but don't require the same custom GIT preprocessing e.g. Salesforce/blip2-opt-2.7b\r\n",
"This sounds like an issue with the tokenizer itself, no ?",
"+1 on @Narsil comment. Custom processing like this should be contained within our processing classes (tokenizers, image processors, processors) - the pipelines shouldn't need to know about the model type",
"Thanks for the feedback @amyeroberts and @Narsil\r\n\r\nAs you recommended, I will take a look on how to move the custom processing from the vqa pipeline to the processing classes (tokenizers, image processors, processors), and maybe it could also allow to remove it from the image_to_text pipeline as well.\r\n\r\nhttps://github.com/huggingface/transformers/blob/3b39b906183ed08d9961908eb73104aeea345d11/src/transformers/pipelines/image_to_text.py#L125-L130",
"FWIW, the `image-to-text` pipeline also does preprocessing based on the model type, as each model has its own specifics: https://github.com/huggingface/transformers/blob/0f08cd205a440d23e6bf924cddd73ff48e09fe35/src/transformers/pipelines/image_to_text.py#L123-L142",
"> the pipelines shouldn't need to know about the model type\r\n\r\nI fully agree with that statement. I just want to note that exceptions already exist within pipelines. But they are a maintenance burden, and we really need a solid argument as to why it cannot be done in lower level tools to justify.\r\nAlso when behavior is directly in the tokenizer/image_processor etc.. it makes like much easier to users who don't have to guess those snippets of code anymore.",
"Hi, I am very new, this is only my second contribution, but I have been trying to understand the code base, and possible options I have found are:\r\n- for the current PR, just uses the model type, like is being done in other pipelines\r\n- add custom parameters to the pipeline to populate preprocess_params, then the pipe could be called with `vqa_pipeline(image=image, question=question, add_special_tokens=False, add_cls_token=True)`, this could avoid the needs to mention a particular model in the pipeline, but the parameters shall be given on runtime\r\n- for the specific case of this PR _microsoft/git-base-textvqa_ currently uses a `BertTokenizer`, maybe a custom tokenizer could be implemented for GitForCausalLM for VisualQuestionAnswering, i believe this could require to publish a new model with the new tokenizer, or it could be done in a different way?\r\n- processors like [GitProcessor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/processing_git.py#L23C7-L23C19) looks very promising, as they are already wrapping the _tokenizer_ and the _image processor_ , but it should be extended for the vqa case, and maybe a general strategy shall be found to add them to the pipelines, possible similar to the tokenizer/image processor/feature extraction instantiation in the [pipeline feactory](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L518C5-L518C13), maybe for this PR a custom processor could be instantiated directly in vqa-pipeline\r\n\r\nI would be very grateful to know in which direction you recommend me to go",
"> Hi, I am very new, this is only my second contribution, but I have been trying to understand the code base, and possible options I have found are:\r\n> \r\n> * for the current PR, just uses the model type, like is being done in other pipelines\r\n> * add custom parameters to the pipeline to populate preprocess_params, then the pipe could be called with `vqa_pipeline(image=image, question=question, add_special_tokens=False, add_cls_token=True)`, this could avoid the needs to mention a particular model in the pipeline, but the parameters shall be given on runtime\r\n> * for the specific case of this PR _microsoft/git-base-textvqa_ currently uses a `BertTokenizer`, maybe a custom tokenizer could be implemented for GitForCausalLM for VisualQuestionAnswering, i believe this could require to publish a new model with the new tokenizer, or it could be done in a different way?\r\n> * processors like [GitProcessor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/processing_git.py#L23C7-L23C19) looks very promising, as they are already wrapping the _tokenizer_ and the _image processor_ , but it should be extended for the vqa case, and maybe a general strategy shall be found to add them to the pipelines, possible similar to the tokenizer/image processor/feature extraction instantiation in the [pipeline feactory](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L518C5-L518C13), maybe for this PR a custom processor could be instantiated directly in vqa-pipeline\r\n> \r\n> I would be very grateful to know in which direction you recommend me to go\r\n\r\nHi @amyeroberts @Narsil\r\nWhat do you think should be the approach I should follow for this PR. Thanks",
"Hi @jpizarrom, what I would propose is the following: \r\n\r\n* We open a new, separate PR which adds `GitTokenizer`. This tokenizer will prepare the inputs as necessary for the model. We don't need to add a new model. We just need to add `tokenization_git.py` and update `tokenization_auto.py` and update the official checkpoints to use the git tokenizer. \r\n* We add a new `use_legacy` argument to `GitProcessor` and `GitTokenizer`\r\n* If `use_legacy` is `True` then the tokenizer does the previous behaviour i.e. matching `BertTokenizer`. We also issue a warning that this will change in a future release (2 releases from the one this commit will be part of).\r\n* If `use_legacy` is `False` then the tokenizer does the preparation logic for the model. We might need to add a conditional `unsqueeze` option in the model's forward pass. \r\n* The default value for `use_legacy` is `True`. We change to `False` in a later version. \r\n* We still have a model-specific check in the pipeline. If model type is `git` we'll call: `self.tokenizer(inputs[\"question\"], return_tensors=self.framework, padding=padding, truncation=truncation, use_legacy=False)`. This can be removed once `use_legacy` defaults to `False` \r\n* We'll need to be careful with any `add_special_tokens` logic\r\n\r\nWDYT @ArthurZucker @Narsil - is this is reasonable plan? ",
"Only having to add a tokenizer is very reasonable and optimal IMO. So yes 👍🏻 for this!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,700 | 1,697 |
CONTRIBUTOR
| null |
# What does this PR do?
Add GitForCausalLM model in VisualQuestionAnsweringPipeline.
Fixes part of #21110 and is based on #23348 and #21227.
## Who can review?
Hi @NielsRogge what do you think of this??
Thanks!
## TODOs
- [x] Add GIT model in VQA pipelines
- [x] Add tests
- [ ] Move custom preprocessing to the tokenizer/processor class (this shall be done in other PR, recommended in https://github.com/huggingface/transformers/pull/25509#issuecomment-1719288200)
- [ ] Update docs
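For illustration, a minimal sketch of how the pipeline could be called once GIT support lands; the checkpoint comes from the discussion above and the exact output format is an assumption:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes this PR's GitForCausalLM support is merged.
vqa = pipeline("visual-question-answering", model="microsoft/git-base-textvqa")

# The pipeline accepts a local path or URL for the image.
outputs = vqa(image="path/to/image.png", question="What is in the picture?")
print(outputs)
```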
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25509/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25509",
"html_url": "https://github.com/huggingface/transformers/pull/25509",
"diff_url": "https://github.com/huggingface/transformers/pull/25509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25509.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25508
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25508/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25508/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25508/events
|
https://github.com/huggingface/transformers/issues/25508
| 1,850,296,251 |
I_kwDOCUB6oc5uSUe7
| 25,508 |
Using `torch.compile` disables eval loss when using `Trainer`
|
{
"login": "RoniGurvich",
"id": 14060729,
"node_id": "MDQ6VXNlcjE0MDYwNzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14060729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RoniGurvich",
"html_url": "https://github.com/RoniGurvich",
"followers_url": "https://api.github.com/users/RoniGurvich/followers",
"following_url": "https://api.github.com/users/RoniGurvich/following{/other_user}",
"gists_url": "https://api.github.com/users/RoniGurvich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RoniGurvich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RoniGurvich/subscriptions",
"organizations_url": "https://api.github.com/users/RoniGurvich/orgs",
"repos_url": "https://api.github.com/users/RoniGurvich/repos",
"events_url": "https://api.github.com/users/RoniGurvich/events{/privacy}",
"received_events_url": "https://api.github.com/users/RoniGurvich/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Could you try again on the latest release please?",
"Sure, I get the same output with `4.31.0`",
"cc @muellerzr ",
"@RoniGurvich can you give me some more information about libraries being used?\r\n\r\nWhat version of `accelerate` and `torch` specifically.\r\n\r\nI ask as running your code gives me an error when using the latest `main` on accelerate and PyTorch 2.0.1:\r\n```\r\nException: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.embedding.default(*(tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\r\n [-0.0095, -0.0085, 0.0011, ..., -0.0015, -0.0191, 0.0116],\r\n [ 0.0036, 0.0067, 0.0181, ..., 0.0131, -0.0042, 0.0615],\r\n ...,\r\n [-0.0080, -0.0559, -0.0439, ..., 0.0162, 0.0055, -0.0214],\r\n [ 0.0052, -0.0169, -0.0228, ..., 0.0079, 0.0002, 0.0317],\r\n [-0.0319, -0.0168, -0.0156, ..., -0.0213, -0.0202, 0.0351]],\r\n device='cuda:0', grad_fn=<BroadcastBackward>), FakeTensor(FakeTensor(..., device='meta', size=(2, 128), dtype=torch.int64), cuda:0), 0), **{}) \r\n\r\nWhile executing %self_embeddings_word_embeddings : [#users=1] = call_module[target=self_embeddings_word_embeddings](args = (%input_ids,), kwargs = {})\r\nOriginal traceback:\r\n File \"/home/zach_mueller_huggingface_co/transformers/src/transformers/models/bert/modeling_bert.py\", line 232, in forward\r\n inputs_embeds = self.word_embeddings(input_ids)\r\n | File \"/home/zach_mueller_huggingface_co/transformers/src/transformers/models/bert/modeling_bert.py\", line 1015, in <graph break in forward>\r\n embedding_output = self.embeddings(\r\n```",
"```\r\naccelerate 0.20.3\r\ntorch 2.0.1+cu118\r\n```",
"Is there anything else I can do to help with this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@RoniGurvich sorry for the delay, I recommend using the latest `transformers` and `accelerate` versions for this, as I was able to report the `eval_loss` just fine:\r\n\r\n```\r\n{'loss': 10.5259, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.33} \r\n{'eval_loss': 10.493117332458496, 'eval_runtime': 0.1002, 'eval_samples_per_second': 59.863, 'eval_steps_per_second': 29.931, 'epoch': 0.33} \r\n{'loss': 10.5693, 'learning_rate': 1.6666666666666667e-05, 'epoch': 0.67} \r\n{'eval_loss': 10.469869613647461, 'eval_runtime': 0.0749, 'eval_samples_per_second': 80.146, 'eval_steps_per_second': 40.073, 'epoch': 0.67} \r\n{'loss': 10.5174, 'learning_rate': 0.0, 'epoch': 1.0} \r\n{'eval_loss': 10.559216499328613, 'eval_runtime': 0.0741, 'eval_samples_per_second': 80.94, 'eval_steps_per_second': 40.47, 'epoch': 1.0} \r\n{'train_runtime': 1.5694, 'train_samples_per_second': 3.823, 'train_steps_per_second': 1.912, 'train_loss': 10.537548383076986, 'epoch': 1.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.91it/s]\r\n 0%| | 0/3 [00:00<?, ?it/s][2023-10-11 15:45:26,168] torch._inductor.utils: [WARNING] using triton random, expect difference from eager\r\n{'loss': 10.4365, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.33} \r\n{'eval_runtime': 8.5381, 'eval_samples_per_second': 0.703, 'eval_steps_per_second': 0.351, 'epoch': 0.33} \r\n{'loss': 10.5766, 'learning_rate': 1.6666666666666667e-05, 'epoch': 0.67} \r\n{'eval_runtime': 0.0631, 'eval_samples_per_second': 95.152, 'eval_steps_per_second': 47.576, 'epoch': 0.67} \r\n{'loss': 10.4973, 'learning_rate': 0.0, 'epoch': 1.0} \r\n{'eval_runtime': 0.0613, 'eval_samples_per_second': 97.835, 'eval_steps_per_second': 48.917, 'epoch': 1.0} \r\n{'train_runtime': 66.7434, 'train_samples_per_second': 0.09, 'train_steps_per_second': 0.045, 'train_loss': 10.50345547993978, 'epoch': 1.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [01:06<00:00, 22.25s/it]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,699 | 1,699 |
NONE
| null |
### System Info
python version 3.10.6
transformers version 4.30.2
linux machine
running in a poetry env
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following snippet is a minimal example of training a BERT model for masked language modeling (adapted from the examples).
When calling it with a compiled model, the `eval_loss` log is missing (initially observed in TensorBoard).
```python
import tempfile
import torch
from torch.utils.data import Dataset
from transformers import (
BertConfig,
BertForMaskedLM,
BertTokenizer,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments,
)
def _setup_model(max_tokens, pad_token_id, vocab_size, device):
bert_config = BertConfig(
vocab_size=vocab_size,
max_position_embeddings=max_tokens,
pad_token_id=pad_token_id,
)
model = BertForMaskedLM(config=bert_config).to(device)
return model
class MockDataset(Dataset):
def __init__(self, sample_size, dataset_size, vocab_size, device):
self.sample_size = sample_size
self.n_samples = dataset_size
self.vocab_size = vocab_size
self.device = device
def __len__(self):
return self.n_samples
def __getitem__(self, item):
return torch.randint(0, self.vocab_size, (self.sample_size,)).to(self.device)
def run_training(compile_model: bool):
batch_size = 2
max_tokens = 128
dataset_size = 6
mlm_proba = 0.15
device = "cpu"
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
vocab_size = tokenizer.vocab_size
model = _setup_model(max_tokens, tokenizer.pad_token_id, vocab_size, device)
if compile_model:
model = torch.compile(model)
dataset = MockDataset(
sample_size=max_tokens,
dataset_size=dataset_size,
vocab_size=vocab_size,
device=device,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=mlm_proba
)
with tempfile.TemporaryDirectory() as temp_dir:
trainer_args = TrainingArguments(
output_dir=temp_dir,
do_train=True,
do_eval=True,
num_train_epochs=1,
evaluation_strategy="steps",
eval_steps=1,
logging_steps=1,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
)
trainer = Trainer(
model=model,
args=trainer_args,
train_dataset=dataset,
eval_dataset=dataset,
data_collator=data_collator,
)
trainer.train()
if __name__ == "__main__":
for c in [False, True]:
run_training(compile_model=c)
```
Outputs:
```
{'loss': 10.3661, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.33}
{'eval_loss': 10.538872718811035, 'eval_runtime': 0.0573, 'eval_samples_per_second': 104.724, 'eval_steps_per_second': 52.362, 'epoch': 0.33}
{'loss': 10.5957, 'learning_rate': 1.6666666666666667e-05, 'epoch': 0.67}
{'eval_loss': 10.470108985900879, 'eval_runtime': 0.0562, 'eval_samples_per_second': 106.714, 'eval_steps_per_second': 53.357, 'epoch': 0.67}
{'loss': 10.4629, 'learning_rate': 0.0, 'epoch': 1.0}
{'eval_loss': 10.48947811126709, 'eval_runtime': 0.0583, 'eval_samples_per_second': 102.897, 'eval_steps_per_second': 51.449, 'epoch': 1.0}
{'train_runtime': 1.5504, 'train_samples_per_second': 3.87, 'train_steps_per_second': 1.935, 'train_loss': 10.474879264831543, 'epoch': 1.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.94it/s]
{'loss': 10.4365, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.33}
{'eval_runtime': 9.5648, 'eval_samples_per_second': 0.627, 'eval_steps_per_second': 0.314, 'epoch': 0.33}
{'loss': 10.3686, 'learning_rate': 1.6666666666666667e-05, 'epoch': 0.67}
{'eval_runtime': 0.0502, 'eval_samples_per_second': 119.503, 'eval_steps_per_second': 59.751, 'epoch': 0.67}
{'loss': 10.2967, 'learning_rate': 0.0, 'epoch': 1.0}
{'eval_runtime': 0.0484, 'eval_samples_per_second': 123.961, 'eval_steps_per_second': 61.981, 'epoch': 1.0}
{'train_runtime': 37.8631, 'train_samples_per_second': 0.158, 'train_steps_per_second': 0.079, 'train_loss': 10.367273966471354, 'epoch': 1.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:37<00:00, 12.62s/it]
```
### Expected behavior
Expected identical outputs from both calls to `run_training`, specifically I expected `eval_loss` to be logged in the second run as well.
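As a side note, a sketch of an alternative setup that hands compilation to the `Trainer` itself, assuming a `transformers` version where `TrainingArguments` exposes the `torch_compile` flag; this mirrors the snippet above and is not claimed to fix the missing `eval_loss`, just a variant worth comparing:
```python
# Sketch: let the Trainer compile the model instead of calling torch.compile()
# manually; assumes TrainingArguments supports the torch_compile flag.
trainer_args = TrainingArguments(
    output_dir=temp_dir,
    do_train=True,
    do_eval=True,
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=1,
    logging_steps=1,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    torch_compile=True,
)
```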
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25508/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25507
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25507/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25507/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25507/events
|
https://github.com/huggingface/transformers/issues/25507
| 1,850,243,778 |
I_kwDOCUB6oc5uSHrC
| 25,507 |
Is non-determinism in outputs generated by LlamaForCausalLM, the expected behavior?
|
{
"login": "rachithaiyappa",
"id": 44749902,
"node_id": "MDQ6VXNlcjQ0NzQ5OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/44749902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rachithaiyappa",
"html_url": "https://github.com/rachithaiyappa",
"followers_url": "https://api.github.com/users/rachithaiyappa/followers",
"following_url": "https://api.github.com/users/rachithaiyappa/following{/other_user}",
"gists_url": "https://api.github.com/users/rachithaiyappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rachithaiyappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachithaiyappa/subscriptions",
"organizations_url": "https://api.github.com/users/rachithaiyappa/orgs",
"repos_url": "https://api.github.com/users/rachithaiyappa/repos",
"events_url": "https://api.github.com/users/rachithaiyappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/rachithaiyappa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante\r\nI think the model has random sampling enabled by default in its [generation config](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/generation_config.json). For reproducible results, you will need to set `do_sample` to `False`.",
"Thank you!"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/llama/modeling_llama.py#L727
I've noticed that this class does not produce deterministic outputs.
See the discussion on the Hugging Face forum [here](https://discuss.huggingface.co/t/making-llama-text-generation-deterministic/50437).
**Actual issue:**
Following the text generation code template [here](https://huggingface.co/docs/transformers/main/model_doc/llama2), I’ve been trying to generate some outputs from llama2 but keep running into stochastic generations.
For instance, running the same prompt through the model.generate() twice results in two different outputs as shown in the example below.
I’ve used model.generate() with other LLMs (e.g., flant5) with the other parameters remaining the same and have obtained deterministic outputs.
Also tried AutoModelForCausalLM instead of LlamaForCausalLM but still got different outputs each time for the same prompt.
How do I make sure I get the same text generated each time?
Code to reproduce:
```
from transformers import AutoTokenizer, LlamaForCausalLM
model_name = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir="/data2/racball/llms")
model = LlamaForCausalLM.from_pretrained(
model_name,
cache_dir="/data2/racball/llms",
device_map = "sequential",
)
prompt = "What is up?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# Run1: 'What is up?\n\nI have a problem with my `docker-compose.yml` file. I have a service that should run a'
# Run2: "What is up?\n\nIt's been a while since I've posted, but I've been pretty busy with work and other"
```
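For reference, a minimal sketch of what greedy (non-sampling) decoding would look like here, assuming sampling is the source of the variation; it reuses the `model`, `inputs`, and `tokenizer` from the snippet above:
```python
# Greedy decoding sketch: with do_sample=False the same prompt should give
# the same continuation on every run.
generate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=False)
print(
    tokenizer.batch_decode(
        generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]
)
```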
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25507/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25506
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25506/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25506/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25506/events
|
https://github.com/huggingface/transformers/pull/25506
| 1,850,116,979 |
PR_kwDOCUB6oc5X5ntB
| 25,506 |
Adds `TRANSFORMERS_TEST_DEVICE`
|
{
"login": "vvvm23",
"id": 44398246,
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvvm23",
"html_url": "https://github.com/vvvm23",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi thanks, it is a good suggestion to simply try creating the device. Perhaps `diffusers` can be updated to do the same?\r\n\r\nDoing this approach makes the list of backends obsolete, so I removed. I just let it throw an unhandled error as I feel the message is informative enough, but let me know if you want that to be caught.",
"Updated the error message, let me know if it works for you~",
"> Thanks a lot! Can you just run make style on your branch to fix the quality issue?\r\n\r\nWoops, I ran this but forgot to commit the changes.\r\n\r\n> Could you also add some documentation in testing.md to not make it hidden?\r\n\r\nWill do 👍 where do you think the best place is this for this in the file?\r\n\r\nI should say, to all the suggested changes here, should these also be mirrored in `diffusers`? My initial PR was basically a direct copy from there.",
"You can probably add it to `docs/source/en/testing.md`.\r\nNot sure about diffusers but documenting a new env variable should always be good 😉 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25506). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker added a small section following the \"To GPU or not to GPU\" section – in my head this fits well. Let me know how you feel about the language there 🤗 I ran `make style` also which I believe also formats the docs."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Adds support for the environment variable `TRANSFORMERS_TEST_DEVICE` to set the device used when running the test suite. This pattern is already in use in `diffusers`.
# What does this PR do?
Adds support for the environment variable `TRANSFORMERS_TEST_DEVICE` to set the device used when running the test suite. This is a pattern already in use in [`diffusers`](https://github.com/huggingface/diffusers/blob/d93ca268938940839118fdb8d0cf4c1ca7a9fead/src/diffusers/utils/testing_utils.py#L45) which would be useful to have in transformers.
Additionally, I would like to propose removing the check on available backends, as is found in the diffusers version. I included it here to match diffusers, but it would be useful to remove it (for example, if testing new backends and the like). Let me know if that is okay and I will amend the PR.
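A quick sketch of how the variable would be consumed once this change is in, shown from Python for convenience (it can equally be exported in the shell); treating `"cpu"` as the target device here is just an example:
```python
import os

# Must be set before transformers.testing_utils is imported, since the test
# device is resolved at import time; "cpu" is only an example value.
os.environ["TRANSFORMERS_TEST_DEVICE"] = "cpu"

from transformers.testing_utils import torch_device

print(torch_device)  # expected to reflect TRANSFORMERS_TEST_DEVICE
```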
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, git blame says you! :hugs:
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25506/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25506",
"html_url": "https://github.com/huggingface/transformers/pull/25506",
"diff_url": "https://github.com/huggingface/transformers/pull/25506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25506.patch",
"merged_at": 1692272494000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25505
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25505/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25505/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25505/events
|
https://github.com/huggingface/transformers/pull/25505
| 1,850,089,105 |
PR_kwDOCUB6oc5X5hyD
| 25,505 |
Conditional DETR type hint fix
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @Rocketknight1 "
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
Quick fix to the type hints for Conditional DETR! cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25505/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25505",
"html_url": "https://github.com/huggingface/transformers/pull/25505",
"diff_url": "https://github.com/huggingface/transformers/pull/25505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25505.patch",
"merged_at": 1692033126000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25504
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25504/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25504/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25504/events
|
https://github.com/huggingface/transformers/issues/25504
| 1,850,027,801 |
I_kwDOCUB6oc5uRS8Z
| 25,504 |
Maskformer is missing dataclass decorator
|
{
"login": "cchan-lm",
"id": 88676609,
"node_id": "MDQ6VXNlcjg4Njc2NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/88676609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cchan-lm",
"html_url": "https://github.com/cchan-lm",
"followers_url": "https://api.github.com/users/cchan-lm/followers",
"following_url": "https://api.github.com/users/cchan-lm/following{/other_user}",
"gists_url": "https://api.github.com/users/cchan-lm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cchan-lm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cchan-lm/subscriptions",
"organizations_url": "https://api.github.com/users/cchan-lm/orgs",
"repos_url": "https://api.github.com/users/cchan-lm/repos",
"events_url": "https://api.github.com/users/cchan-lm/events{/privacy}",
"received_events_url": "https://api.github.com/users/cchan-lm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@cchan-lm Indeed! Would you like to open a PR to add the decorator? This way you get the github contribution for spotting this :) ",
"Sure! I'll do so from my personal account, @rachthree.\r\n\r\nTo avoid this in the future, what are your thoughts on adding a test to check that all direct subclasses of `ModelOutput` are dataclasses, and/or add a check in `ModelOutput.__init__subclass__`?",
"@cchan-lm Great! Yep - a small test would be great to add alongside. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Resolved by https://github.com/huggingface/transformers/pull/25638."
] | 1,692 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.9.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hello! We are working on utilities for model investigations. Our utility attempts to process generic input/output types at the layer level, accounting for `dataclass` objects as well. We ran into an issue with Maskformer where an intermediate output from `MaskFormerPixelDecoderOutput` was not an expected dataclass.
To reproduce, run the below:
```python
from dataclasses import is_dataclass
from transformers.models.maskformer.modeling_maskformer import MaskFormerPixelLevelModuleOutput
from transformers.models.maskformer.modeling_maskformer import MaskFormerPixelDecoderOutput
pixel_level_module = MaskFormerPixelLevelModuleOutput()
assert is_dataclass(pixel_level_module) # passes
decoder_output = MaskFormerPixelDecoderOutput()
assert is_dataclass(decoder_output) # will fail
```
Looking at https://github.com/huggingface/transformers/blob/80f29a25a7d2c945c769b41c0fd41e89cf7de31d/src/transformers/models/maskformer/modeling_maskformer.py#L121, the `@dataclass` decorator appears to be missing from `MaskFormerPixelDecoderOutput`. I think currently this is the only `ModelOutput` subclass in `transformers.models` that does not have the decorator.
Thank you in advance for your response!
### Expected behavior
I expect `MaskFormerPixelDecoderOutput` to be a `dataclass` object and `assert is_dataclass(decoder_output)` to pass.
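For illustration, a sketch of the one-line fix plus the kind of blanket check mentioned above; the patched class and the test function below are hypothetical stand-ins, not existing code:
```python
from dataclasses import dataclass, is_dataclass
from typing import Optional

import torch

from transformers.utils import ModelOutput


# Sketch of the fix: adding the missing decorator (field mirrors the real class).
@dataclass
class PatchedMaskFormerPixelDecoderOutput(ModelOutput):
    last_hidden_state: Optional[torch.FloatTensor] = None


assert is_dataclass(PatchedMaskFormerPixelDecoderOutput)


# Hypothetical blanket check: every ModelOutput subclass should be a dataclass.
def check_model_output_subclasses():
    for subclass in ModelOutput.__subclasses__():
        assert is_dataclass(subclass), f"{subclass.__name__} is missing @dataclass"
```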
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25504/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25504/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25503
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25503/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25503/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25503/events
|
https://github.com/huggingface/transformers/issues/25503
| 1,849,992,469 |
I_kwDOCUB6oc5uRKUV
| 25,503 |
Error finetuning Whisper using new tokenizer
|
{
"login": "PeterBagnegaard",
"id": 45558589,
"node_id": "MDQ6VXNlcjQ1NTU4NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45558589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterBagnegaard",
"html_url": "https://github.com/PeterBagnegaard",
"followers_url": "https://api.github.com/users/PeterBagnegaard/followers",
"following_url": "https://api.github.com/users/PeterBagnegaard/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterBagnegaard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterBagnegaard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterBagnegaard/subscriptions",
"organizations_url": "https://api.github.com/users/PeterBagnegaard/orgs",
"repos_url": "https://api.github.com/users/PeterBagnegaard/repos",
"events_url": "https://api.github.com/users/PeterBagnegaard/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterBagnegaard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This error is most probably indicating that the `embedding` layer received indices outside of its range. Did you properly resize the embedding layer to match the size of the tokenizers' length? (Running on CPU will allow you to see the actual source of the error)",
"Thank you so much for the quick reply. This is a show-stopper for me.\r\nI think you're right, but I don't know how to fix it. \r\n\r\nWhen training with no_cuda=True I get the following error: \r\n``` Python\r\nYou're using a WhisperTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\nCell In[13], line 2\r\n 1 ### print(\"Start training\")\r\n----> 2 trainer.train()\r\n 3 #trainer.evaluate()\r\n 4 print(\"Done training\")\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:1662, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1657 self.model_wrapped = self.model\r\n 1659 inner_training_loop = find_executable_batch_size(\r\n 1660 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size\r\n 1661 )\r\n-> 1662 return inner_training_loop(\r\n 1663 args=args,\r\n 1664 resume_from_checkpoint=resume_from_checkpoint,\r\n 1665 trial=trial,\r\n 1666 ignore_keys_for_eval=ignore_keys_for_eval,\r\n 1667 )\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:1929, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1927 tr_loss_step = self.training_step(model, inputs)\r\n 1928 else:\r\n-> 1929 tr_loss_step = self.training_step(model, inputs)\r\n 1931 if (\r\n 1932 args.logging_nan_inf_filter\r\n 1933 and not is_torch_tpu_available()\r\n 1934 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))\r\n 1935 ):\r\n 1936 # if loss is nan or inf simply add the average of previous logged losses\r\n 1937 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:2699, in Trainer.training_step(self, model, inputs)\r\n 2696 return loss_mb.reduce_mean().detach().to(self.args.device)\r\n 2698 with self.compute_loss_context_manager():\r\n-> 2699 loss = self.compute_loss(model, inputs)\r\n 2701 if self.args.n_gpu > 1:\r\n 2702 loss = loss.mean() # mean() to average on multi-gpu parallel training\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:2731, in Trainer.compute_loss(self, model, inputs, return_outputs)\r\n 2729 else:\r\n 2730 labels = None\r\n-> 2731 outputs = model(**inputs)\r\n 2732 # Save past state if it exists\r\n 2733 # TODO: this needs to be fixed and made cleaner later.\r\n 2734 if self.args.past_index >= 0:\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1414, in WhisperForConditionalGeneration.forward(self, 
input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1409 if decoder_input_ids is None and decoder_inputs_embeds is None:\r\n 1410 decoder_input_ids = shift_tokens_right(\r\n 1411 labels, self.config.pad_token_id, self.config.decoder_start_token_id\r\n 1412 )\r\n-> 1414 outputs = self.model(\r\n 1415 input_features,\r\n 1416 attention_mask=attention_mask,\r\n 1417 decoder_input_ids=decoder_input_ids,\r\n 1418 encoder_outputs=encoder_outputs,\r\n 1419 decoder_attention_mask=decoder_attention_mask,\r\n 1420 head_mask=head_mask,\r\n 1421 decoder_head_mask=decoder_head_mask,\r\n 1422 cross_attn_head_mask=cross_attn_head_mask,\r\n 1423 past_key_values=past_key_values,\r\n 1424 decoder_inputs_embeds=decoder_inputs_embeds,\r\n 1425 use_cache=use_cache,\r\n 1426 output_attentions=output_attentions,\r\n 1427 output_hidden_states=output_hidden_states,\r\n 1428 return_dict=return_dict,\r\n 1429 )\r\n 1430 lm_logits = self.proj_out(outputs[0])\r\n 1432 loss = None\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1279, in WhisperModel.forward(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1272 encoder_outputs = BaseModelOutput(\r\n 1273 last_hidden_state=encoder_outputs[0],\r\n 1274 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,\r\n 1275 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,\r\n 1276 )\r\n 1278 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)\r\n-> 1279 decoder_outputs = self.decoder(\r\n 1280 input_ids=decoder_input_ids,\r\n 1281 attention_mask=decoder_attention_mask,\r\n 1282 encoder_hidden_states=encoder_outputs[0],\r\n 1283 head_mask=decoder_head_mask,\r\n 1284 cross_attn_head_mask=cross_attn_head_mask,\r\n 1285 past_key_values=past_key_values,\r\n 1286 inputs_embeds=decoder_inputs_embeds,\r\n 1287 use_cache=use_cache,\r\n 1288 output_attentions=output_attentions,\r\n 1289 output_hidden_states=output_hidden_states,\r\n 1290 return_dict=return_dict,\r\n 1291 )\r\n 1293 if not return_dict:\r\n 1294 return decoder_outputs + encoder_outputs\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or 
self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1030, in WhisperDecoder.forward(self, input_ids, attention_mask, encoder_hidden_states, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1027 past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0\r\n 1029 if inputs_embeds is None:\r\n-> 1030 inputs_embeds = self.embed_tokens(input_ids)\r\n 1032 attention_mask = self._prepare_decoder_attention_mask(\r\n 1033 attention_mask, input_shape, inputs_embeds, past_key_values_length\r\n 1034 )\r\n 1036 # embed positions\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py:162, in Embedding.forward(self, input)\r\n 161 def forward(self, input: Tensor) -> Tensor:\r\n--> 162 return F.embedding(\r\n 163 input, self.weight, self.padding_idx, self.max_norm,\r\n 164 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n\r\nFile ~/.local/lib/python3.10/site-packages/torch/nn/functional.py:2210, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 2204 # Note [embedding_renorm set_grad_enabled]\r\n 2205 # XXX: equivalent to\r\n 2206 # with torch.no_grad():\r\n 2207 # torch.embedding_renorm_\r\n 2208 # remove once script supports set_grad_enabled\r\n 2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n\r\nIndexError: index out of range in self\r\n```\r\n\r\nThis confuses me because I'm training the new tokenizer like this:\r\n``` Python\r\nnew_tokenizer = old_tokenizer.train_new_from_iterator(\r\n get_training_corpus(), \r\n old_tokenizer.vocab_size,\r\n special_tokens_map=old_tokenizer.special_tokens_map,\r\n new_special_tokens=old_tokenizer.all_special_tokens)\r\n```\r\nsaying that its vocab_size should be the same as the old one. the commands\r\n``` Python\r\nprint(old_tokenizer.vocab_size) # 50257\r\nprint(len(old_tokenizer.vocab)) # 50364\r\n```\r\ntell me that the vocab of the old tokenizer has appended the 107 special tokens at the end of the vocab, whereas the commands\r\n``` Python\r\nprint(new_tokenizer.vocab_size) # 50257\r\nprint(len(new_tokenizer.vocab)) # 50257\r\n```\r\ntells me that the new tokenizer has prepended(?) them. \r\nSo in the old tokenizer I have \r\n`vocab = [token1, token2, ..., special_token1, special_token2...] 
# length 50257 + 107`\r\nand in the new \r\n`vocab = [special_token1, special_token2..., token1, token2, ...] # length 50257`",
"Okay, you might find help in https://github.com/huggingface/tokenizers/issues/1277. \r\nThe tokenizer's length with additional special token is `len(tokenizer)` not `tokenizer.vocab_size`. You are probably using a `fast` tokenizer, which works a bit differently from a slow one. You need to debug which inputs gave tokens outside the range of the embedding layer and check what is the max embedding layer index! ",
"I've been trying to understand how issue 1277 can help, but unsuccessfully. The problem seems to be too different to what I'm trying to achieve. \r\nI've made some tests to see how the ids and tokens fit together. In the old model the special token ids start right after the normal token ids at 50257 and continue all the way up to len(tokenizer). The first two special tokens after the normal tokens are bos and eos. \r\nWhen using train_new_from_iterator it seems like it moves all the special tokens to the beginning of the vocab dict. \r\n\r\n``` Python\r\ndef test_tokenizer(tokenizer):\r\n idxs = [tokenizer.vocab[special_token] for special_token in tokenizer.all_special_tokens]\r\n is_wrong = all([idx < tokenizer.vocab_size for idx in idxs])\r\n print(f\"Are special tokens after normal tokens? {not is_wrong}\")\r\n print(f\"bos_token: {tokenizer.vocab['<|startoftranscript|>']} eos_token: {tokenizer.vocab['<|endoftext|>']}\")\r\n print(\"Special token ids: \" + \", \".join([str(idx) for idx in idxs]))\r\n\r\ndef max_key_val(tokenizer):\r\n d = tokenizer.vocab\r\n key = max(d, key=d.get)\r\n return key, d[key]\r\n\r\ndef min_key_val(tokenizer):\r\n d = tokenizer.vocab\r\n key = min(d, key=d.get)\r\n return key, d[key]\r\n\r\nprint(f\"Old tokenizer: \\n{len(old_tokenizer)=} | {old_tokenizer.vocab_size=} | {min_key_val(old_tokenizer)=} | {max_key_val(old_tokenizer)=}\")\r\ntest_tokenizer(old_tokenizer)\r\n\r\nprint(f\"\\nNew tokenizer: \\n{len(new_tokenizer)=} | {new_tokenizer.vocab_size=} | {min_key_val(new_tokenizer)=} | {max_key_val(new_tokenizer)=}\")\r\ntest_tokenizer(new_tokenizer)\r\n```\r\n```\r\nOld tokenizer: \r\nlen(old_tokenizer)=50364 | old_tokenizer.vocab_size=50257 | min_key_val(old_tokenizer)=('!', 0) | max_key_val(old_tokenizer)=('<|notimestamps|>', 50363)\r\nAre special tokens after normal tokens? True\r\nbos_token: 50258 eos_token: 50257\r\nSpecial token ids: 50257, 50256, 50257, 50258, 50259, 50260, 50261, 50262, 50263, 50264, 50265, 50266, 50267, 50268, 50269, 50270, 50271, 50272, 50273, 50274, 50275, 50276, 50277, 50278, 50279, 50280, 50281, 50282, 50283, 50284, 50285, 50286, 50287, 50288, 50289, 50290, 50291, 50292, 50293, 50294, 50295, 50296, 50297, 50298, 50299, 50300, 50301, 50302, 50303, 50304, 50305, 50306, 50307, 50308, 50309, 50310, 50311, 50312, 50313, 50314, 50315, 50316, 50317, 50318, 50319, 50320, 50321, 50322, 50323, 50324, 50325, 50326, 50327, 50328, 50329, 50330, 50331, 50332, 50333, 50334, 50335, 50336, 50337, 50338, 50339, 50340, 50341, 50342, 50343, 50344, 50345, 50346, 50347, 50348, 50349, 50350, 50351, 50352, 50353, 50354, 50355, 50356, 50357, 50358, 50359, 50360, 50361, 50362, 50363\r\n\r\nNew tokenizer: \r\nlen(new_tokenizer)=50257 | new_tokenizer.vocab_size=50257 | min_key_val(new_tokenizer)=('<|endoftext|>', 0) | max_key_val(new_tokenizer)=('sebiopsi', 50256)\r\nAre special tokens after normal tokens? False\r\nbos_token: 1 eos_token: 0\r\nSpecial token ids: 0, 107, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107\r\n```\r\nThe model expects the bos and eos at indices 50258 and 50257, but after using train_new_from_iterator these indices are wrong. 
\r\n\r\n``` Python\r\nmodel.config\r\n```\r\n```\r\nWhisperConfig {\r\n \"_name_or_path\": \"openai/whisper-medium\",\r\n \"activation_dropout\": 0.0,\r\n \"activation_function\": \"gelu\",\r\n \"apply_spec_augment\": false,\r\n \"architectures\": [\r\n \"WhisperForConditionalGeneration\"\r\n ],\r\n \"attention_dropout\": 0.0,\r\n \"begin_suppress_tokens\": [\r\n 220,\r\n 50257\r\n ],\r\n \"bos_token_id\": 50257, <==========\r\n \"classifier_proj_size\": 256,\r\n \"d_model\": 1024,\r\n \"decoder_attention_heads\": 16,\r\n \"decoder_ffn_dim\": 4096,\r\n \"decoder_layerdrop\": 0.0,\r\n \"decoder_layers\": 24,\r\n \"decoder_start_token_id\": 50258,\r\n \"dropout\": 0.0,\r\n \"encoder_attention_heads\": 16,\r\n \"encoder_ffn_dim\": 4096,\r\n \"encoder_layerdrop\": 0.0,\r\n \"encoder_layers\": 24,\r\n eos_token_id\": 50257, <==========\r\n \"forced_decoder_ids\": null,\r\n \"init_std\": 0.02,\r\n \"is_encoder_decoder\": true,\r\n \"mask_feature_length\": 10,\r\n \"mask_feature_min_masks\": 0,\r\n \"mask_feature_prob\": 0.0,\r\n \"mask_time_length\": 10,\r\n \"mask_time_min_masks\": 2,\r\n \"mask_time_prob\": 0.05,\r\n \"max_length\": 448,\r\n \"max_source_positions\": 1500,\r\n \"max_target_positions\": 448,\r\n \"model_type\": \"whisper\",\r\n \"num_hidden_layers\": 24,\r\n \"num_mel_bins\": 80,\r\n \"pad_token_id\": 50257,\r\n \"scale_embedding\": false,\r\n \"suppress_tokens\": [],\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.28.0.dev0\",\r\n \"use_cache\": true,\r\n \"use_weighted_layer_sum\": false,\r\n vocab_size\": 50364, <==========\r\n}\r\n```\r\n\r\nI can make the error go away by making the vocab_size = len(old_tokenizer), but the ids will still not line up. \r\n\r\nMaybe I should use a SentencePiece tokenizer to create a vocab file, but there are some problems with this too. \r\nIn my tokenizer folder I have both vocab.json and tokenizer.json, both of which contain the full vocab (for some reason?). \r\ntokenizer.json also contains information about special tokens which I'm interested in. \r\nI'm considering replacing tokenizer.json => 'model' => 'vocab' and vocab.json with the correct vocab, but because the special tokens have been added to these in the new tokenizer, I'll have to find all indices of normal tokens and shift them back by the number of special tokens. \r\nThere has got to be a simple way of doing this. This seems like an obvious error in train_new_from_iterator?\r\n",
"I'll try to have a look 😉 ",
"Okay, let's just take this step by step as the reproducer is huge and involved.\r\n\r\n1. What are you trying to achieve by training a new tokenizer? Do you have a new language?\r\n\r\n2. What could be wrong here:\r\n```python\r\nnew_tokenizer = old_tokenizer.train_new_from_iterator(\r\n get_training_corpus(), \r\n old_tokenizer.vocab_size,\r\n special_tokens_map=old_tokenizer.special_tokens_map,\r\n new_special_tokens=old_tokenizer.all_special_tokens)\r\n```\r\nfor me, this is problematic, because the content of `old_tokenizer.special_tokens_map` is also in `old_tokenizer.all_special_tokens`. Would heavily suggest removing this.\r\n\r\nAlso this was not in the training example provided so not really sure why you are adding it? ",
"Could you share a pushed v ersion of the tokenizers?",
"1. I have a dataset using specialized language. There is a lot of technical jargon which the standard whisper tokenizer doesn't handle well. \r\n2. This might very well be wrong, I added this in order to check whether it solved my problem. When training a new tokenizer as\r\n``` Python\r\nnew_tokenizer = old_tokenizer.train_new_from_iterator(\r\n get_training_corpus(), \r\n old_tokenizer.vocab_size,\r\n special_tokens_map=old_tokenizer.special_tokens_map,\r\n new_special_tokens=old_tokenizer.all_special_tokens)\r\n```\r\nand\r\n``` Python\r\nnew_tokenizer = old_tokenizer.train_new_from_iterator(\r\n get_training_corpus(), \r\n old_tokenizer.vocab_size,\r\n special_tokens_map=old_tokenizer.special_tokens_map)\r\n```\r\nand\r\n``` Python\r\nnew_tokenizer = old_tokenizer.train_new_from_iterator(\r\n get_training_corpus(), \r\n old_tokenizer.vocab_size)\r\n```\r\nI get the same error. In all cases, the special tokens will be placed in the beginning of new_tokenizer.vocab and not the end like in old_tokenizer.vocab.\r\n\r\n> Could you share a pushed v ersion of the tokenizers?\r\n\r\nDo you need me to share the folder containing vocab.json, tokenizer.json, merges.txt etc?",
"Yes, push the tokenizer to the hub and I'll be able to have a look at the internal state 😉 ",
"This is my first time using this feature. It should be available at peterBagnegaard/new_tokenizer.\r\n\r\nI made it using the following lines\r\n``` Python\r\nwhisper = WhisperTokenizerFast.from_pretrained(\"openai/whisper-medium\", language=\"danish\")\r\n\r\nwhisper_new = whisper.train_new_from_iterator(\r\n get_training_corpus(),\r\n whisper.vocab_size)\r\n\r\nwhisper_new.push_to_hub(\"new_tokenizer\")\r\n```",
"Thanks! We actually have a few tests on our CI that should ensure that we can train a tokenizer from an old tokenizers, so if this is indeed a bug we'll have to fix it! ",
"This might confuse more than it helps, but I've tried training my own tokenizer using the BpeTrainer, inspired by https://github.com/huggingface/tokenizers/issues/1277.\r\n\r\n``` Python \r\n# Based either on jstoone or openai\r\nold_tokenizer = WhisperTokenizerFast.from_pretrained(\"jstoone/whisper-medium-da\", language=\"danish\")\r\n# old_tokenizer = WhisperTokenizerFast.from_pretrained(\"openai/whisper-medium\", language=\"danish\")\r\n\r\ntokenizer = old_tokenizer.backend_tokenizer\r\n\r\n# Either adding special tokens to trainer or not\r\ntrainer = trainers.BpeTrainer(vocab_size=old_tokenizer.vocab_size)#, special_tokens=old_tokenizer.all_special_tokens)\r\n\r\ntokenizer.train_from_iterator(get_training_corpus(), trainer=trainer)\r\n\r\ntokenizer.save(\"tokenizer.json\")\r\n\r\nfast_tokenizer = WhisperTokenizerFast(\r\ntokenizer_file=\"tokenizer.json\",\r\nmodel_max_length=old_tokenizer.model_max_length,\r\nlanguage='danish',\r\ntask='transcribe',\r\npredict_timestamps=True)\r\n\r\nspecial_tokens = {\"bos_token\" : AddedToken(old_tokenizer.bos_token or \"\", normalized=True),\r\n \"eos_token\" : AddedToken(old_tokenizer.eos_token or \"\", normalized=True),\r\n \"unk_token\" : AddedToken(old_tokenizer.unk_token or \"[UNK]\", normalized=True),\r\n \"sep_token\" : old_tokenizer.sep_token or \"\",\r\n \"pad_token\" : old_tokenizer.pad_token or \"\",\r\n \"cls_token\" : old_tokenizer.cls_token or \"\",\r\n \"mask_token\" : old_tokenizer.mask_token or \"\",\r\n \"additional_special_tokens\" : old_tokenizer.additional_special_tokens}\r\n\r\nfast_tokenizer.add_special_tokens(special_tokens)\r\n\r\nfast_tokenizer.set_prefix_tokens(task='transcribe', language='danish')\r\n```\r\n\r\nI've been experimenting with using both openAis tokenizer, as well as a tokenizer made by Jstoone (the one I'm fine-tuning further).\r\nI've also tried adding special tokens to the trainer or not. This gives four possibilities:\r\n\r\n```\r\nOpenAi + added special tokens: [FAILS] special tokens are placed first in vocab\r\nJstoone + added special tokens: [FAILS] special tokens are placed first in vocab\r\nOpenAi + No added special tokens: [PANICS] train_from_iterator throws PanicException: Missing additional token\r\nJstoone + No added special tokens: [WORKS] special tokens are placed last in vocab\r\n```\r\nSo while I can technically continue, this seems like a problem (I am so confused!)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Glad to know that this worked. A few major changes were recently pushed to the `transformers` library regarding added tokens which might have also fixed some issues you could have been facing! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@PeterBagnegaard Did you ever get this to work? I am doing the same thing as you, but my model is predicting gibberish at the end. \r\n\r\nWere you able to get Whisper to correctly learn a new tokenizer, and if you could, how did you?",
"If you train a new tokenizer, the model will have to be trained from scratch as you are learning a new mapping from token to ids which is literally miles away from the one it was trained on"
] | 1,692 | 1,703 | 1,700 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-6.2.15-100.fc36.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
## Information
I am using whisper-medium-da, and I've based my code on these tutorials:
- "Training a new tokenizer from an old one": https://huggingface.co/learn/nlp-course/chapter6/2
- "Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers": https://huggingface.co/blog/fine-tune-whisper
I'm trying to fine-tune Whisper using a tokenizer other than the one provided by Whisper (but based on it). This gives the following error:
```
You're using a WhisperTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [102,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [103,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
---------------------------------------------------------------------------
You're using a WhisperTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [112,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [3,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 2
1 ### print("Start training")
----> 2 trainer.train()
3 #trainer.evaluate()
4 print("Done training")
File ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:1662, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1657 self.model_wrapped = self.model
1659 inner_training_loop = find_executable_batch_size(
1660 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1661 )
-> 1662 return inner_training_loop(
1663 args=args,
1664 resume_from_checkpoint=resume_from_checkpoint,
1665 trial=trial,
1666 ignore_keys_for_eval=ignore_keys_for_eval,
1667 )
File ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:1929, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1927 tr_loss_step = self.training_step(model, inputs)
1928 else:
-> 1929 tr_loss_step = self.training_step(model, inputs)
1931 if (
1932 args.logging_nan_inf_filter
1933 and not is_torch_tpu_available()
1934 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1935 ):
1936 # if loss is nan or inf simply add the average of previous logged losses
1937 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:2699, in Trainer.training_step(self, model, inputs)
2696 return loss_mb.reduce_mean().detach().to(self.args.device)
2698 with self.compute_loss_context_manager():
-> 2699 loss = self.compute_loss(model, inputs)
2701 if self.args.n_gpu > 1:
2702 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/anaconda3/lib/python3.10/site-packages/transformers/trainer.py:2731, in Trainer.compute_loss(self, model, inputs, return_outputs)
2729 else:
2730 labels = None
-> 2731 outputs = model(**inputs)
2732 # Save past state if it exists
2733 # TODO: this needs to be fixed and made cleaner later.
2734 if self.args.past_index >= 0:
File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1414, in WhisperForConditionalGeneration.forward(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1409 if decoder_input_ids is None and decoder_inputs_embeds is None:
1410 decoder_input_ids = shift_tokens_right(
1411 labels, self.config.pad_token_id, self.config.decoder_start_token_id
1412 )
-> 1414 outputs = self.model(
1415 input_features,
1416 attention_mask=attention_mask,
1417 decoder_input_ids=decoder_input_ids,
1418 encoder_outputs=encoder_outputs,
1419 decoder_attention_mask=decoder_attention_mask,
1420 head_mask=head_mask,
1421 decoder_head_mask=decoder_head_mask,
1422 cross_attn_head_mask=cross_attn_head_mask,
1423 past_key_values=past_key_values,
1424 decoder_inputs_embeds=decoder_inputs_embeds,
1425 use_cache=use_cache,
1426 output_attentions=output_attentions,
1427 output_hidden_states=output_hidden_states,
1428 return_dict=return_dict,
1429 )
1430 lm_logits = self.proj_out(outputs[0])
1432 loss = None
File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1279, in WhisperModel.forward(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1272 encoder_outputs = BaseModelOutput(
1273 last_hidden_state=encoder_outputs[0],
1274 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1275 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1276 )
1278 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
-> 1279 decoder_outputs = self.decoder(
1280 input_ids=decoder_input_ids,
1281 attention_mask=decoder_attention_mask,
1282 encoder_hidden_states=encoder_outputs[0],
1283 head_mask=decoder_head_mask,
1284 cross_attn_head_mask=cross_attn_head_mask,
1285 past_key_values=past_key_values,
1286 inputs_embeds=decoder_inputs_embeds,
1287 use_cache=use_cache,
1288 output_attentions=output_attentions,
1289 output_hidden_states=output_hidden_states,
1290 return_dict=return_dict,
1291 )
1293 if not return_dict:
1294 return decoder_outputs + encoder_outputs
File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:1032, in WhisperDecoder.forward(self, input_ids, attention_mask, encoder_hidden_states, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1029 if inputs_embeds is None:
1030 inputs_embeds = self.embed_tokens(input_ids)
-> 1032 attention_mask = self._prepare_decoder_attention_mask(
1033 attention_mask, input_shape, inputs_embeds, past_key_values_length
1034 )
1036 # embed positions
1037 if input_ids is not None:
File ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:921, in WhisperDecoder._prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length)
918 combined_attention_mask = None
920 if input_shape[-1] > 1:
--> 921 combined_attention_mask = _make_causal_mask(
922 input_shape,
923 inputs_embeds.dtype,
924 device=inputs_embeds.device,
925 past_key_values_length=past_key_values_length,
926 )
928 if attention_mask is not None:
929 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
930 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
File ~/anaconda3/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py:79, in _make_causal_mask(input_ids_shape, dtype, device, past_key_values_length)
75 """
76 Make causal mask used for bi-directional self-attention.
77 """
78 bsz, tgt_len = input_ids_shape
---> 79 mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
80 mask_cond = torch.arange(mask.size(-1), device=device)
81 mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
The tokenizer from whisper-medium-da has its special tokens added at the very end of the vocab dict (with indices around 50000), whereas new_tokenizer has its special tokens at the very beginning (with indices around 0).
I expect the error arises because tokens like <|endoftext|> and <|startoftranscript|> don't have the same index in the two vocabularies.
It seems that whenever I train my own tokenizer, even when using train_new_from_iterator, the special tokens move to the beginning of the vocabulary dict.
I'm under the impression that I don't have to retrain Whisper from scratch when retraining the tokenizer, and that I can simply set the new_tokenizer as explained above and fine-tune whisper-medium-da on my own data.
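For illustration, a minimal check along these lines makes the index mismatch visible before training is attempted (the `./whisper_new` path below is an assumption standing in for wherever the retrained tokenizer was saved):

``` python
from transformers import WhisperTokenizerFast

# Load the original tokenizer and the retrained one (the local path is hypothetical).
old_tok = WhisperTokenizerFast.from_pretrained("openai/whisper-medium")
new_tok = WhisperTokenizerFast.from_pretrained("./whisper_new")

# Compare where each tokenizer places the control tokens the model config relies on.
for token in ("<|endoftext|>", "<|startoftranscript|>"):
    print(token,
          "old id:", old_tok.convert_tokens_to_ids(token),
          "new id:", new_tok.convert_tokens_to_ids(token))

# After model.resize_token_embeddings(len(new_tok)) the embedding matrix only has
# len(new_tok) rows, yet the pretrained config still injects old ids such as
# decoder_start_token_id=50258 and pad_token_id=50257, which is what trips the
# device-side assert shown above.
print("len(old_tok):", len(old_tok), "| len(new_tok):", len(new_tok))
```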
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments, WhisperProcessor, WhisperForConditionalGeneration, AutoTokenizer
from datasets import Audio, load_dataset, DatasetDict, Dataset
from typing import Any, Dict, List, Union
from dataclasses import dataclass
import evaluate
import torch
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
label_features = [{"input_ids": feature["labels"]} for feature in features]
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = processor.tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
def prepare_dataset(batch):
audio = batch["audio"]
batch["input_features"] = processor.feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
return batch
processor_checkpoint = "openai/whisper-medium"
tokenizer_checkpoint = "whisper_new"
model_checkpoint = "openai/whisper-medium"
# Retrain the tokenizer. This is what I'm unable to do
from datasets import load_dataset
dataset = load_dataset("wikitext", name="wikitext-2-raw-v1", split="train")
def get_training_corpus():
for i in range(0, len(dataset), 1000):
yield dataset[i : i + 1000]["text"]
old_tokenizer = AutoTokenizer.from_pretrained(processor_checkpoint)
new_tokenizer = old_tokenizer.train_new_from_iterator(get_training_corpus(), old_tokenizer.vocab_size)
new_tokenizer.save_pretrained(tokenizer_checkpoint)
# Create data_collator
processor = WhisperProcessor.from_pretrained(processor_checkpoint, language='Danish', task='transcribe')
processor.tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
# Load data
dataset_dict = DatasetDict()
dataset_dict["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "da", split="train+validation", use_auth_token=True)
dataset_dict["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "da", split="test", use_auth_token=True)
dataset_dict = dataset_dict.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
dataset_dict = dataset_dict.cast_column("audio", Audio(sampling_rate=16000))
dataset_dict = dataset_dict.map(prepare_dataset, remove_columns=dataset_dict.column_names["train"], num_proc=4)
# Load model
model = WhisperForConditionalGeneration.from_pretrained(model_checkpoint)
model.config.forced_decoder_ids = None # ToDo Is this right?
model.config.suppress_tokens = []
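# NOTE: forced_decoder_ids and suppress_tokens are cleared above because the pretrained
# config stores ids from the ORIGINAL tokenizer (e.g. 50257/50258, see the config dump in
# the discussion); begin_suppress_tokens and decoder_start_token_id in the same config
# still refer to those old ids as well.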
model.resize_token_embeddings(len(processor.tokenizer))
# Train
metric = evaluate.load("wer")
training_args = Seq2SeqTrainingArguments(
output_dir="home",
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
learning_rate=8*1e-6,
warmup_steps=500,
max_steps=10000,
gradient_checkpointing=True,
fp16=True,
evaluation_strategy="steps",
per_device_eval_batch_size=1,
predict_with_generate=True,
generation_max_length=225,
save_steps=50,
eval_steps=50,
logging_steps=25,
report_to="none", #["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=False,
optim="adafactor"
)
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=dataset_dict["train"],
eval_dataset=dataset_dict["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
```
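For completeness, a minimal sketch of aligning the model config with the retrained tokenizer is shown below. It assumes the retrained tokenizer actually defines pad/bos/eos and `<|startoftranscript|>`; it only removes the out-of-range indexing and, as the discussion above points out, it does not restore the pretrained token→id mapping, so output quality will still suffer without retraining.

``` python
# Sketch only: make the config's special-token ids and the embedding size follow the new tokenizer.
model.resize_token_embeddings(len(processor.tokenizer))
model.config.vocab_size = len(processor.tokenizer)
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.bos_token_id = processor.tokenizer.bos_token_id
model.config.eos_token_id = processor.tokenizer.eos_token_id
model.config.decoder_start_token_id = processor.tokenizer.convert_tokens_to_ids("<|startoftranscript|>")
```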
### Expected behavior
The trainer.train() call would run smoothly without errors, just as it does when using the tokenizer provided by Whisper.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25503/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25502
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25502/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25502/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25502/events
|
https://github.com/huggingface/transformers/issues/25502
| 1,849,972,582 |
I_kwDOCUB6oc5uRFdm
| 25,502 |
Training RWKV is ~10x slower than GPT2 on GPU
|
{
"login": "ivnle",
"id": 41245369,
"node_id": "MDQ6VXNlcjQxMjQ1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/41245369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivnle",
"html_url": "https://github.com/ivnle",
"followers_url": "https://api.github.com/users/ivnle/followers",
"following_url": "https://api.github.com/users/ivnle/following{/other_user}",
"gists_url": "https://api.github.com/users/ivnle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivnle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivnle/subscriptions",
"organizations_url": "https://api.github.com/users/ivnle/orgs",
"repos_url": "https://api.github.com/users/ivnle/repos",
"events_url": "https://api.github.com/users/ivnle/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivnle/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The problem was that I wasn't using RWKV's custom kernel. `pip install ninja` resolves the slowdown."
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
Training RWKV is ~10x slower than GPT2 on GPU and ~3x slower on CPU. Both use Huggingface's implementations.
cuda version: 12.1
torch version: 2.1.0.dev20230812+cu121
cuda driver: 8902
huggingface version: 4.30.2
operating system: posix
Linux 5.15.0-78-generic
GPU: 1x A6000
I was able to replicate these results after downgrading to PyTorch 2.0.1. I was also able to replicate it on a 2080 Ti on another machine.
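As the resolution in the comments above notes, the slowdown disappeared once RWKV's custom CUDA kernel could be built (`pip install ninja`). A minimal sanity check, assuming only standard PyTorch utilities, would be:

``` python
# The fused RWKV kernel is compiled at runtime via torch.utils.cpp_extension, which
# requires the `ninja` build tool; without it the model falls back to a much slower
# pure-PyTorch path.
from torch.utils.cpp_extension import is_ninja_available

print("ninja available:", is_ninja_available())
# If this prints False, `pip install ninja` (the fix reported in the comments above).
```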
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
"""
This is a minimal example of training Huggingface's implementations of
RWKV and GPT2. You should observe that RWKV is 10x slower than GPT2 on
GPU.
"""
import os
import platform
import torch
import torch.nn as nn
import torch.optim as optim
import transformers
import time
input_size = 128
seq_len = 32
hidden_size = input_size
num_layers = 2
num_heads = 2
batch_size = 128
num_epochs = 1000
# set seed
torch.manual_seed(0)
def count_paramters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
gp2_config = transformers.GPT2Config(
n_positions=input_size,
n_embd=hidden_size,
n_layer=num_layers,
n_head=num_heads,
n_inner=hidden_size * 4,
)
rwkv_config = transformers.RwkvConfig(
context_length=input_size,
hidden_size=hidden_size,
num_hidden_layers=num_layers,
intermediate_size=hidden_size * 4,
)
def train(model, device="cuda"):
head = nn.Linear(hidden_size, 1)
# move to device
model = model.to(device)
head = head.to(device)
model.train()
head.train()
# Define loss function and optimizer
criterion = nn.MSELoss() # Mean Squared Error loss
optimizer = optim.SGD(list(model.parameters()) + list(head.parameters()), lr=0.01)
# Training loop
start = time.time()
for epoch in range(num_epochs):
# Generate random inputs and labels
inputs = torch.rand(
batch_size, seq_len, input_size, device=device
) # Batch size: 128, Sequence length: 32, Input size: 128
labels = torch.rand(
batch_size, seq_len, 1, device=device
) # Batch size: 32, Sequence length: 10, Output size: 1
# Zero the gradients
optimizer.zero_grad()
# Forward pass
outputs = model(inputs_embeds=inputs).last_hidden_state
outputs = head(outputs)
# Compute the loss
loss = criterion(outputs, labels)
# Backpropagation
loss.backward()
# Update weights
optimizer.step()
# Print loss every 100 epochs
if (epoch + 1) % 100 == 0:
print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}")
end = time.time()
return end - start
for device in ["cpu", "cuda"]:
    print(f"Training GPT2 on {device}")
    gpt2_model = transformers.GPT2Model(gpt2_config)
    print(f"GPT2 parameters: {count_parameters(gpt2_model)}")
    gpt2_train_time = train(gpt2_model, device=device)
    print(f"GPT2 training time: {gpt2_train_time:.2f} seconds\n")
    print(f"Training RWKV on {device}")
    rwkv_model = transformers.RwkvModel(rwkv_config)
    print(f"RWKV parameters: {count_parameters(rwkv_model)}\n")
    rwkv_train_time = train(rwkv_model, device=device)
    print(f"RWKV training time: {rwkv_train_time:.2f} seconds\n")
    # compare speed up
    print(f"RWKV / GPT2: {rwkv_train_time / gpt2_train_time:.2f}x\n")
    print("-" * 80 + "\n")
# print cuda version
print(f"cuda version: {torch.version.cuda}")
# print torch version
print(f"torch version: {torch.__version__}")
# print what cuda driver is being used
print(f"cuda driver: {torch.backends.cudnn.version()}")
# print huggingface version
print(f"huggingface version: {transformers.__version__}")
# print system information like python version, operating system, etc.
print(f"operating system: {os.name}")
print(platform.system(), platform.release())
```
Running this script produced the following output:
```
Training GPT2 on cpu
GPT2 parameters: 6846080
Epoch [100/1000], Loss: 0.1093
Epoch [200/1000], Loss: 0.0973
Epoch [300/1000], Loss: 0.0898
Epoch [400/1000], Loss: 0.0887
Epoch [500/1000], Loss: 0.0880
Epoch [600/1000], Loss: 0.0838
Epoch [700/1000], Loss: 0.0844
Epoch [800/1000], Loss: 0.0853
Epoch [900/1000], Loss: 0.0841
Epoch [1000/1000], Loss: 0.0846
GPT2 training time: 60.22 seconds
Training RWKV on cpu
RWKV parameters: 6864768
Epoch [100/1000], Loss: 0.1054
Epoch [200/1000], Loss: 0.0845
Epoch [300/1000], Loss: 0.0842
Epoch [400/1000], Loss: 0.0823
Epoch [500/1000], Loss: 0.0850
Epoch [600/1000], Loss: 0.0823
Epoch [700/1000], Loss: 0.0836
Epoch [800/1000], Loss: 0.0812
Epoch [900/1000], Loss: 0.0836
Epoch [1000/1000], Loss: 0.0846
RWKV training time: 174.93 seconds
RWKV / GPT2: 2.91x
--------------------------------------------------------------------------------
Training GPT2 on cuda
GPT2 parameters: 6846080
Epoch [100/1000], Loss: 0.1088
Epoch [200/1000], Loss: 0.0966
Epoch [300/1000], Loss: 0.0899
Epoch [400/1000], Loss: 0.0898
Epoch [500/1000], Loss: 0.0830
Epoch [600/1000], Loss: 0.0832
Epoch [700/1000], Loss: 0.0830
Epoch [800/1000], Loss: 0.0844
Epoch [900/1000], Loss: 0.0858
Epoch [1000/1000], Loss: 0.0873
GPT2 training time: 8.16 seconds
Training RWKV on cuda
RWKV parameters: 6864768
Epoch [100/1000], Loss: 0.1078
Epoch [200/1000], Loss: 0.0864
Epoch [300/1000], Loss: 0.0832
Epoch [400/1000], Loss: 0.0833
Epoch [500/1000], Loss: 0.0831
Epoch [600/1000], Loss: 0.0827
Epoch [700/1000], Loss: 0.0809
Epoch [800/1000], Loss: 0.0827
Epoch [900/1000], Loss: 0.0845
Epoch [1000/1000], Loss: 0.0847
RWKV training time: 77.16 seconds
RWKV / GPT2: 9.45x
--------------------------------------------------------------------------------
cuda version: 12.1
torch version: 2.1.0.dev20230812+cu121
cuda driver: 8902
huggingface version: 4.30.2
operating system: posix
Linux 5.15.0-78-generic
```
### Expected behavior
The RWKV paper claims that "one of the defining characteristics of RWKV is its ability to offer parallelized training and robust scalability, similar to Transformers". This led me to believe that their training times should be comparable.
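As a side note related to the resolution mentioned in the comments above: the gap disappears once the fused RWKV CUDA kernel can actually be built, which requires `ninja`. A minimal sanity check, assuming a CUDA machine (this snippet is illustrative and not part of the original report):
```
from torch.utils.cpp_extension import is_ninja_available

# If this prints False, the fused RWKV kernel cannot be compiled and the model
# falls back to the slower pure-PyTorch path; `pip install ninja` fixes that.
print(is_ninja_available())
```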
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25502/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25501
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25501/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25501/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25501/events
|
https://github.com/huggingface/transformers/pull/25501
| 1,849,860,380 |
PR_kwDOCUB6oc5X4xxm
| 25,501 |
🚨🚨🚨 Remove softmax for EfficientNetForImageClassification 🚨🚨🚨
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @rwightman Managed to track down the issue with the efficient net predictions on the hub ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
**\/!\ This is a breaking change /!\\**
The EfficientNet implementation erroneously added a softmax to the model's logits in the classification head. This breaks with the convention and [documentation](https://github.com/huggingface/transformers/blob/87c9d8a10f3935a46fc7b254d3bea97bca4bfcce/src/transformers/modeling_outputs.py#L1215) for the [model outputs](https://github.com/huggingface/transformers/blob/87c9d8a10f3935a46fc7b254d3bea97bca4bfcce/src/transformers/models/efficientnet/modeling_efficientnet.py#L623).
This results in the very small probabilities seen in the hosted widgets on the checkpoint pages, as the logits effectively have softmax applied twice.
<img width="1568" alt="image" src="https://github.com/huggingface/transformers/assets/22614925/111c8346-f679-4c2e-8800-b36cf5b385ff">
Most of the efficientnet checkpoints were downloaded < 100 times in the past month. However, [efficientnet-b7](https://huggingface.co/google/efficientnet-b7) is more popular with a few thousand downloads.
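Since the softmax is no longer applied inside the model after this change, downstream code that needs probabilities should apply it to the logits explicitly. A minimal sketch, assuming the `google/efficientnet-b7` checkpoint and a dummy image (illustrative only, not taken from this PR):
```
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, EfficientNetForImageClassification

processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))  # stand-in for a real image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # raw logits after this PR
probs = logits.softmax(dim=-1)        # apply softmax yourself if probabilities are needed
```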
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25501/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25501",
"html_url": "https://github.com/huggingface/transformers/pull/25501",
"diff_url": "https://github.com/huggingface/transformers/pull/25501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25501.patch",
"merged_at": 1692029327000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25500
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25500/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25500/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25500/events
|
https://github.com/huggingface/transformers/pull/25500
| 1,849,845,282 |
PR_kwDOCUB6oc5X4uvs
| 25,500 |
fix gptq nits
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
This PR fixes a few nits in the GPTQ integration (docs and the `GPTQConfig` class).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25500/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25500",
"html_url": "https://github.com/huggingface/transformers/pull/25500",
"diff_url": "https://github.com/huggingface/transformers/pull/25500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25500.patch",
"merged_at": 1692027819000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25499
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25499/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25499/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25499/events
|
https://github.com/huggingface/transformers/issues/25499
| 1,849,843,893 |
I_kwDOCUB6oc5uQmC1
| 25,499 |
CUDA out of memory
|
{
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please provide a full reproducer and a reason why this all should fit in the 16GB GPU you have available.",
"> Please provide a full reproducer and a reason why this all should fit in the 16GB GPU you have available.\r\n\r\n@sgugger Here is the full code(i hope you got the link shared)\r\nWhile going over lmsys repo i found that they are still doing research on stablevicuna + qlora,... i tried loraconfig, however Loraconfig target_variables do not work. I tried PromptConfig since i was working on Human/bot. Please let me know if you have any further question sor concerns ",
"Hello, could you please reshare the minimal reproducer: code, command you are using to launch the training, the hardware as well as the versions of PyTorch, Transformers, Accelerate and PEFT?",
"> Hello, could you please reshare the minimal reproducer: code, command you are using to launch the training, the hardware as well as the versions of PyTorch, Transformers, Accelerate and PEFT?\r\n\r\nThanks for your response. Here is the colab notebook: https://colab.research.google.com/drive/1By1tOO6HE5Oopj2prr3tkDduewDFNpZu?usp=sharing @pacman100 @sgugger ",
"> > Hello, could you please reshare the minimal reproducer: code, command you are using to launch the training, the hardware as well as the versions of PyTorch, Transformers, Accelerate and PEFT?\r\n> \r\n> Thanks for your response. Here is the colab notebook: https://colab.research.google.com/drive/1By1tOO6HE5Oopj2prr3tkDduewDFNpZu?usp=sharing @pacman100 @sgugger\r\n\r\nAny updates @pacman100 @sgugger ",
"I think the best one for this issue would be @SunMarc as the user is trying to use AutoGPTQ along with PEFT Prompt Tuning.\r\n\r\nWhen trying it on Colab with T4 GPU, I am getting below error which is probably related to the Flash Attention:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-20-e3a673c6a851>](https://localhost:8080/#) in <cell line: 38>()\r\n 36 # print(\"\\n If there's a warning about missing keys above, please disregard :)\")\r\n 37 \r\n---> 38 trainer.train()\r\n 39 gc.collect()\r\n 40 torch.cuda.empty_cache()\r\n\r\n5 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1554 hf_hub_utils.enable_progress_bars()\r\n 1555 else:\r\n-> 1556 return inner_training_loop(\r\n 1557 args=args,\r\n 1558 resume_from_checkpoint=resume_from_checkpoint,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1870 \r\n 1871 with self.accelerator.accumulate(model):\r\n-> 1872 tr_loss_step = self.training_step(model, inputs)\r\n 1873 \r\n 1874 if (\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)\r\n 2746 scaled_loss.backward()\r\n 2747 else:\r\n-> 2748 self.accelerator.backward(loss)\r\n 2749 \r\n 2750 return loss.detach() / self.args.gradient_accumulation_steps\r\n\r\n[/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py](https://localhost:8080/#) in backward(self, loss, **kwargs)\r\n 1984 self.scaler.scale(loss).backward(**kwargs)\r\n 1985 else:\r\n-> 1986 loss.backward(**kwargs)\r\n 1987 \r\n 1988 def set_trigger(self):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/_tensor.py](https://localhost:8080/#) in backward(self, gradient, retain_graph, create_graph, inputs)\r\n 490 inputs=inputs,\r\n 491 )\r\n--> 492 torch.autograd.backward(\r\n 493 self, gradient, retain_graph, create_graph, inputs=inputs\r\n 494 )\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py](https://localhost:8080/#) in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)\r\n 249 # some Python versions print out the first line of a multi-line function\r\n 250 # calls in the traceback and some print out the last line\r\n--> 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n 252 tensors,\r\n 253 grad_tensors_,\r\n\r\nRuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)\r\n```\r\n\r\nThe notebook that I am trying out is having few changes on top of what the user shared above: https://colab.research.google.com/drive/1UDoYUoSK-YJoFMwEzClhNxbeyBkv5aza?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,702 | 1,702 |
NONE
| null |
### System Info
Kaggle notebook
### Who can help?
@pacman100 @sgu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
training_args = transformers.TrainingArguments(
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    learning_rate=LEARNING_RATE,
    num_train_epochs=1,
    fp16=True,
    save_total_limit=4,
    logging_steps=25,
    output_dir="./outputs",
    save_strategy='epoch',
    optim="paged_adamw_8bit",
    lr_scheduler_type='cosine',
    warmup_ratio=0.05,
    report_to="wandb" if wandb else []
)
trainer = transformers.Trainer(
    model=model,
    train_dataset=data,
    args=training_args,
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
).__get__(model, type(model))
# if torch.__version__ >= "2" and sys.platform != "win32":
#     model = torch.compile(model)
print("\n If there's a warning about missing keys above, please disregard :)")
trainer.train()
gc.collect()
torch.cuda.empty_cache()
gc.collect()
model.save_pretrained(OUTPUT_DIR)
```
I got the following error:
```
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
---------------------------------------------------------------------------
OutOfMemoryError Traceback (most recent call last)
Cell In[20], line 36
31 # if torch.__version__ >= "2" and sys.platform != "win32":
32 # model = torch.compile(model)
34 print("\n If there's a warning about missing keys above, please disregard :)")
---> 36 trainer.train()
37 gc.collect()
38 torch.cuda.empty_cache()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1661, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1656 self.model_wrapped = self.model
1658 inner_training_loop = find_executable_batch_size(
1659 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1660 )
-> 1661 return inner_training_loop(
1662 args=args,
1663 resume_from_checkpoint=resume_from_checkpoint,
1664 trial=trial,
1665 ignore_keys_for_eval=ignore_keys_for_eval,
1666 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1946, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1943 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1945 with self.accelerator.accumulate(model):
-> 1946 tr_loss_step = self.training_step(model, inputs)
1948 if (
1949 args.logging_nan_inf_filter
1950 and not is_torch_tpu_available()
1951 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1952 ):
1953 # if loss is nan or inf simply add the average of previous logged losses
1954 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2753, in Trainer.training_step(self, model, inputs)
2750 return loss_mb.reduce_mean().detach().to(self.args.device)
2752 with self.compute_loss_context_manager():
-> 2753 loss = self.compute_loss(model, inputs)
2755 if self.args.n_gpu > 1:
2756 loss = loss.mean() # mean() to average on multi-gpu parallel training
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2778, in Trainer.compute_loss(self, model, inputs, return_outputs)
2776 else:
2777 labels = None
-> 2778 outputs = model(**inputs)
2779 # Save past state if it exists
2780 # TODO: this needs to be fixed and made cleaner later.
2781 if self.args.past_index >= 0:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:581, in convert_outputs_to_fp32.<locals>.forward(*args, **kwargs)
580 def forward(*args, **kwargs):
--> 581 return model_forward(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:569, in ConvertOutputsToFp32.__call__(self, *args, **kwargs)
568 def __call__(self, *args, **kwargs):
--> 569 return convert_to_fp32(self.model_forward(*args, **kwargs))
File /opt/conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py:14, in autocast_decorator.<locals>.decorate_autocast(*args, **kwargs)
11 @functools.wraps(func)
12 def decorate_autocast(*args, **kwargs):
13 with autocast_instance:
---> 14 return func(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/peft/peft_model.py:968, in PeftModelForCausalLM.forward(self, input_ids, attention_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict, **kwargs)
966 prompts = prompts.to(inputs_embeds.dtype)
967 inputs_embeds = torch.cat((prompts, inputs_embeds), dim=1)
--> 968 return self.base_model(inputs_embeds=inputs_embeds, **kwargs)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
685 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
687 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 688 outputs = self.model(
689 input_ids=input_ids,
690 attention_mask=attention_mask,
691 position_ids=position_ids,
692 past_key_values=past_key_values,
693 inputs_embeds=inputs_embeds,
694 use_cache=use_cache,
695 output_attentions=output_attentions,
696 output_hidden_states=output_hidden_states,
697 return_dict=return_dict,
698 )
700 hidden_states = outputs[0]
701 logits = self.lm_head(hidden_states)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:578, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
570 layer_outputs = torch.utils.checkpoint.checkpoint(
571 create_custom_forward(decoder_layer),
572 hidden_states,
(...)
575 None,
576 )
577 else:
--> 578 layer_outputs = decoder_layer(
579 hidden_states,
580 attention_mask=attention_mask,
581 position_ids=position_ids,
582 past_key_value=past_key_value,
583 output_attentions=output_attentions,
584 use_cache=use_cache,
585 )
587 hidden_states = layer_outputs[0]
589 if use_cache:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:292, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)
289 hidden_states = self.input_layernorm(hidden_states)
291 # Self Attention
--> 292 hidden_states, self_attn_weights, present_key_value = self.self_attn(
293 hidden_states=hidden_states,
294 attention_mask=attention_mask,
295 position_ids=position_ids,
296 past_key_value=past_key_value,
297 output_attentions=output_attentions,
298 use_cache=use_cache,
299 )
300 hidden_states = residual + hidden_states
302 # Fully Connected
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:212, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)
208 value_states = torch.cat([past_key_value[1], value_states], dim=2)
210 past_key_value = (key_states, value_states) if use_cache else None
--> 212 attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
214 if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
215 raise ValueError(
216 f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
217 f" {attn_weights.size()}"
218 )
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.26 GiB (GPU 0; 15.90 GiB total capacity; 13.60 GiB already allocated; 877.75 MiB free; 14.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Expected behavior
I would like the model to train, but Vicuna does not support QLoRA... I am using PromptConfig + Vicuna.
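For what it's worth, a hedged sketch of the memory-saving knobs that are usually tried first on a 16 GB card — the allocator hint comes straight from the error message above; the rest are illustrative assumptions, not a confirmed fix for this setup:
```
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # must be set before CUDA is initialised

import transformers

training_args = transformers.TrainingArguments(
    per_device_train_batch_size=1,    # smaller micro batch
    gradient_accumulation_steps=16,   # keep the effective batch size
    gradient_checkpointing=True,      # trade compute for memory
    fp16=True,
    output_dir="./outputs",
)
```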
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25499/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25498
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25498/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25498/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25498/events
|
https://github.com/huggingface/transformers/pull/25498
| 1,849,790,803 |
PR_kwDOCUB6oc5X4jI6
| 25,498 |
🌐 [i18n-KO] Translated `add_new_pipeline.md` to Korean
|
{
"login": "heuristicwave",
"id": 31366038,
"node_id": "MDQ6VXNlcjMxMzY2MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/31366038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heuristicwave",
"html_url": "https://github.com/heuristicwave",
"followers_url": "https://api.github.com/users/heuristicwave/followers",
"following_url": "https://api.github.com/users/heuristicwave/following{/other_user}",
"gists_url": "https://api.github.com/users/heuristicwave/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heuristicwave/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heuristicwave/subscriptions",
"organizations_url": "https://api.github.com/users/heuristicwave/orgs",
"repos_url": "https://api.github.com/users/heuristicwave/repos",
"events_url": "https://api.github.com/users/heuristicwave/events{/privacy}",
"received_events_url": "https://api.github.com/users/heuristicwave/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25498). All of your documentation changes will be reflected on that endpoint.",
"May you please review this PR @stevhliu ? cc: @sgugger , @ArthurZucker, @eunseojo\r\nThank you so much for your help!"
] | 1,692 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
<!-- Please use "🌐 [i18n-KO] Translated `add_new_pipeline.md` to Korean" as the PR title! -->
# What does this PR do?
Translated the `add_new_pipeline.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @mjk0618, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25498/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25498/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25498",
"html_url": "https://github.com/huggingface/transformers/pull/25498",
"diff_url": "https://github.com/huggingface/transformers/pull/25498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25498.patch",
"merged_at": 1693323524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25497
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25497/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25497/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25497/events
|
https://github.com/huggingface/transformers/pull/25497
| 1,849,740,618 |
PR_kwDOCUB6oc5X4YLs
| 25,497 |
MaskFormer post_process_instance_segmentation bug fix: convert outside of loop
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi,\r\nI am using transformers Version: 4.32.0.dev0, \r\ndo you have an estimated time to release a version with this fix?\r\nthanks",
"@MayChi22 We release a new version roughly every month, so there will likely be a new release in a few weeks. "
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Resolves a bug in the post-processing logic in MaskFormer and Mask2Former.
When `return_coco_annotation` was set to `True`, the generated segmentation map was converted to RLE before iteration over all of the queries had finished. This broke the assumption that `segmentation` is an array of the same shape as `pred_masks` [here](https://github.com/huggingface/transformers/blob/e97deca9a3f4ddf2a6a44405ed928067d7b729f3/src/transformers/models/maskformer/image_processing_maskformer.py#L1073) inside the for loop.
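A minimal sketch of the corrected control flow with toy tensors (the names and the run-length helper are simplified stand-ins, not the actual image processor code):
```
import torch

num_queries, height, width = 3, 4, 4                  # hypothetical sizes
pred_masks = torch.rand(num_queries, height, width) > 0.5
return_coco_annotation = True

segmentation = torch.full((height, width), -1.0)       # dense map, same spatial shape as pred_masks
segments = []
for k in range(num_queries):
    segmentation[pred_masks[k]] = k                    # only valid while `segmentation` is still dense
    segments.append({"id": k})

# the fix: convert to run-length encoding once, after the loop over all queries has finished
if return_coco_annotation:
    _, counts = torch.unique_consecutive(segmentation.flatten(), return_counts=True)
    segmentation = counts.tolist()                     # simplified stand-in for the RLE helper
```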
Fixes #25486
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25497/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25497",
"html_url": "https://github.com/huggingface/transformers/pull/25497",
"diff_url": "https://github.com/huggingface/transformers/pull/25497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25497.patch",
"merged_at": 1692025257000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25496
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25496/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25496/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25496/events
|
https://github.com/huggingface/transformers/pull/25496
| 1,849,727,032 |
PR_kwDOCUB6oc5X4VWB
| 25,496 |
Remove logging code in TF Longformer that fails to compile
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There are 6 occurrences of `if (...) padding_len > 0` in the source file [modeling_tf_longformer.py](https://github.com/huggingface/transformers/pull/25496/files#diff-782b222e9d393fe6750cf8e4cd870bcf3748a92ade5086e518b4d716a80080f8). I recommend fixing them all.",
"@rdisipio In most cases, TF's `autograph` should correctly recognize and compile those. I believe (though I'm not certain) that the issue arises here because the effect of the conditional is purely logging, without any TF code, and so autograph does not correctly convert that to TF graph operations.\r\n\r\nIf you encounter issues with the other lines, though, then I'm wrong - please reopen the issue and I'll fix it!",
"it works now! also thanks for the explanation, I'm not really familiar with what `autograph` does behind the scenes.\r\n\r\nCheers,\r\nR.",
"No-one is familiar with what `autograph` does behind the scenes, lol. If anyone is reading this and can give me a full explanation of why the conditional failed to compile here but the others succeed, let me know!"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
This PR fixes an issue where one of the logging blocks in `TFLongformer` was hidden behind a conditional that didn't play nicely with TF compilation. The issue is only triggered in certain compilation paths, which is why our tests didn't pick this up.
Fixes #25418
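A minimal sketch of the pattern involved (illustrative, not the actual Longformer code): inside a `tf.function`, a Python `if` over a dynamic tensor whose body contains only logging — no TF ops — is the kind of branch autograph can fail to convert, while keeping the graph-side work unconditional compiles fine because padding by zero is a no-op.
```
import tensorflow as tf

@tf.function
def pad_to_window(input_ids, attention_window=4):
    seq_len = tf.shape(input_ids)[1]
    padding_len = (attention_window - seq_len % attention_window) % attention_window
    # The removed code looked roughly like `if padding_len > 0: logger.warning(...)`,
    # a data-dependent branch with nothing but logging inside it.
    return tf.pad(input_ids, [[0, 0], [0, padding_len]])

print(pad_to_window(tf.ones((1, 6), dtype=tf.int32)))
```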
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25496/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25496",
"html_url": "https://github.com/huggingface/transformers/pull/25496",
"diff_url": "https://github.com/huggingface/transformers/pull/25496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25496.patch",
"merged_at": 1692019335000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25495
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25495/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25495/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25495/events
|
https://github.com/huggingface/transformers/pull/25495
| 1,849,712,912 |
PR_kwDOCUB6oc5X4SPZ
| 25,495 |
[DO NOT MERGE] testing out `tokenizers==0.13.4.rc1`.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ydshieh FYI.\r\n\r\nCurrent failure seem all linked to small dummy tokenizers. I'll look into the errors and report back (not sure why stride is so much used though)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25495). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
testing out `tokenizers==0.13.4.rc1`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25495/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25495",
"html_url": "https://github.com/huggingface/transformers/pull/25495",
"diff_url": "https://github.com/huggingface/transformers/pull/25495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25495.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25494
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25494/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25494/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25494/events
|
https://github.com/huggingface/transformers/issues/25494
| 1,849,607,837 |
I_kwDOCUB6oc5uPsad
| 25,494 |
Error in LSTM-based RNN training: "IndexError: index out of range" during DataLoader iteration
|
{
"login": "capdescx",
"id": 133493021,
"node_id": "U_kgDOB_TxHQ",
"avatar_url": "https://avatars.githubusercontent.com/u/133493021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capdescx",
"html_url": "https://github.com/capdescx",
"followers_url": "https://api.github.com/users/capdescx/followers",
"following_url": "https://api.github.com/users/capdescx/following{/other_user}",
"gists_url": "https://api.github.com/users/capdescx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capdescx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capdescx/subscriptions",
"organizations_url": "https://api.github.com/users/capdescx/orgs",
"repos_url": "https://api.github.com/users/capdescx/repos",
"events_url": "https://api.github.com/users/capdescx/events{/privacy}",
"received_events_url": "https://api.github.com/users/capdescx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'm not too sure why you are opening an issue here as this has nothing to do with the Transformers library. You should try the PyTorch forums.",
"@sgugger sorry and thank you",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
I'm currently working on training an LSTM-based RNN using PyTorch and encountering an error during the training loop. I've tried various solutions to resolve this issue, but I'm still facing the same error message. Here's the relevant code snippet:
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from torch.nn.utils.rnn import pad_sequence
import pandas as pd
import json

class RNNModel(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super(RNNModel, self).__init__()
        # Define your layers here
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.rnn = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        embedded = self.embedding(x)
        output, (h_n, c_n) = self.rnn(embedded)
        return self.fc(output)

with open('cleaned_vocab.json', 'r', encoding='utf-8') as vocab_file:
    vocab = json.load(vocab_file)

class CustomDataset(Dataset):
    def __init__(self, csv_path, max_seq_length):
        self.data = pd.read_csv(csv_path)
        self.max_seq_length = max_seq_length

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        text = self.data.loc[idx, 'text']
        tokens = [int(token) for token in text.split()]
        if len(tokens) > self.max_seq_length:
            tokens = tokens[:self.max_seq_length]
        padded_sequence = tokens + [0] * (self.max_seq_length - len(tokens))
        input_sequence = torch.tensor(padded_sequence[:-1])  # Input sequence without last token
        target_sequence = torch.tensor(padded_sequence[1:])  # Target sequence without first token
        return input_sequence, target_sequence

class CustomCollate:
    def __init__(self, pad_idx):
        self.pad_idx = pad_idx

    def __call__(self, batch):
        input_seqs, target_seqs = zip(*batch)
        padded_input_seqs = pad_sequence(input_seqs, batch_first=True, padding_value=self.pad_idx)
        padded_target_seqs = pad_sequence(target_seqs, batch_first=True, padding_value=self.pad_idx)
        return padded_input_seqs, padded_target_seqs

max_sequence_length = 30  # Define your desired maximum sequence length
dataset = CustomDataset('processed_data.csv', max_sequence_length)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=CustomCollate(0))

vocab_size = len(vocab)
embedding_dim = 128
hidden_dim = 256
rnn_model = RNNModel(vocab_size, embedding_dim, hidden_dim)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(rnn_model.parameters(), lr=0.001)

num_epochs = 10
for epoch in range(num_epochs):
    for input_batch, target_batch in dataloader:
        optimizer.zero_grad()
        output = rnn_model(input_batch)
        # Calculate loss and backpropagate
        loss = criterion(output.transpose(1, 2), target_batch)
        loss.backward()
        optimizer.step()

torch.save(rnn_model.state_dict(), 'rnn_model.pth')
print("Training completed.")
```
I've verified my CSV file, adjusted the 'max_seq_length' parameter to ensure it is appropriate for my data, and double-checked the data pre-processing steps, including padding and formatting.
Any suggestions on how to further debug and resolve this issue would be greatly appreciated. Thank you in advance!
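Not part of the original question, but one common cause of this exact message is a token id that is out of range for `nn.Embedding` (i.e. `>= vocab_size` or negative). A quick sanity check, assuming the same files and column name as the snippet above:
```
import json
import pandas as pd

with open('cleaned_vocab.json', 'r', encoding='utf-8') as f:
    vocab_size = len(json.load(f))

data = pd.read_csv('processed_data.csv')
token_ids = [int(tok) for text in data['text'] for tok in str(text).split()]
print(vocab_size, min(token_ids), max(token_ids))  # any id >= vocab_size (or < 0) crashes the embedding lookup
```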
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25494/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25493
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25493/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25493/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25493/events
|
https://github.com/huggingface/transformers/pull/25493
| 1,849,437,260 |
PR_kwDOCUB6oc5X3U9t
| 25,493 |
Set can_generate for SpeechT5ForTextToSpeech
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Following the [discussion](https://github.com/huggingface/transformers/pull/24952#discussion_r1293224895) on checking whether the generate or forward method will be used in the TTS pipeline, it makes sense to set `can_generate=True` for `SpeechT5ForTextToSpeech`, so that it's easier to check if it can generate.
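A minimal sketch of the kind of check this enables (the checkpoint name is illustrative and the pipeline's actual branching may differ):
```
from transformers import SpeechT5ForTextToSpeech

model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")

if model.can_generate():
    print("route text-to-speech through `generate()`")
else:
    print("call the model's `forward()` directly")
```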
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
## Who can review?
Hey @sanchit-gandhi and @sgugger, what do you think of this?
Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25493/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25493",
"html_url": "https://github.com/huggingface/transformers/pull/25493",
"diff_url": "https://github.com/huggingface/transformers/pull/25493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25493.patch",
"merged_at": 1692024107000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25492
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25492/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25492/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25492/events
|
https://github.com/huggingface/transformers/issues/25492
| 1,849,400,134 |
I_kwDOCUB6oc5uO5tG
| 25,492 |
"LayerNormKernelImpl" not implemented for 'Half' when using bloom-560m with int8, device cpu
|
{
"login": "Nguyendat-bit",
"id": 34463182,
"node_id": "MDQ6VXNlcjM0NDYzMTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/34463182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nguyendat-bit",
"html_url": "https://github.com/Nguyendat-bit",
"followers_url": "https://api.github.com/users/Nguyendat-bit/followers",
"following_url": "https://api.github.com/users/Nguyendat-bit/following{/other_user}",
"gists_url": "https://api.github.com/users/Nguyendat-bit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nguyendat-bit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nguyendat-bit/subscriptions",
"organizations_url": "https://api.github.com/users/Nguyendat-bit/orgs",
"repos_url": "https://api.github.com/users/Nguyendat-bit/repos",
"events_url": "https://api.github.com/users/Nguyendat-bit/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nguyendat-bit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Output exceeds the [size limit](command:workbench.action.openSettings?%5B%22notebook.output.textLineLimit%22%5D). Open the full output data [in a text editor](command:workbench.action.openLargeOutput?feedb4df-81a7-4c54-ab34-8c946d0d421b)---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[12], line 1\r\n----> 1 generate(\"1 + 1 ? \")\r\n\r\nCell In[9], line 7, in generate(text)\r\n 2 with torch.no_grad():\r\n 3 encode= tokenizer.batch_encode_plus(list(map(lambda x: x.format(qus= text[:256]), prompts)), \r\n 4 padding= 'longest',\r\n 5 return_tensors= 'pt')\r\n----> 7 output_sequences= model.generate(input_ids= encode['input_ids'], attention_mask= encode['attention_mask'], max_new_tokens= 20)\r\n 8 return tokenizer.batch_decode(output_sequences, skip_special_tokens= True)\r\n\r\nFile [~/miniconda3/envs/cls-lm/lib/python3.10/site-packages/torch/utils/_contextlib.py:115](https://vscode-remote+ssh-002dremote-002b183-002e81-002e34-002e200.vscode-resource.vscode-cdn.net/home/tungnk/cls-lm/Doamin-news/experiment/~/miniconda3/envs/cls-lm/lib/python3.10/site-packages/torch/utils/_contextlib.py:115), in context_decorator..decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile [~/miniconda3/envs/cls-lm/lib/python3.10/site-packages/transformers/generation/utils.py:1538](https://vscode-remote+ssh-002dremote-002b183-002e81-002e34-002e200.vscode-resource.vscode-cdn.net/home/tungnk/cls-lm/Doamin-news/experiment/~/miniconda3/envs/cls-lm/lib/python3.10/site-packages/transformers/generation/utils.py:1538), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1532 raise ValueError(\r\n 1533 \"num_return_sequences has to be 1 when doing greedy search, \"\r\n 1534 f\"but is {generation_config.num_return_sequences}.\"\r\n 1535 )\r\n 1537 # 11. run greedy search\r\n...\r\n 2543 layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, bias=bias, eps=eps\r\n 2544 )\r\n-> 2545 return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\n\r\nRuntimeError: \"LayerNormKernelImpl\" not implemented for 'Half'",
"The 8-bit quantization with `load_in_8bit=True` only works on the GPU. You can't use it (or in general flloat16) on the CPU.",
"> The 8-bit quantization with `load_in_8bit=True` only works on the GPU. You can't use it (or in general flloat16) on the CPU.\r\n\r\nThanks you "
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0.dev20230810+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Hi @ArthurZucker, I'm encountering an error: 'LayerNormKernelImpl' not implemented for 'Half' when generating text with the Bloom-560m + int8 model, device=cpu. However, when I run it on CUDA, the error does not occur. Could you help me, please?


Thank you
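For reference, a minimal sketch of the setup being described (an assumption on the exact code, since the original snippet is only shared as screenshots). Note that 8-bit quantization via bitsandbytes only runs on CUDA devices, which matches the behavior reported above (works on CUDA, fails on CPU):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit requires a GPU; on CPU, drop load_in_8bit/float16 and load in full precision.
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")

inputs = tokenizer("1 + 1 ?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```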
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25492/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25491
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25491/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25491/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25491/events
|
https://github.com/huggingface/transformers/issues/25491
| 1,849,311,280 |
I_kwDOCUB6oc5uOkAw
| 25,491 |
expected scalar type Float but found Half
|
{
"login": "Aniruddha-JU",
"id": 36475622,
"node_id": "MDQ6VXNlcjM2NDc1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/36475622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aniruddha-JU",
"html_url": "https://github.com/Aniruddha-JU",
"followers_url": "https://api.github.com/users/Aniruddha-JU/followers",
"following_url": "https://api.github.com/users/Aniruddha-JU/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniruddha-JU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aniruddha-JU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniruddha-JU/subscriptions",
"organizations_url": "https://api.github.com/users/Aniruddha-JU/orgs",
"repos_url": "https://api.github.com/users/Aniruddha-JU/repos",
"events_url": "https://api.github.com/users/Aniruddha-JU/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aniruddha-JU/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Model successfully loaded, but during model.generate line throwing the error.\r\n",
"Please copy-paste the whole traceback, as is asked in the issue templare.\r\ncc @ArthurZucker and @younesbelkada ",
"@ArthurZucker @younesbelkada @sgugger \r\n```python\r\nRuntimeError Traceback (most recent call last)\r\nCell In[35], line 1\r\n----> 1 outputs = model.generate(input_ids)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1486, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)\r\n 1478 logger.warning(\r\n 1479 \"A decoder-only architecture is being used, but right-padding was detected! For correct \"\r\n 1480 \"generation results, please set `padding_side='left'` when initializing the tokenizer.\"\r\n 1481 )\r\n 1483 if self.config.is_encoder_decoder and \"encoder_outputs\" not in model_kwargs:\r\n 1484 # if model is encoder decoder encoder_outputs are created\r\n 1485 # and added to `model_kwargs`\r\n-> 1486 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(\r\n 1487 inputs_tensor, model_kwargs, model_input_name\r\n 1488 )\r\n 1490 # 5. Prepare `input_ids` which will be used for auto-regressive generation\r\n 1491 if self.config.is_encoder_decoder:\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:655, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)\r\n 653 encoder_kwargs[\"return_dict\"] = True\r\n 654 encoder_kwargs[model_input_name] = inputs_tensor\r\n--> 655 model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(**encoder_kwargs)\r\n 657 return model_kwargs\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1502, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1501 else:\r\n-> 1502 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1511, in Module._call_impl(self, *args, **kwargs)\r\n 1506 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1507 # this function, and just call forward.\r\n 1508 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1509 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1510 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1511 return forward_call(*args, **kwargs)\r\n 1512 # Do not call functions when jit is used\r\n 1513 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py:1123, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1110 layer_outputs = checkpoint(\r\n 1111 create_custom_forward(layer_module),\r\n 1112 hidden_states,\r\n (...)\r\n 1120 None, # past_key_value is always None with gradient checkpointing\r\n 1121 )\r\n 1122 else:\r\n-> 1123 layer_outputs = layer_module(\r\n 1124 hidden_states,\r\n 1125 attention_mask=extended_attention_mask,\r\n 1126 
position_bias=position_bias,\r\n 1127 encoder_hidden_states=encoder_hidden_states,\r\n 1128 encoder_attention_mask=encoder_extended_attention_mask,\r\n 1129 encoder_decoder_position_bias=encoder_decoder_position_bias,\r\n 1130 layer_head_mask=layer_head_mask,\r\n 1131 cross_attn_layer_head_mask=cross_attn_layer_head_mask,\r\n 1132 past_key_value=past_key_value,\r\n 1133 use_cache=use_cache,\r\n 1134 output_attentions=output_attentions,\r\n 1135 )\r\n 1137 # layer_outputs is a tuple with:\r\n 1138 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)\r\n 1139 if use_cache is False:\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1502, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1501 else:\r\n-> 1502 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1511, in Module._call_impl(self, *args, **kwargs)\r\n 1506 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1507 # this function, and just call forward.\r\n 1508 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1509 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1510 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1511 return forward_call(*args, **kwargs)\r\n 1512 # Do not call functions when jit is used\r\n 1513 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py:695, in T5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict)\r\n 692 else:\r\n 693 self_attn_past_key_value, cross_attn_past_key_value = None, None\r\n--> 695 self_attention_outputs = self.layer[0](\r\n 696 hidden_states,\r\n 697 attention_mask=attention_mask,\r\n 698 position_bias=position_bias,\r\n 699 layer_head_mask=layer_head_mask,\r\n 700 past_key_value=self_attn_past_key_value,\r\n 701 use_cache=use_cache,\r\n 702 output_attentions=output_attentions,\r\n 703 )\r\n 704 hidden_states, present_key_value_state = self_attention_outputs[:2]\r\n 705 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1502, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1501 else:\r\n-> 1502 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1511, in Module._call_impl(self, *args, **kwargs)\r\n 1506 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1507 # this function, and just call forward.\r\n 1508 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1509 
or _global_backward_pre_hooks or _global_backward_hooks\r\n 1510 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1511 return forward_call(*args, **kwargs)\r\n 1512 # Do not call functions when jit is used\r\n 1513 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py:601, in T5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions)\r\n 591 def forward(\r\n 592 self,\r\n 593 hidden_states,\r\n (...)\r\n 599 output_attentions=False,\r\n 600 ):\r\n--> 601 normed_hidden_states = self.layer_norm(hidden_states)\r\n 602 attention_output = self.SelfAttention(\r\n 603 normed_hidden_states,\r\n 604 mask=attention_mask,\r\n (...)\r\n 609 output_attentions=output_attentions,\r\n 610 )\r\n 611 hidden_states = hidden_states + self.dropout(attention_output[0])\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1502, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1501 else:\r\n-> 1502 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1511, in Module._call_impl(self, *args, **kwargs)\r\n 1506 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1507 # this function, and just call forward.\r\n 1508 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1509 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1510 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1511 return forward_call(*args, **kwargs)\r\n 1512 # Do not call functions when jit is used\r\n 1513 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py:386, in FusedRMSNorm.forward(self, input)\r\n 383 return manual_rms_norm(input, self.normalized_shape, self.weight, self.eps)\r\n 385 if self.elementwise_affine:\r\n--> 386 return fused_rms_norm_affine(input, self.weight, self.normalized_shape, self.eps)\r\n 387 else:\r\n 388 return fused_rms_norm(input, self.normalized_shape, self.eps)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py:189, in fused_rms_norm_affine(input, weight, normalized_shape, eps)\r\n 187 args = _cast_if_autocast_enabled(input, weight, normalized_shape, eps)\r\n 188 with torch.cuda.amp.autocast(enabled=False):\r\n--> 189 return FusedRMSNormAffineFunction.apply(*args)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/autograd/function.py:506, in Function.apply(cls, *args, **kwargs)\r\n 503 if not torch._C._are_functorch_transforms_active():\r\n 504 # See NOTE: [functorch vjp and autograd interaction]\r\n 505 args = 
_functorch.utils.unwrap_dead_wrappers(args)\r\n--> 506 return super().apply(*args, **kwargs) # type: ignore[misc]\r\n 508 if cls.setup_context == _SingleLevelFunction.setup_context:\r\n 509 raise RuntimeError(\r\n 510 'In order to use an autograd.Function with functorch transforms '\r\n 511 '(vmap, grad, jvp, jacrev, ...), it must override the setup_context '\r\n 512 'staticmethod. For more details, please see '\r\n 513 '[https://pytorch.org/docs/master/notes/extending.func.html](https://pytorch.org/docs/master/notes/extending.func.html%3C/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">')\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py:69, in FusedRMSNormAffineFunction.forward(ctx, input, weight, normalized_shape, eps)\r\n 67 input_ = input.contiguous()\r\n 68 weight_ = weight.contiguous()\r\n---> 69 output, invvar = fused_layer_norm_cuda.rms_forward_affine(\r\n 70 input_, ctx.normalized_shape, weight_, ctx.eps)\r\n 71 ctx.save_for_backward(input_, weight_, invvar)\r\n 72 return output\r\n\r\nRuntimeError: expected scalar type Float but found Half\r\n```",
"Hi @Aniruddha-JU \r\nThanks for the issue, this is know that T5 + apex + int8 does currently not work \r\nCheck out the related issues https://github.com/huggingface/transformers/issues/21656 / https://github.com/huggingface/transformers/issues/21391\r\nThe fix now is to uninstall apex from your env",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
Install the latest transformers, accelerate, and bitsandbytes.
### Who can help?
from transformers import T5Tokenizer, T5ForConditionalGeneration
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
### Expected behavior
The error should be resolved and generation should succeed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25491/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25490
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25490/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25490/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25490/events
|
https://github.com/huggingface/transformers/issues/25490
| 1,849,303,537 |
I_kwDOCUB6oc5uOiHx
| 25,490 |
Owlvit's Image Guided Detection Broken
|
{
"login": "flavourabbit",
"id": 45381460,
"node_id": "MDQ6VXNlcjQ1MzgxNDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/45381460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flavourabbit",
"html_url": "https://github.com/flavourabbit",
"followers_url": "https://api.github.com/users/flavourabbit/followers",
"following_url": "https://api.github.com/users/flavourabbit/following{/other_user}",
"gists_url": "https://api.github.com/users/flavourabbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flavourabbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flavourabbit/subscriptions",
"organizations_url": "https://api.github.com/users/flavourabbit/orgs",
"repos_url": "https://api.github.com/users/flavourabbit/repos",
"events_url": "https://api.github.com/users/flavourabbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/flavourabbit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"```python\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import AutoProcessor, OwlViTForObjectDetection\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"google/owlvit-base-patch16\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch16\")\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n# query_url = \"http://images.cocodataset.org/val2017/000000001675.jpg\"\r\n# query_image = Image.open(requests.get(query_url, stream=True).raw)\r\nquery_image = skimage.data.astronaut()\r\nquery_image = Image.fromarray(np.uint8(query_image)).convert(\"RGB\")\r\n\r\ninputs = processor(images=image, query_images=query_image, return_tensors=\"pt\")\r\nwith torch.no_grad():\r\n outputs = model.image_guided_detection(**inputs)\r\n# Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\ntarget_sizes = torch.Tensor([image.size[::-1]])\r\n# Convert outputs (bounding boxes and class logits) to COCO API\r\nresults = processor.post_process_image_guided_detection(\r\n outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes\r\n)\r\ni = 0 # Retrieve predictions for the first image\r\nboxes, scores = results[i][\"boxes\"], results[i][\"scores\"]\r\nfor box, score in zip(boxes, scores):\r\n box = [round(i, 2) for i in box.tolist()]\r\n print(f\"Detected similar object with confidence {round(score.item(), 3)} at location {box}\")\r\n```\r\n\r\nThe above official example code does same.\r\n(Target: cat image, query: astronaut image)",
"It seems that post_process_image_guided_detection normalizes `scores` where max = 1.0, so that there no threshold logic working. That means where there is object or not, the model will detect sth. However, I don't think this fits to the definition of Object Detection. ",
"Hi @flavourabbit, thanks for reporting this issue! \r\n\r\ncc @rafaelpadilla ",
"Hi @flavourabbit ,\r\n\r\nThanks for bringing this to our attention! 🙌 You've made a great observation.\r\n\r\nJust as you highlighted, the detected objects in the target image can differ based on the query text/image and the threshold we apply.\r\n\r\nIt's interesting to note that the official implementation of the paper \"[Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230)\" showcases the same behavior. You can check out the official code [here](https://github.com/google-research/scenic/blob/56b770da566520ec1022fff4d5a1922f98a2d11f/scenic/projects/owl_vit/notebooks/interactive.py#L211). The scores are scaled between 0.1 and 1 and then we apply the threshold. \r\n\r\nLet me break it down a bit:\r\n\r\nFor the query image `000000001675.jpg` (featuring a \"cat\"), the detections' scores before rescaling in descending order look like:\r\n`[0.8556, 0.7450, 0.1511, ...]`\r\nPost-rescaling, they become: `[1.000, 0.8564, 0.0850, ...]`\r\nPost-thresholding, only the top two scores (`1.000` and `0.8564)` remain. And so, we end up having their corresponding bounding boxes around both cats.\r\n\r\nNow, if we switch to the query image of the astronaut, the pre-rescale scores are `[0.4259, 0.0707, 0.0441, ...]`\r\nAfter rescaling, they become: `[1.0000, 0.0734, 0.0039, ...]`\r\nAfter this step, only the top score (`1.000`) is retained.\r\n\r\nThis means the peak score will always adjust to `1.000`. That's why you're seeing a score of `tensor([1.0000])` in all these examples.\r\n\r\nYou've made a valid point about the potential for confusion using the query image in the colab example. 👍\r\n\r\nHope this clears things up! If you have any other questions or feedback, feel free to share. 😊\r\n\r\n\r\n\r\n\r\n"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
Colab env. for better reproducibility (transformers==4.32.0.dev0)
I ran the example several times but it still gives wrong results.
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb
Dear @amyeroberts , @ArthurZucker
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduction is easy:
1) Just run the following Colab Notebook to the bottom (https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)
The last cell of the notebook now shows only one bounding box
```python
results = processor.post_process_image_guided_detection(outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes)
boxes, scores = results[0]["boxes"], results[0]["scores"]
print(scores)
```
The above code shows weird output, tensor([1.0000])
2) Interestingly, if I change the query image to something irrelevant, like
```python
skimg = skimage.data.astronaut()
query_image = Image.fromarray(np.uint8(skimg)).convert("RGB")
# ... omitted lines
print(scores)
```
The above shows tensor([1.0000]) as well with a giant bbox.
### Expected behavior
The last cell of the notebook should show two bounding boxes, one for each cat in the image.
(finding two cat bboxes according to the query image)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25490/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25489
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25489/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25489/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25489/events
|
https://github.com/huggingface/transformers/issues/25489
| 1,849,163,126 |
I_kwDOCUB6oc5uN_12
| 25,489 |
Implement SuperPoint / SuperGlue
|
{
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"I could help here, I have worked with both of these models as well as LoFTR which is the current SOTA in image matching. I am new to Transformers contributions as well, but would love to contribute if any of the repo maintainers agree to the inclusion of these models",
"Hi @arkouda , I was about to use this model as an opportunity to contribute to the project, so I'll be working on the SuperGlue model implementation myself but feel free to take care of the LoFTR :smile: "
] | 1,691 | 1,692 | null |
NONE
| null |
### Model description
The SuperGlue network is a Graph Neural Network combined with an Optimal Matching layer that is trained to perform matching on two sets of sparse image features.
SuperGlue is built on top of the SuperPoint model, which detects the most interesting keypoints in an image. Given the keypoints of two different images, SuperGlue then performs the matching.
I noticed there were no image-matching models implemented in the transformers library, so I propose this as a first one. I have used it extensively in other projects and am new to transformers git contributions, so I am willing to implement it myself as a first contribution.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
SuperPoint and SuperGlue code and weights are available at https://github.com/magicleap/SuperGluePretrainedNetwork
The original paper of SuperPoint : https://arxiv.org/abs/1712.07629
The original paper of SuperGlue : https://arxiv.org/abs/1911.11763
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25489/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25488
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25488/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25488/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25488/events
|
https://github.com/huggingface/transformers/pull/25488
| 1,849,051,669 |
PR_kwDOCUB6oc5X1_0E
| 25,488 |
Add type hints to Blip2QFormer, BigBirdForQA and ConditionalDetr family models
|
{
"login": "nablabits",
"id": 33068707,
"node_id": "MDQ6VXNlcjMzMDY4NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33068707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nablabits",
"html_url": "https://github.com/nablabits",
"followers_url": "https://api.github.com/users/nablabits/followers",
"following_url": "https://api.github.com/users/nablabits/following{/other_user}",
"gists_url": "https://api.github.com/users/nablabits/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nablabits/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nablabits/subscriptions",
"organizations_url": "https://api.github.com/users/nablabits/orgs",
"repos_url": "https://api.github.com/users/nablabits/repos",
"events_url": "https://api.github.com/users/nablabits/events{/privacy}",
"received_events_url": "https://api.github.com/users/nablabits/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses a few type hints in https://github.com/huggingface/transformers/issues/16059
## Who can review?
@Rocketknight1 please
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25488/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25488",
"html_url": "https://github.com/huggingface/transformers/pull/25488",
"diff_url": "https://github.com/huggingface/transformers/pull/25488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25488.patch",
"merged_at": 1692020674000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25487
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25487/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25487/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25487/events
|
https://github.com/huggingface/transformers/issues/25487
| 1,849,004,278 |
I_kwDOCUB6oc5uNZD2
| 25,487 |
GPU memory usage increased continuously in validation
|
{
"login": "sdlee130",
"id": 49019184,
"node_id": "MDQ6VXNlcjQ5MDE5MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/49019184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sdlee130",
"html_url": "https://github.com/sdlee130",
"followers_url": "https://api.github.com/users/sdlee130/followers",
"following_url": "https://api.github.com/users/sdlee130/following{/other_user}",
"gists_url": "https://api.github.com/users/sdlee130/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sdlee130/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sdlee130/subscriptions",
"organizations_url": "https://api.github.com/users/sdlee130/orgs",
"repos_url": "https://api.github.com/users/sdlee130/repos",
"events_url": "https://api.github.com/users/sdlee130/events{/privacy}",
"received_events_url": "https://api.github.com/users/sdlee130/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Yes, the tensors are accumulated on the GPUs and you are asking for a lot of them with `output_attentions=True`, so it's perfectly normal to see a GPU memory increase. To manage your GPU RAM you will need to set `eval_accumulation_steps` so that the tensors accumulated are offloaded to the CPU every `eval_accumulation_steps` steps/",
"I already try to use `eval_accumulation_steps`. But when I used that, RAM(not GPU memory) usage increased continuously and then kernel was dead. ",
"And I have another question. In my knowledge, loss and logit also accumulate in gpu but why GPU memory usage dosen't change? I think it is strange that the GPU memory usage did not change during the validation process when `output_attentions=False`, but it did change when `output_attentions=True`.",
"Again with `output_attentions=True`, you are asking to get all the attentions outputs of all layers for all inputs. It will take a lot of memory, either on the GPU or on the CPU (if using `eval_accumulation_steps`). Do not ask the model to output those if you do not have the memory to store them.",
"Thank you for your answering"
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@sgugger
I was trying to finetune a model that inherits from BertForSequenceClassification.
```python
model = CustomForSequenceClassification.from_pretrained(model_name,
output_hidden_states=True,
return_dict=False,
num_labels=2,
)
```
`CustomForSequenceClassification` is a class that inherits from BertForSequenceClassification.
I used `return_dict=False` and `output_hidden_states=True` to get the outputs of all encoder layers.
But if I use `output_hidden_states=True`, GPU memory usage increases during the validation process.
This issue is a problem when training on a large dataset.
When I set `output_hidden_states` to False, the issue didn't occur.
I am using the GLUE dataset for training and validation.
The code below is the class that inherits from BertForSequenceClassification.
```python
class CustomForSequenceClassification(BertForSequenceClassification):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.config = config
self.bert = BertModel(config)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
self.dropout = nn.Dropout(classifier_dropout)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.post_init()
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]:
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
last_hidden_states = outputs[1]
all_hidden_states = outputs[2]
pooled_output = last_hidden_states
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
if self.config.problem_type == "regression":
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(logits, labels)
elif self.config.problem_type == "single_label_classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
elif self.config.problem_type == "multi_label_classification":
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits, labels)
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
#1. import
import math
from transformers import BertTokenizer, BertForSequenceClassification, AdamW
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from transformers import TrainingArguments, Trainer, TrainerCallback
from datasets import load_dataset
from torch import nn
import pdb
import ipdb
from typing import List, Optional, Tuple, Union
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers import BertModel, BertLayer
import math
import os
import warnings
import random
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
import torch
import torch.utils.checkpoint
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, matthews_corrcoef
from scipy.stats import pearsonr, spearmanr
# 2. load model, tokenizer, dataset, compute_metrics
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
train = load_dataset('glue', 'qqp', split='train')
valid = load_dataset('glue', 'qqp', split='validation')
class BERT_Dataset(torch.utils.data.Dataset):
def __init__(self, dataset1, dataset2, label, tokenizer):
self.dataset1 = dataset1
self.dataset2 = dataset2
self.label = label
self.tokenizer = tokenizer
def __getitem__(self, idx):
text1 = self.dataset1[idx]
text2 = self.dataset2[idx]
tokens = self.tokenizer(text1, text2,
max_length=128,
padding="max_length",
truncation=True,
)
tokens['label'] = torch.LongTensor([self.label[idx]])
return tokens
def __len__(self):
return len(self.label)
train_dataset = BERT_Dataset(train['question1'],
train['question2'],
train['label'],
tokenizer,
)
valid_dataset = BERT_Dataset(valid['question1'],
valid['question2'],
valid['label'],
tokenizer,
)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions[0], axis=1)
_, _, f1, _ = precision_recall_fscore_support(labels, predictions, average='binary')
acc = accuracy_score(labels, predictions)
return {
'f1': f1,
'accuracy': acc,
}
# 3. inherit from BertForSequenceClassification
class CustomForSequenceClassification(BertForSequenceClassification):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.config = config
self.bert = BertModel(config)
#self.attention_layer = BertLayer(config)
#self.pooler = BertPooler(config)
#self.attn = Attention(hidden_dim=config.hidden_size, method='dot')
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
self.dropout = nn.Dropout(classifier_dropout)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.post_init()
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]:
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
last_hidden_states = outputs[1]
all_hidden_states = outputs[2]
pooled_output = last_hidden_states
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
if self.config.problem_type == "regression":
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(logits, labels)
elif self.config.problem_type == "single_label_classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
elif self.config.problem_type == "multi_label_classification":
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits, labels)
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
# 4. generate model object
model = CustomForSequenceClassification.from_pretrained(model_name,
output_hidden_states=False,
return_dict=False,
num_labels=2,
)
# 5. set TrainerArguements
training_ars = TrainingArguments(
output_dir="./checkpoint",
num_train_epochs=20,
per_device_train_batch_size=128,
per_device_eval_batch_size=16,
learning_rate=1e-5,
weight_decay=0.01,
evaluation_strategy="epoch",
)
trainer = Trainer(
model=model,
args=training_ars,
train_dataset=train_dataset,
eval_dataset=valid_dataset,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
# 6. start training
trainer.train()
```
### Expected behavior
During the first validation pass, GPU memory usage will increase.
If the dataset is large, a CUDA OOM error will occur.
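A minimal sketch, assuming the per-layer outputs are not actually needed for the eval metrics, of the two usual mitigations: keep `output_hidden_states` off at evaluation time, or offload the accumulated prediction tensors to the CPU with `eval_accumulation_steps`:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./checkpoint",
    num_train_epochs=20,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",
    # Move the tensors accumulated during evaluation off the GPU every 8 steps
    # instead of keeping them all in GPU memory until the end of the eval loop.
    eval_accumulation_steps=8,
)
```
Note that offloading only shifts the pressure from GPU memory to host RAM, so if the hidden states are not used by `compute_metrics` it is cheaper to simply not request them.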
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25487/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25486
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25486/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25486/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25486/events
|
https://github.com/huggingface/transformers/issues/25486
| 1,848,970,146 |
I_kwDOCUB6oc5uNQui
| 25,486 |
Mask2Former post-processing RLE
|
{
"login": "vjsrinivas",
"id": 5075453,
"node_id": "MDQ6VXNlcjUwNzU0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5075453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vjsrinivas",
"html_url": "https://github.com/vjsrinivas",
"followers_url": "https://api.github.com/users/vjsrinivas/followers",
"following_url": "https://api.github.com/users/vjsrinivas/following{/other_user}",
"gists_url": "https://api.github.com/users/vjsrinivas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vjsrinivas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vjsrinivas/subscriptions",
"organizations_url": "https://api.github.com/users/vjsrinivas/orgs",
"repos_url": "https://api.github.com/users/vjsrinivas/repos",
"events_url": "https://api.github.com/users/vjsrinivas/events{/privacy}",
"received_events_url": "https://api.github.com/users/vjsrinivas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @vjsrinivas, thanks for reporting! \r\n\r\nI've opened #25497 which should resolve this issue",
"@amyeroberts thanks for the quick reply! Do we pip install from the github project for these kinds of hotfixes?",
"@vjsrinivas Yes, once the PR is merged in, you'll need to install from source to have the current changes in main. They will be included in the next version release. "
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts I was trying to finetune Mask2Former with my own custom dataset, but I ran into an error when calling the `Mask2FormerImageProcessor.post_process_instance_segmentation`. I'm getting the following error when I set `return_coco_annotation=True` and a relatively low confidence threshold:
```
segmentation[pred_masks[j] == 1] = current_segment_id
TypeError: only integer tensors of a single element can be converted to an index
```
Could the issue be that `convert_segmentation_to_rle` is called within the query loop rather than outside it (see https://github.com/huggingface/transformers/blob/0ebe7ae16076f727ac40c47f8f9167013c4596d8/src/transformers/models/mask2former/image_processing_mask2former.py#L1031)? The segmentation tensor turns into a `List[List]`, which might be causing the TypeError.
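For context, a simplified sketch of what a run-length encoding of a binary mask looks like (an illustration only; the library's `convert_segmentation_to_rle` differs in details such as scan order and handling of segment ids):
```python
import torch

def binary_mask_to_rle(mask: torch.Tensor) -> list:
    # Run lengths of alternating 0s and 1s, starting with the count of leading zeros.
    runs, current_value, count = [], 0, 0
    for value in mask.flatten().tolist():
        if value == current_value:
            count += 1
        else:
            runs.append(count)
            current_value, count = value, 1
    runs.append(count)
    return runs

mask = torch.tensor([[0, 0, 1, 1], [1, 0, 0, 0]])
print(binary_mask_to_rle(mask))  # [2, 3, 3]: two 0s, three 1s, three 0s
```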
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It's not practical to give you the custom training loop I have, but I recreated the situation with the ADE20K example for MaskFormer. Note that I stop this model's training within the first iteration and set the confidence threshold to 0.001 (error also occurs at 0.01, 0.1, etc). The error still occurs when I do a full epoch on my custom dataset.
``` python
from datasets import load_dataset
import torch
from tqdm.auto import tqdm
import pandas as pd
import numpy as np
import albumentations as A
from PIL import Image
import numpy as np
from torch.utils.data import Dataset
import albumentations as A
from torch.utils.data import DataLoader
from transformers import MaskFormerImageProcessor
dataset = load_dataset("scene_parse_150", "instance_segmentation")
data = pd.read_csv('./instanceInfo100_train.txt',
sep='\t', header=0, on_bad_lines='warn')
data.head(5)
id2label = {id: label.strip() for id, label in enumerate(data["Object Names"])}
print(id2label)
example = dataset['train'][1]
image = example['image']
seg = np.array(example['annotation'])
# get green channel
instance_seg = seg[:, :, 1]
instance_seg = np.array(example["annotation"])[:,:,1] # green channel encodes instances
class_id_map = np.array(example["annotation"])[:,:,0] # red channel encodes semantic category
class_labels = np.unique(class_id_map)
# create mapping between instance IDs and semantic category IDs
inst2class = {}
for label in class_labels:
instance_ids = np.unique(instance_seg[class_id_map == label])
inst2class.update({i: label for i in instance_ids})
print(inst2class)
processor = MaskFormerImageProcessor(reduce_labels=True, ignore_index=255, do_resize=False, do_rescale=False, do_normalize=False)
class ImageSegmentationDataset(Dataset):
"""Image segmentation dataset."""
def __init__(self, dataset, processor, transform=None):
"""
Args:
dataset
"""
self.dataset = dataset
self.processor = processor
self.transform = transform
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
image = np.array(self.dataset[idx]["image"].convert("RGB"))
instance_seg = np.array(self.dataset[idx]["annotation"])[:,:,1]
class_id_map = np.array(self.dataset[idx]["annotation"])[:,:,0]
class_labels = np.unique(class_id_map)
inst2class = {}
for label in class_labels:
instance_ids = np.unique(instance_seg[class_id_map == label])
inst2class.update({i: label for i in instance_ids})
# apply transforms
if self.transform is not None:
transformed = self.transform(image=image, mask=instance_seg)
image, instance_seg = transformed['image'], transformed['mask']
# convert to C, H, W
image = image.transpose(2,0,1)
if class_labels.shape[0] == 1 and class_labels[0] == 0:
# Some image does not have annotation (all ignored)
inputs = self.processor([image], return_tensors="pt")
inputs = {k:v.squeeze() for k,v in inputs.items()}
inputs["class_labels"] = torch.tensor([0])
inputs["mask_labels"] = torch.zeros((0, inputs["pixel_values"].shape[-2], inputs["pixel_values"].shape[-1]))
else:
inputs = self.processor([image], [instance_seg], instance_id_to_semantic_id=inst2class, return_tensors="pt")
inputs = {k: v.squeeze() if isinstance(v, torch.Tensor) else v[0] for k,v in inputs.items()}
return inputs
ADE_MEAN = np.array([123.675, 116.280, 103.530]) / 255
ADE_STD = np.array([58.395, 57.120, 57.375]) / 255
# note that you can include more fancy data augmentation methods here
train_transform = A.Compose([
A.Resize(width=512, height=512),
A.Normalize(mean=ADE_MEAN, std=ADE_STD),
])
train_dataset = ImageSegmentationDataset(dataset["train"], processor=processor, transform=train_transform)
def collate_fn(batch):
pixel_values = torch.stack([example["pixel_values"] for example in batch])
pixel_mask = torch.stack([example["pixel_mask"] for example in batch])
class_labels = [example["class_labels"] for example in batch]
mask_labels = [example["mask_labels"] for example in batch]
return {"pixel_values": pixel_values, "pixel_mask": pixel_mask, "class_labels": class_labels, "mask_labels": mask_labels}
train_dataloader = DataLoader(train_dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)
# %%
batch = next(iter(train_dataloader))
for k,v in batch.items():
if isinstance(v, torch.Tensor):
print(k,v.shape)
else:
print(k,len(v))
# %%
from transformers import MaskFormerForInstanceSegmentation
# Replace the head of the pre-trained model
# We specify ignore_mismatched_sizes=True to replace the already fine-tuned classification head by a new one
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade",
id2label=id2label,
ignore_mismatched_sizes=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
running_loss = 0.0
num_samples = 0
for epoch in range(1):
print("Epoch:", epoch)
model.train()
for idx, batch in enumerate(tqdm(train_dataloader)):
# Reset the parameter gradients
optimizer.zero_grad()
# Forward pass
outputs = model(
pixel_values=batch["pixel_values"].to(device),
mask_labels=[labels.to(device) for labels in batch["mask_labels"]],
class_labels=[labels.to(device) for labels in batch["class_labels"]],
)
# Backward propagation
loss = outputs.loss
loss.backward()
batch_size = batch["pixel_values"].size(0)
running_loss += loss.item()
num_samples += batch_size
if idx % 100 == 0:
print("Loss:", running_loss/num_samples)
# Optimization
optimizer.step()
if idx == 1:
break
########## SAMPLE VALIDATION LOOP: ############
processor = MaskFormerImageProcessor()
model.eval()
with torch.no_grad():
for idx, batch in enumerate(tqdm(train_dataloader)):
outputs = model(
pixel_values=batch["pixel_values"].to(device)
)
coco_out = processor.post_process_instance_segmentation(outputs, threshold=0.001, return_coco_annotation=True)
print(coco_out)
```
### Expected behavior
`Mask2FormerImageProcessor.post_process_instance_segmentation` should not error out, regardless of the model's segmentation output.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25486/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25485/events
|
https://github.com/huggingface/transformers/issues/25485
| 1,848,736,502 |
I_kwDOCUB6oc5uMXr2
| 25,485 |
[MMS] Unable to load processor for `facebook/mms-300m` or `facebook/mms-1b`
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The processor class is made up of two objects:\r\n1. The feature extractor: to pre-process the audio inputs\r\n2. The tokenizer: to convert the predicted CTC ids to text outputs\r\n\r\nNote that the tokenizer is **only** required for CTC decoding `Wav2Vec2ForCTC`. If you only require the base model (`Wav2Vec2Model`), then you can get away with just using the feature extractor to pre-process the inputs:\r\n```python\r\nfrom transformers import AutoFeatureExtractor\r\n\r\nmodel_id = \"facebook/mms-300m\"\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained('facebook/mms-300m')\r\n```\r\n\r\nEach language in MMS has a different vocabulary, and hence a different tokenizer. Thus, we need to specify the target language in order to know the correct tokenizer to load. \r\n\r\nCan you try setting the target language to load the target tokenizer? E.g. as follows:\r\n```python\r\nfrom transformers import AutoProcessor\r\n\r\nmodel_id = \"facebook/mms-1b-all\"\r\ntarget_lang = \"fra\"\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)\r\n```\r\n\r\nThis will load the tokenizer for the French language under the hood.",
"Thanks! I was aware of `AutoFeatureExtractor.from_pretrained`, but I assumed that `AutoProcessor` would handle the case where a tokenizer is not available for the model type (i.e., for pretraining or audio classification). I believe my misunderstanding stemmed from other Processor classes (like those for image classification) which do not need tokenizers, e.g.,:\r\n\r\n```py\r\nimport requests\r\nimport torch\r\nfrom PIL import Image\r\nfrom transformers import AutoProcessor, MobileViTV2ForSemanticSegmentation\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\nimage_processor = AutoProcessor.from_pretrained(\"apple/mobilevitv2-1.0-imagenet1k-256\")\r\nmodel = MobileViTV2ForSemanticSegmentation.from_pretrained(\"apple/mobilevitv2-1.0-imagenet1k-256\")\r\n\r\ninputs = image_processor(images=image, return_tensors=\"pt\")\r\n\r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\n# logits are of shape (batch_size, num_labels, height, width)\r\nlogits = outputs.logits\r\nprint(logits.shape) # torch.Size([1, 1000, 8, 8])\r\n```\r\n\r\nAnyway, thanks for the response - I'll close the issue.",
"@xenova `MobileVit` does not have a processor class, so `AutoProcessor.from_pretrained` returns to you a simple `ImageProcessor`.\r\n\r\n`AutoProcessor.from_pretrained` will always return the tool for preprocessing starting with `Processor`, then `Tokenizer`, then `FeatureExtractor` and finally `ImageProcessor`. For your example above, there is a processor class, so it tried to load this one, but for MobileViT, there is none, so it automatically goes down to the `ImageProcessor`. Does that make sense?",
"@sgugger Great! Thanks for the explanation! 🤗 "
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running
```python
from transformers import AutoProcessor
AutoProcessor.from_pretrained('facebook/mms-300m')
```
Produces:
```bash
/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:53: FutureWarning: Loading a tokenizer inside Wav2Vec2Processor from a config that does not include a `tokenizer_class` attribute is deprecated and will be removed in v5. Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'` attribute to either your `config.json` or `tokenizer_config.json` file to suppress this warning:
warnings.warn(
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
50 try:
---> 51 return super().from_pretrained(pretrained_model_name_or_path, **kwargs)
52 except OSError:
7 frames
OSError: Can't load tokenizer for 'facebook/mms-300m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/mms-300m' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
1828
1829 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
-> 1830 raise EnvironmentError(
1831 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
1832 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
OSError: Can't load tokenizer for 'facebook/mms-300m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/mms-300m' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
### Expected behavior
The correct processor should be loaded (`Wav2Vec2FeatureExtractor` from the [preprocessor_config.json](https://huggingface.co/facebook/mms-300m/blob/main/preprocessor_config.json)).
The error message suggests the tokenizer is mandatory for all MMS models, which isn't necessarily the case (specifically for just loading the pretrained base models).
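For reference, here is a minimal sketch of the workaround described in the comments above — loading only the feature extractor, which is all the base (non-CTC) MMS checkpoints ship preprocessing config for:

```python
from transformers import AutoFeatureExtractor

# Loads preprocessor_config.json only; no tokenizer files are required for the base model.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/mms-300m")
```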
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25485/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25484
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25484/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25484/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25484/events
|
https://github.com/huggingface/transformers/issues/25484
| 1,848,644,145 |
I_kwDOCUB6oc5uMBIx
| 25,484 |
[Trainer API] Mention that the total number of steps per epoch is separate from total number of steps on training run
|
{
"login": "suvadityamuk",
"id": 70141886,
"node_id": "MDQ6VXNlcjcwMTQxODg2",
"avatar_url": "https://avatars.githubusercontent.com/u/70141886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suvadityamuk",
"html_url": "https://github.com/suvadityamuk",
"followers_url": "https://api.github.com/users/suvadityamuk/followers",
"following_url": "https://api.github.com/users/suvadityamuk/following{/other_user}",
"gists_url": "https://api.github.com/users/suvadityamuk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suvadityamuk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suvadityamuk/subscriptions",
"organizations_url": "https://api.github.com/users/suvadityamuk/orgs",
"repos_url": "https://api.github.com/users/suvadityamuk/repos",
"events_url": "https://api.github.com/users/suvadityamuk/events{/privacy}",
"received_events_url": "https://api.github.com/users/suvadityamuk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The progress bar clearly indicates the count of epochs (here we can see 0/25 in your screenshot), I'm not sure what more we can add to make this clearer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
NONE
| null |
### Feature request
Would it be possible to mention in the Trainer API that the number of steps shown during a `trainer.train()` call is actually `num_steps_per_epoch * epochs` rather than just `num_steps_per_epoch`, and also to shift the Epoch note to the front?

### Motivation
This was very confusing for me as a first-time user of the Trainer API coming from Keras: I had 9688 samples, yet the bar showed 241700 steps, which looked like an error at first (I spent 2 hours trying to debug my `datasets` loading :sweat_smile: because of this).
Not sure if it's a major thing, but still something I thought I'd report just in case.
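As a quick sanity check (a rough sketch only — the 25 epochs and 241700 total steps are read off the screenshot above, so treat them as assumptions), the total shown by the progress bar is simply the per-epoch step count multiplied by the number of epochs:

```python
# Back-of-the-envelope check using the numbers visible in the screenshot
total_steps = 241700
num_epochs = 25
print(total_steps / num_epochs)  # ~9668 optimizer steps per epoch, close to the 9688 samples reported above
```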
@sgugger
### Your contribution
Not sure how I can help, but happy to make the PR if guided.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25484/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25483
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25483/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25483/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25483/events
|
https://github.com/huggingface/transformers/pull/25483
| 1,848,566,316 |
PR_kwDOCUB6oc5X0Z_4
| 25,483 |
import required torch and numpy libraries
|
{
"login": "eze1376",
"id": 40582518,
"node_id": "MDQ6VXNlcjQwNTgyNTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40582518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eze1376",
"html_url": "https://github.com/eze1376",
"followers_url": "https://api.github.com/users/eze1376/followers",
"following_url": "https://api.github.com/users/eze1376/following{/other_user}",
"gists_url": "https://api.github.com/users/eze1376/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eze1376/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eze1376/subscriptions",
"organizations_url": "https://api.github.com/users/eze1376/orgs",
"repos_url": "https://api.github.com/users/eze1376/repos",
"events_url": "https://api.github.com/users/eze1376/events{/privacy}",
"received_events_url": "https://api.github.com/users/eze1376/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25483). All of your documentation changes will be reflected on that endpoint."
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# Import the required torch and numpy libraries
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # unimported libraries
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? @amyeroberts @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25483/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25483",
"html_url": "https://github.com/huggingface/transformers/pull/25483",
"diff_url": "https://github.com/huggingface/transformers/pull/25483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25483.patch",
"merged_at": 1691947600000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25482
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25482/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25482/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25482/events
|
https://github.com/huggingface/transformers/pull/25482
| 1,848,500,551 |
PR_kwDOCUB6oc5X0MXf
| 25,482 |
Added paper links in logitprocess.py
|
{
"login": "pranith7",
"id": 117859007,
"node_id": "U_kgDOBwZivw",
"avatar_url": "https://avatars.githubusercontent.com/u/117859007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranith7",
"html_url": "https://github.com/pranith7",
"followers_url": "https://api.github.com/users/pranith7/followers",
"following_url": "https://api.github.com/users/pranith7/following{/other_user}",
"gists_url": "https://api.github.com/users/pranith7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranith7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranith7/subscriptions",
"organizations_url": "https://api.github.com/users/pranith7/orgs",
"repos_url": "https://api.github.com/users/pranith7/repos",
"events_url": "https://api.github.com/users/pranith7/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranith7/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante \r\n\r\nThanks for your PR! Please run `make style` on your branch to fix the auto-formatting issues.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25482). All of your documentation changes will be reflected on that endpoint.",
"@sgugger I have addressed the above change.\r\n\r\ncc @gante ",
"cc @gante updated the links for the paper",
"Thanks @gante for your support and this is my first step towards open source. "
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#24783
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25482/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25482",
"html_url": "https://github.com/huggingface/transformers/pull/25482",
"diff_url": "https://github.com/huggingface/transformers/pull/25482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25482.patch",
"merged_at": 1692616174000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25481
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25481/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25481/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25481/events
|
https://github.com/huggingface/transformers/pull/25481
| 1,848,420,179 |
PR_kwDOCUB6oc5Xz8AE
| 25,481 |
[DOCS] Add example for HammingDiversityLogitsProcessor
|
{
"login": "jessthebp",
"id": 17071492,
"node_id": "MDQ6VXNlcjE3MDcxNDky",
"avatar_url": "https://avatars.githubusercontent.com/u/17071492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessthebp",
"html_url": "https://github.com/jessthebp",
"followers_url": "https://api.github.com/users/jessthebp/followers",
"following_url": "https://api.github.com/users/jessthebp/following{/other_user}",
"gists_url": "https://api.github.com/users/jessthebp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessthebp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessthebp/subscriptions",
"organizations_url": "https://api.github.com/users/jessthebp/orgs",
"repos_url": "https://api.github.com/users/jessthebp/repos",
"events_url": "https://api.github.com/users/jessthebp/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessthebp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@gante -- hey! Currently, the checks are failing because of some changes in the docs, but not sure if I can/should fix those-- using black, the files I've changed seem to be formatted correctly? Is there something I need to change in the docs to have this merge? ",
"@gante Fixes made! :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25481). All of your documentation changes will be reflected on that endpoint.",
"@jesspeck thank for you iterating and thank you for the contribution 🤗 "
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds an example to the docstring of HammingDiversityLogitsProcessor class definition in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py).
Part of the docs work on [Generate: have an example on each logits processor class docstring ](https://github.com/huggingface/transformers/issues/24783)
Changes -
- Added example
- Added some more info about beam search and how hamming diversity works with it
- Added warning/tip about resources
Happy to edit it all down a bit if it's too much stuff!
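For context (this is a sketch, not the exact snippet added by this PR — the model name and penalty value below are illustrative assumptions), `HammingDiversityLogitsProcessor` is activated under the hood whenever `generate` runs group beam search, i.e. `num_beam_groups > 1` with `diversity_penalty > 0`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")

# Diverse (group) beam search: 6 beams split into 3 groups, penalising token overlap across groups
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,
    diversity_penalty=1.0,
    max_new_tokens=30,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```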
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25481/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25481",
"html_url": "https://github.com/huggingface/transformers/pull/25481",
"diff_url": "https://github.com/huggingface/transformers/pull/25481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25481.patch",
"merged_at": 1692963341000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25479
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25479/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25479/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25479/events
|
https://github.com/huggingface/transformers/issues/25479
| 1,848,405,518 |
I_kwDOCUB6oc5uLG4O
| 25,479 |
AttributeError: 'NoneType' object has no attribute 'shape'
|
{
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"In order for your model to compute a loss, you need to give it `labels`. The dataset you are using does not contain any `labels` key, which is why you are getting this error.\r\n\r\nPlease also consider using the [forums](https://discuss.huggingface.co/) to help debug your code.",
"@sgugger Thanks for your response, I converted the dataset to have message Id and\r\nmessage text . Messages text includes human and assistant conversation. I\r\nam able to tokenize and get ( message_id, message_text, input_ids,\r\nattention, token) . When I train it gives the error.\r\n\r\nAny advise what changes I can make to the address the issue(Goal is to train the model to get chat based results during inference) . I also observed\r\nautotokenizer does not have support for vicuña . Is that true?. Any help to\r\naddress the issue is appreciated.\r\n\r\nLooking forward to hearing from you.\r\n\r\nThanks,\r\nAndy\r\n\r\nOn Sun, Aug 13, 2023 at 22:59 Sylvain Gugger ***@***.***>\r\nwrote:\r\n\r\n> In order for your model to compute a loss, you need to give it labels.\r\n> The dataset you are using does not contain any labels key, which is why\r\n> you are getting this error.\r\n>\r\n> Please also consider using the forums <https://discuss.huggingface.co/>\r\n> to help debug your code.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25479#issuecomment-1676419327>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNPDRTXPSGVVXQRMW7DXVEFIDANCNFSM6AAAAAA3OMKZ6Y>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
NONE
| null |
### System Info
kaggle notebook
```
Libraries:
!pip install -Uqqq pip --progress-bar off
!pip install -qqq bitsandbytes==0.40.2 trl==0.4.7 --progress-bar off
!pip install -q torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 torchtext==0.15.2 torchdata==0.6.1 --extra-index-url https://download.pytorch.org/whl/cu118 -U
!pip install -qqq -U git+https://github.com/huggingface/transformers.git@e03a9cc --progress-bar off
!pip install -qqq -U peft==0.4.0 --progress-bar off
!pip install -qqq -U git+https://github.com/huggingface/accelerate.git@c9fbb71 --progress-bar off
!pip install -qqq datasets==2.12.0 --progress-bar off
!pip install -qqq loralib==0.1.1 --progress-bar off
!pip install -qqq einops==0.6.1 --progress-bar off
!pip install accelerate==0.21.0 --progress-bar off
```
```
import json
import os
from pprint import pprint
import pandas as pd
import numpy as np
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from trl import SFTTrainer
import peft
from datasets import load_dataset, DatasetDict,Dataset
from huggingface_hub import notebook_login
from peft import (
get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType,prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
LlamaForCausalLM,
LlamaTokenizer
)
# Check for CUDA availability
# device = torch.device("cuda" if not(torch.cuda.is_available()) else "cpu")
# print(f"The device type is {device.type}")
def bytes_to_gb(bytes_value):
gb_value = bytes_value / (1024 ** 3)
return gb_value
def get_device():
if torch.cuda.is_available():
# Get current GPU's VRAM (in bytes)
vram_bytes = torch.cuda.get_device_properties(0).total_memory
print(f"Cuda Found! You have {((round(bytes_to_gb(vram_bytes))))} GB VRAM\n")
# Convert 24 GB to bytes
min_vram_required_bytes = 24 * (1024 ** 3)
if vram_bytes >= min_vram_required_bytes:
return torch.device("cuda")
if ((round(bytes_to_gb(vram_bytes))) >= 16):
return torch.device("cuda")
print("You didn't have at least 16GB of VRAM. Switching to CPU.")
return torch.device("cpu")
device = get_device()
```
### Who can help?
@sgugger @youn
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
MODEL_NAME = "TheBloke/stable-vicuna-13B-HF"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
torch_dtype=torch.float16,
load_in_8bit=True,
device_map="auto",
)
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
# the dataset is transformed into `formatted_dataset` with message_tree_id / message_tree_text columns (see the mapped Dataset below)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
```
```
data = formatted_dataset.map(lambda samples: tokenizer(samples["message_tree_text"], padding=True, truncation=True,), batched=True)
Dataset({
features: ['message_tree_id', 'message_tree_text', 'input_ids', 'token_type_ids', 'attention_mask'],
num_rows: 33143
})
```
```
training_args = transformers.TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
num_train_epochs=1,
learning_rate=2e-4,
fp16=True,
save_total_limit=3,
logging_steps=1,
output_dir=OUTPUT_DIR,
max_steps=80,
optim="paged_adamw_8bit",
lr_scheduler_type="cosine",
warmup_ratio=0.05,
report_to="tensorboard",
)
trainer = SFTTrainer(
model=model,
train_dataset=data,
peft_config=peft_config,
dataset_text_field="message_tree_text",
max_seq_length=max_seq_length,
tokenizer=tokenizer,
args=training_args,
packing=packing,
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
# Save trained model
trainer.model.save_pretrained(new_model)
```
ERROR:
```
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 31
18 trainer = SFTTrainer(
19 model=model,
20 train_dataset=data,
(...)
26 packing=packing,
27 )
30 model.config.use_cache = False # silence the warnings. Please re-enable for inference!
---> 31 trainer.train()
33 # Save trained model
34 trainer.model.save_pretrained(new_model)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1661, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1656 self.model_wrapped = self.model
1658 inner_training_loop = find_executable_batch_size(
1659 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1660 )
-> 1661 return inner_training_loop(
1662 args=args,
1663 resume_from_checkpoint=resume_from_checkpoint,
1664 trial=trial,
1665 ignore_keys_for_eval=ignore_keys_for_eval,
1666 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1946, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1943 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1945 with self.accelerator.accumulate(model):
-> 1946 tr_loss_step = self.training_step(model, inputs)
1948 if (
1949 args.logging_nan_inf_filter
1950 and not is_torch_tpu_available()
1951 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1952 ):
1953 # if loss is nan or inf simply add the average of previous logged losses
1954 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2753, in Trainer.training_step(self, model, inputs)
2750 return loss_mb.reduce_mean().detach().to(self.args.device)
2752 with self.compute_loss_context_manager():
-> 2753 loss = self.compute_loss(model, inputs)
2755 if self.args.n_gpu > 1:
2756 loss = loss.mean() # mean() to average on multi-gpu parallel training
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2778, in Trainer.compute_loss(self, model, inputs, return_outputs)
2776 else:
2777 labels = None
-> 2778 outputs = model(**inputs)
2779 # Save past state if it exists
2780 # TODO: this needs to be fixed and made cleaner later.
2781 if self.args.past_index >= 0:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:581, in convert_outputs_to_fp32.<locals>.forward(*args, **kwargs)
580 def forward(*args, **kwargs):
--> 581 return model_forward(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:569, in ConvertOutputsToFp32.__call__(self, *args, **kwargs)
568 def __call__(self, *args, **kwargs):
--> 569 return convert_to_fp32(self.model_forward(*args, **kwargs))
File /opt/conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py:14, in autocast_decorator.<locals>.decorate_autocast(*args, **kwargs)
11 @functools.wraps(func)
12 def decorate_autocast(*args, **kwargs):
13 with autocast_instance:
---> 14 return func(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/peft/peft_model.py:968, in PeftModelForCausalLM.forward(self, input_ids, attention_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict, **kwargs)
966 prompts = prompts.to(inputs_embeds.dtype)
967 inputs_embeds = torch.cat((prompts, inputs_embeds), dim=1)
--> 968 return self.base_model(inputs_embeds=inputs_embeds, **kwargs)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/peft/peft_model.py:933, in PeftModelForCausalLM.forward(self, input_ids, attention_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict, **kwargs)
912 return self.base_model(
913 input_ids=input_ids,
914 attention_mask=attention_mask,
(...)
919 **kwargs,
920 )
922 return self.base_model(
923 input_ids=input_ids,
924 attention_mask=attention_mask,
(...)
930 **kwargs,
931 )
--> 933 batch_size = input_ids.shape[0]
934 if attention_mask is not None:
935 # concat prompt attention mask
936 prefix_attention_mask = torch.ones(batch_size, peft_config.num_virtual_tokens).to(attention_mask.device)
AttributeError: 'NoneType' object has no attribute 'shape'
```
### Expected behavior
The training run should start and complete without this error.
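As a side note (a sketch only — per the maintainer's comment above, the dataset passed to the trainer has no `labels` column; whether this alone resolves the PEFT prompt-tuning traceback is untested), a common way to supply labels for causal-LM fine-tuning is to mirror the input ids:

```python
# Hypothetical fix sketch: give the Trainer a labels column so it can compute a causal-LM loss
data = data.map(lambda example: {"labels": example["input_ids"]})
```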
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25479/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25478
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25478/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25478/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25478/events
|
https://github.com/huggingface/transformers/pull/25478
| 1,848,379,298 |
PR_kwDOCUB6oc5XzzRo
| 25,478 |
Add `M2M100TokenizerFast` (+ convert_slow_tokenizer implementation)
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25478). All of your documentation changes will be reflected on that endpoint.",
"Feel free to ping me when you need a review! ",
"@ArthurZucker Hey! 👋 I think this is ready for your review. The tokenizer is quite similar to NLLB, which was added [here](https://github.com/huggingface/transformers/pull/18126) and then updated by you [here](https://github.com/huggingface/transformers/pull/22313).\r\n\r\nI haven't added a tokenizer before, but one thing I am missing is a unit test for [hf-internal-testing/tiny-random-m2m_100](https://huggingface.co/hf-internal-testing/tiny-random-m2m_100/tree/main), but this is due to the missing tokenizer.json file.\r\n\r\n[Here](https://huggingface.co/Xenova/m2m100_418M) is a tokenizer file which is generated from above, but then [updated by hand](https://huggingface.co/Xenova/m2m100_418M/commit/b9d0d206708d92d0c2331a07e6662eb5bccb34ba) to work correctly. So that functionality in the conversion script is missing (i.e., `added_tokens`), but I'm not sure what the best way to fix it is. Would appreciate some help here too 😄 ",
"> Looks good already, could you give me more details on how you converted the model?\r\n\r\nI just used [Optimum](https://github.com/huggingface/optimum) to convert the model to ONNX, but this shouldn't matter for this PR (which only concerns the tokenizer).",
"Sorry by model I meant the tokenizer model (the backend sentencepiece model if you will). I am trying to understand why you had to manually add the added tokens, and the `from_slow` is to initialize a fast tokenizer from a slow tokenizer using the conversion method! \r\n",
"> Sorry by model I meant the tokenizer model (the backend sentencepiece model if you will).\r\n\r\nOh yes of course 😆 (silly me). I converted it with this code:\r\n```py\r\nfrom transformers import convert_slow_tokenizer, AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained('Xenova/m2m100_418M', use_fast=False)\r\nfast_tokenizer=convert_slow_tokenizer.convert_slow_tokenizer(tokenizer)\r\nfast_tokenizer.save('tokenizer.json')\r\n```\r\n\r\n\r\n> I am trying to understand why you had to manually add the added tokens\r\n\r\nI'm not 100% sure, I just ran some unit tests for transformers.js and found that it failed (and after inspecting, I found that it was just missing added tokens. I'll play around with a few more things today!",
"@ArthurZucker I added the \"copied from\" annotations as well as added the special tokens (like they are done for Llama). On the latter point, the tokenizer includes `<madeupwordX>` tokens (for some reason) - should these be classified as special tokens?\r\n\r\nHere's the output from running the above code (zipped because GH doesn't like JSON 🤷)\r\n[m2m.zip](https://github.com/huggingface/transformers/files/12410892/m2m.zip)\r\n\r\n\r\n",
"Regarding the madeup words, it depends, would just say let's follow what's done for slow! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Should this be reopened @xenova ? :)",
"I marked it as a draft since I have been quite busy on some more important things, with the idea that I would return to it eventually once I had more time 😅. It's basically just missing some unit tests and example tokenizer files. "
] | 1,691 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds fast tokenizer support for `M2M100Tokenizer`. I also added a `convert_slow_tokenizer` config to support generating a tokenizer.json file. For example, the generated tokenizer files for [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) can be found [here](https://huggingface.co/Xenova/m2m100_418M/tree/main).
The fast tokenizer format is needed for transformers.js (see [issue](https://github.com/xenova/transformers.js/issues/235) and [PR](https://github.com/xenova/transformers.js/pull/250)). This may or may not be super relevant (considering the models are quite old), but there was a feature request in transformers.js, and I needed to write this code to get it working - so I thought I might as well share here.
I have run the tokenizer on a [variety of test cases](https://github.com/xenova/transformers.js/blob/main/tests/generate_tests.py#L24-L46), and it passes each one. Additionally, it fixes a bug/inconsistency with the slow tokenizer (actually found in all sentencepiece tokenizers), where whitespace after special tokens is removed. To make the test cases pass, I basically hardcoded a [fix](https://github.com/xenova/transformers.js/blob/58425088a42931306304fa38b4cb50b901a2eefa/tests/generate_tests.py#L120-L125) for it (for now at least).
### Example conversion code
```python
from transformers import convert_slow_tokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Xenova/m2m100_418M', use_fast=False)
fast_tokenizer=convert_slow_tokenizer.convert_slow_tokenizer(tokenizer)
fast_tokenizer.save('tokenizer.json')
```
### Example usage code
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Xenova/m2m100_418M', use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained('Xenova/m2m100_418M', use_fast=True)
assert tokenizer('Hello world').input_ids == fast_tokenizer('Hello world').input_ids
```
NOTE: To get facebook/m2m100_418M working, we need to remove the `"tokenizer_file": null,` line from [tokenizer_config.json](https://huggingface.co/facebook/m2m100_418M/blob/main/tokenizer_config.json)
Fixes # (issue)
### TODO
- [ ] Unit tests
- [x] Other config files + docs as done with `NllbTokenizerFast`
- [ ] Check that *all* special tokens are added (including `</s>` and similar tokens)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25478/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25478",
"html_url": "https://github.com/huggingface/transformers/pull/25478",
"diff_url": "https://github.com/huggingface/transformers/pull/25478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25478.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25477
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25477/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25477/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25477/events
|
https://github.com/huggingface/transformers/issues/25477
| 1,848,376,144 |
I_kwDOCUB6oc5uK_tQ
| 25,477 |
倉庫 (Repository)
|
{
"login": "Jaywang9131",
"id": 140184885,
"node_id": "U_kgDOCFsNNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/140184885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jaywang9131",
"html_url": "https://github.com/Jaywang9131",
"followers_url": "https://api.github.com/users/Jaywang9131/followers",
"following_url": "https://api.github.com/users/Jaywang9131/following{/other_user}",
"gists_url": "https://api.github.com/users/Jaywang9131/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jaywang9131/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jaywang9131/subscriptions",
"organizations_url": "https://api.github.com/users/Jaywang9131/orgs",
"repos_url": "https://api.github.com/users/Jaywang9131/repos",
"events_url": "https://api.github.com/users/Jaywang9131/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jaywang9131/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The description is empty. Going to close."
] | 1,691 | 1,692 | 1,692 |
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25477/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25476
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25476/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25476/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25476/events
|
https://github.com/huggingface/transformers/issues/25476
| 1,848,372,677 |
I_kwDOCUB6oc5uK-3F
| 25,476 |
Some problems with XVector implementation
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @gau-nernst! Thanks for the great write-up and for the code references! \r\n\r\nGoing through your points one-by-one:\r\n1. **Statistical pooling is using for loops for batched computation:** would you like to open a PR to update this to use the attention mask as suggested? Think this should be fairly straightforward to add\r\n2. **Std uses unbiased estimate (correction=1):** we could add the 'correction' as a config attribute should we wish to port a model from another library to HF. This way, we could use the weights in the HF implementation of Wav2Vec2. However, unless there is an external model that we want to port, I think we should leave it as is for the sake of simplicity\r\n3. **Using unfold and linear is wasteful:** indeed, I agree! It's quite difficult to change the modelling code to add `nn.Conv1D` layers without breaking backwards compatibility for users who are running the old style unfold + linear, so here I think it's worth considering what the overhead is of using less efficient layers versus doing an implantation overhaul to use `nn.Conv1D` **and** maintain backwards compatibility.\r\n\r\nOverall, the TDNN variant of Wav2Vec2 is not super used, but it's crucial that we keep the numerical / parameter compatibility with the existing model to prevent un-expected breaking changes. If you have the time to look into this more deeply I'd be happy to review any PRs, and as always help with any questions / queries!",
"@sanchit-gandhi Thank you for the reply.\r\n\r\n1. Yes, I can make a PR for this.\r\n2. I agree with you, for the sake of backward compatibility and simplicity, we should leave it as it is.\r\n3. In terms of speed, I don't think it will affect much since most of the computation is in the backbone anyway. For weights compatibility, sadly the shape of Conv1d weight is different from Linear weight in current implementation - (out_dim, in_dim, kernel_size) vs (out_dim, in_dim * kernel_size). One possible solution is to keep `nn.Linear()`, but call `F.conv1d()` together with reshaping `kernel.weight`, so that we can keep backward compatibility and enjoy speed up. I will look more into this.",
"Your solution to 3 sounds very interesting! Super keen to take a look at a PR if this is something you want to pursue!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,704 | 1,704 |
CONTRIBUTOR
| null |
### System Info
NA
### Who can help?
@sanchi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
While working on #25471, I spotted a few problems with the current XVector implementation:
1. Statistical pooling uses for loops for batched computation (see the sketch after this list for a masked alternative).
https://github.com/huggingface/transformers/blob/fe3c8ab1af558b95f67f5fafc0c55f09fd2b09db/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2436-L2438
This can be avoided by using the attention mask. However, since TDNN layers (basically 1D-conv) also change the effective attention mask, we need to update the attention mask again.
2. Std uses the unbiased estimate (correction=1), while, as far as I know, statistical pooling typically uses the biased estimate.
See that PyTorch uses correction=1 by default (unbiased=True in previous versions): https://pytorch.org/docs/stable/generated/torch.std.html
The original papers (x-vector: https://www.danielpovey.com/files/2018_icassp_xvectors.pdf, stats pooling: http://danielpovey.com/files/2017_interspeech_embeddings.pdf) don't specify the equations, but most other papers show equations without the correction factor: https://arxiv.org/pdf/1803.10963.pdf.
The official implementation is included in Kaldi, but I don't understand C++ and their codebase well enough to understand what's going on. https://github.com/kaldi-asr/kaldi/blob/71f38e62cad01c3078555bfe78d0f3a527422d75/src/nnet3/nnet-general-component.cc#L808-L821
If we use HF to train a model, it wouldn't matter much. But if we use this to port weights from another implementation, it can be a problem.
3. The TDNN layer is a 1D convolution. Using unfold and linear is wasteful; we can use a 1D conv directly instead (see the sketch after this list).
NeMo's TDNN: https://github.com/NVIDIA/NeMo/blob/ab749e4401a3dcdfa5ea969347aaee20b7947c7c/nemo/collections/asr/parts/submodules/tdnn_attention.py#L138
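Below is a rough sketch of points 1 and 3 (this is not the current modeling code; it assumes `attention_mask` has already been downsampled to the TDNN output resolution, and that the existing `nn.Linear` kernel stores its weight as `(out_dim, kernel_size * in_dim)` in unfold order, so no stored parameters would need to change):

```python
import torch
import torch.nn.functional as F


def masked_stats_pool(hidden_states, attention_mask, correction=0):
    """Masked statistics pooling without Python loops.

    hidden_states: (batch, time, dim); attention_mask: (batch, time), 1 for valid frames.
    correction=0 gives the biased std; set correction=1 to match the current behaviour.
    """
    mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
    lengths = mask.sum(dim=1)                                    # (batch, 1)
    mean = (hidden_states * mask).sum(dim=1) / lengths.clamp(min=1)
    var = ((hidden_states - mean.unsqueeze(1)) ** 2 * mask).sum(dim=1) / (lengths - correction).clamp(min=1)
    return torch.cat([mean, var.sqrt()], dim=-1)                 # (batch, 2 * dim)


def tdnn_forward(hidden_states, linear_weight, linear_bias, kernel_size, dilation=1):
    """TDNN as a true 1D convolution, reusing the existing Linear weight layout."""
    out_dim, in_times_k = linear_weight.shape
    in_dim = in_times_k // kernel_size
    # (out_dim, kernel_size * in_dim) -> (out_dim, in_dim, kernel_size), the layout F.conv1d expects
    weight = linear_weight.view(out_dim, kernel_size, in_dim).transpose(1, 2)
    out = F.conv1d(hidden_states.transpose(1, 2), weight, linear_bias, dilation=dilation)
    return out.transpose(1, 2)                                   # back to (batch, time, out_dim)
```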
### Expected behavior
NA
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25476/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25475
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25475/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25475/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25475/events
|
https://github.com/huggingface/transformers/issues/25475
| 1,848,318,610 |
I_kwDOCUB6oc5uKxqS
| 25,475 |
free(): invalid pointer
|
{
"login": "Geremia",
"id": 4298614,
"node_id": "MDQ6VXNlcjQyOTg2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4298614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Geremia",
"html_url": "https://github.com/Geremia",
"followers_url": "https://api.github.com/users/Geremia/followers",
"following_url": "https://api.github.com/users/Geremia/following{/other_user}",
"gists_url": "https://api.github.com/users/Geremia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Geremia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Geremia/subscriptions",
"organizations_url": "https://api.github.com/users/Geremia/orgs",
"repos_url": "https://api.github.com/users/Geremia/repos",
"events_url": "https://api.github.com/users/Geremia/events{/privacy}",
"received_events_url": "https://api.github.com/users/Geremia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is nothing linked to the Transformers library in the reproducer you shared. Are you sure you are opening the issue in the right repo?",
"@sgugger No, I'm not sure. Is `ct2-transformers-converter` not part of Hugging Face's Transformers? [OpenNMT documentation](https://opennmt.net/CTranslate2/guides/transformers.html#transformers) seemed to imply it was:\r\n\r\n```bash\r\npip install transformers[torch]\r\nct2-transformers-converter --model facebook/m2m100_418M --output_dir ct2_model\r\n```\r\n\r\nUpdate: [`ct2-transformers-converter` is part of CTranslate2.](https://github.com/OpenNMT/CTranslate2/blob/61d34502325bfa3c5ef8a11cd2e391d0efed1bf9/python/setup.py#L122)"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.32.0.dev0
- Platform: Linux-6.1.42-x86_64-AMD_Ryzen_Threadripper_2990WX_32-Core_Processor-with-glibc2.37
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0.dev20230812+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
$ ct2-transformers-converter
free(): invalid pointer
Annullato   # (Italian locale message for "Aborted")
```
### Expected behavior
I expect it not to encounter this error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25475/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25474/events
|
https://github.com/huggingface/transformers/pull/25474
| 1,848,180,435 |
PR_kwDOCUB6oc5XzIkP
| 25,474 |
Add support for BLIP-2 multimodal feature extraction
|
{
"login": "youssefadr",
"id": 104783077,
"node_id": "U_kgDOBj7c5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/104783077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youssefadr",
"html_url": "https://github.com/youssefadr",
"followers_url": "https://api.github.com/users/youssefadr/followers",
"following_url": "https://api.github.com/users/youssefadr/following{/other_user}",
"gists_url": "https://api.github.com/users/youssefadr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youssefadr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youssefadr/subscriptions",
"organizations_url": "https://api.github.com/users/youssefadr/orgs",
"repos_url": "https://api.github.com/users/youssefadr/repos",
"events_url": "https://api.github.com/users/youssefadr/events{/privacy}",
"received_events_url": "https://api.github.com/users/youssefadr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25474). All of your documentation changes will be reflected on that endpoint.",
"cc @amyeroberts and @younesbelkada ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @youssefadr, feel free to re-open this if you still plan on adding this feature",
"@ArthurZucker Thank you, I'll do it, I am sorry I couldn't find time lately for it. It is on my backlog!",
"@NielsRogge Hi, should I open a new PR for this, or is there a way to re-open this one? It seems to be requested by a lot of users."
] | 1,691 | 1,706 | null |
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces the addition of `get_image_feature` and `get_text_feature` methods to the `Blip2ForConditionalGeneration` class. These changes align with the original Qformer implementation, which utilized both text and image inputs.
The current implementation in HuggingFace lacks support for multimodal embeddings, especially the capacity to extract embeddings by passing both text and image to the QFormer. This PR addresses this shortcoming.
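For context, a hedged sketch of what the public API already exposes today — unimodal features on `Blip2Model` — with the joint text+image Q-Former pass being the part this PR adds (checkpoint name and image path are placeholders):
```python
from PIL import Image
from transformers import Blip2Model, Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("example.jpg")  # placeholder image
inputs = processor(images=image, text="a photo of a cat", return_tensors="pt")

# Unimodal extraction available today; there is no method yet that feeds
# both modalities through the Q-Former at once.
image_features = model.get_image_features(pixel_values=inputs.pixel_values)
text_features = model.get_text_features(input_ids=inputs.input_ids)
qformer_features = model.get_qformer_features(pixel_values=inputs.pixel_values)
```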
<!-- Remove if not applicable -->
Fixes #25300 #25245
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
@NielsRogge @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25474/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25474",
"html_url": "https://github.com/huggingface/transformers/pull/25474",
"diff_url": "https://github.com/huggingface/transformers/pull/25474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25474.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25473/events
|
https://github.com/huggingface/transformers/issues/25473
| 1,848,129,055 |
I_kwDOCUB6oc5uKDYf
| 25,473 |
Error loading models in 8-bit
|
{
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I tried to run this with colab notebook CPU and got same error but once I run it on T4 GPU no errors appeared and worked so I assume you can not use **load_in_8bit=True** on cpu device actually. you can still keep **device_map=\"cpu\"** but you should have GPU environment I guess. ",
"Yes `load_in_8bit` requires using GPUs, not CPUs.",
"Correct, sorry for the false flag @gante "
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### System Info
There is an error stating that bitsandbytes and accelerate are required (even though the latest versions were already installed).
```
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
model_id = 'meta-llama/Llama-2-7b-chat-hf'
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu", load_in_8bit=True)
```
I then tried with `!pip install transformers==4.31`
and that resolved the issue. @gante likely related to the prior [issue](https://github.com/huggingface/transformers/pull/25411#event-10057101239)
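For reference, a minimal sketch of the GPU-based loading path that the 8-bit backend expects (assuming a CUDA device and an up-to-date `bitsandbytes`); this is the configuration known to work rather than a fix for the CPU case above:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-chat-hf"

# load_in_8bit relies on bitsandbytes CUDA kernels, so the weights need to be
# dispatched to a GPU instead of being pinned to the CPU with device_map="cpu".
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```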
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See above
### Expected behavior
Model should load.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25473/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25472/events
|
https://github.com/huggingface/transformers/pull/25472
| 1,848,060,016 |
PR_kwDOCUB6oc5XyusR
| 25,472 |
fix : escape key of start_token from special characters before search end_token in token2json function of DonutProcessor
|
{
"login": "nour-elkamel",
"id": 75819292,
"node_id": "MDQ6VXNlcjc1ODE5Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/75819292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nour-elkamel",
"html_url": "https://github.com/nour-elkamel",
"followers_url": "https://api.github.com/users/nour-elkamel/followers",
"following_url": "https://api.github.com/users/nour-elkamel/following{/other_user}",
"gists_url": "https://api.github.com/users/nour-elkamel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nour-elkamel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nour-elkamel/subscriptions",
"organizations_url": "https://api.github.com/users/nour-elkamel/orgs",
"repos_url": "https://api.github.com/users/nour-elkamel/repos",
"events_url": "https://api.github.com/users/nour-elkamel/events{/privacy}",
"received_events_url": "https://api.github.com/users/nour-elkamel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
…ing for end_token
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25472/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25472",
"html_url": "https://github.com/huggingface/transformers/pull/25472",
"diff_url": "https://github.com/huggingface/transformers/pull/25472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25472.patch",
"merged_at": 1692013578000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25471/events
|
https://github.com/huggingface/transformers/pull/25471
| 1,848,041,515 |
PR_kwDOCUB6oc5XyqzJ
| 25,471 |
Return effective attention mask in Wav2Vec2BaseModelOutput
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25471). All of your documentation changes will be reflected on that endpoint.",
"@sanchit-gandhi How should I add a test checking for returned downsampled attention mask? Should it be under `Wav2Vec2ModelTest`? I'm not familiar with HF tests, and it looks kinda overwhelming.",
"Hey @gau-nernst - indeed it's quite an intimidating file! It's got quite a lot of tests given Wav2Vec2 is a core model in the library and we want to ensure that any new changes don't break backwards compatibility. But once you know how it works it's quite straightforward!\r\n\r\nThe model tester class defines all the functions we want to test (`check_xxx`). The model test class then executes them (`test_xxx`).\r\n\r\nWhat I would suggest doing here is defining a function in the model tester, e.g. https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/tests/models/wav2vec2/test_modeling_wav2vec2.py#L462\r\n\r\nAnd then running the test in the model test, e.g. https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/tests/models/wav2vec2/test_modeling_wav2vec2.py#L546\r\n\r\n=> this way you just have to focus on writing one new function in the model tester, and then execute it in the model test\r\n\r\nIn this test, I think we can check that:\r\n1. We return an attention mask from the model output\r\n2. This attention mask has the correct downsampled length (which we can get from the private method `_get_feat_extract_output_lengths` if required)",
"It's looking very close to completion @gau-nernst! Just the `return_dict` situation that needs addressing, otherwise in good shape 🤗",
"Let me know when you'd like a re-review here @gau-nernst! It's looking quite nice already!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,708 | null |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25307
- Add field `attention_mask` to Wav2Vec2BaseModelOutput
- Return the updated (downsampled) attention mask for Wav2Vec2Model, Data2VecAudioModel, HubertModel, SEWModel, SEWDModel, WavLMModel, Wav2Vec2ConformerModel, UniSpeechModel, UniSpeechSatModel (see the sketch after this list)
- Change model output from BaseModelOutput to Wav2Vec2BaseModelOutput for HubertModel, SEWModel, SEWDModel
- Fix tensor comparison functions to accept bool
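A rough sketch of how the downsampled mask can be derived — `_get_feat_extract_output_lengths` already exists on these models, while the standalone wrapper below is hypothetical and mirrors the existing `_get_feature_vector_attention_mask` logic:
```python
import torch

def downsample_attention_mask(model, attention_mask):
    # raw-audio lengths -> lengths after the convolutional feature extractor
    output_lengths = model._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long)
    batch_size = attention_mask.shape[0]
    max_length = int(output_lengths.max())
    downsampled = attention_mask.new_zeros(batch_size, max_length)
    # mark the last valid frame, then a reversed cumulative sum fills everything before it
    downsampled[torch.arange(batch_size, device=attention_mask.device), output_lengths - 1] = 1
    return downsampled.flip([-1]).cumsum(-1).flip([-1]).bool()
```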
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25471/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25471",
"html_url": "https://github.com/huggingface/transformers/pull/25471",
"diff_url": "https://github.com/huggingface/transformers/pull/25471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25471.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25470/events
|
https://github.com/huggingface/transformers/issues/25470
| 1,848,025,552 |
I_kwDOCUB6oc5uJqHQ
| 25,470 |
Batch Inference for Streaming generation strategy for transformer.generate()
|
{
"login": "AyushVachaspati",
"id": 25360086,
"node_id": "MDQ6VXNlcjI1MzYwMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/25360086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AyushVachaspati",
"html_url": "https://github.com/AyushVachaspati",
"followers_url": "https://api.github.com/users/AyushVachaspati/followers",
"following_url": "https://api.github.com/users/AyushVachaspati/following{/other_user}",
"gists_url": "https://api.github.com/users/AyushVachaspati/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AyushVachaspati/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AyushVachaspati/subscriptions",
"organizations_url": "https://api.github.com/users/AyushVachaspati/orgs",
"repos_url": "https://api.github.com/users/AyushVachaspati/repos",
"events_url": "https://api.github.com/users/AyushVachaspati/events{/privacy}",
"received_events_url": "https://api.github.com/users/AyushVachaspati/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I found that I can Inherit the TextStreamer class and implement my own put() and end() methods for the functionality I need. But the general functionality for batch inference might still be useful for the community. ",
"cc @gante ",
"@AyushVachaspati Thank you for opening the issue 👍 \r\n\r\nWe want to add it eventually, but we are prioritizing a structural `generate` refactor before we go forward with this feature :)\r\n\r\nBTW, since you're serving generation requests, have you looked at our [text-generation-inference](https://github.com/huggingface/text-generation-inference) server-grade library? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@AyushVachaspati @gante Has this issue gained any traction yet? Also interested in a streaming solution for batched inference! ",
"@ddl-avanitanna No bandwidth to expand it yet, I'm afraid :) The pointer to [TGI](https://github.com/huggingface/text-generation-inference) is still my suggested alternative"
] | 1,691 | 1,697 | 1,695 |
NONE
| null |
### Feature request
Hi,
I am working on implementing a service that receives inference requests from users and sends a stream of responses (tokens as they are generated). The TextStreamer and TextIteratorStreamer classes are good options for that.
https://huggingface.co/docs/transformers/v4.31.0/en/internal/generation_utils#transformers.TextStreamer
The issue is that they only support one inference request at a time, which wastes a lot of GPU resources. Is there some way to run batches of requests using a stream interface, like "BatchTextIteratorStream" or something in that vein? This way the responses can be generated on a batch of requests: the generator would push a list of tokens for each batch at each iteration of the model (in the context of LLMs producing one token at a time), and the user can then process those to service their requests.
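For illustration, a rough sketch of the kind of batch-aware streamer meant here, built on the same `put()`/`end()` protocol that `TextStreamer` follows (the class below is hypothetical, not part of `transformers`):
```python
from queue import Queue

from transformers.generation.streamers import BaseStreamer


class BatchTokenStreamer(BaseStreamer):
    """Collects the token ids produced for every sequence in the batch at each
    generation step, so a server can fan them out to the matching requests."""

    def __init__(self):
        self.queue = Queue()

    def put(self, value):
        # generate() first calls put() with the prompt ids (batch, prompt_len),
        # then once per step with the newly sampled ids of shape (batch,).
        self.queue.put(value.cpu().tolist())

    def end(self):
        self.queue.put(None)  # sentinel marking the end of generation


# It would be passed as streamer=BatchTokenStreamer() to model.generate(...)
# and drained from another thread, much like TextIteratorStreamer.
```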
### Motivation
Streaming responses are very useful for large and slow chat LLMs, which have high latencies. By batching the requests we can improve token throughput and GPU utilization, and also serve more requests at the same time.
### Your contribution
I can help write the Batch Streamer class to implement the functionality, but I'm not too familiar with the internals of the Transformers codebase, so I need someone to point me in the right direction.
I'm motivated to contribute a substantial amount of time to implement this feature as it would be a great help for my project.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25470/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25469/events
|
https://github.com/huggingface/transformers/issues/25469
| 1,847,818,318 |
I_kwDOCUB6oc5uI3hO
| 25,469 |
AttributeError: 'TFBertForQuestionAnswering' object has no attribute 'prepare_tf_dataset'
|
{
"login": "daniau23",
"id": 87085687,
"node_id": "MDQ6VXNlcjg3MDg1Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/87085687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniau23",
"html_url": "https://github.com/daniau23",
"followers_url": "https://api.github.com/users/daniau23/followers",
"following_url": "https://api.github.com/users/daniau23/following{/other_user}",
"gists_url": "https://api.github.com/users/daniau23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniau23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniau23/subscriptions",
"organizations_url": "https://api.github.com/users/daniau23/orgs",
"repos_url": "https://api.github.com/users/daniau23/repos",
"events_url": "https://api.github.com/users/daniau23/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniau23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### System Info
transformers → 4.18.0
datasets → 2.14.4
tensorflow → 2.10.0
python → 3.8.13
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am following the Hugging Face course for question answering and I am getting this error:
`AttributeError: 'TFBertForQuestionAnswering' object has no attribute 'prepare_tf_dataset'`
from the code given in the course:
```python
tf_train_dataset = model.prepare_tf_dataset(
    train_dataset,
    collate_fn=data_collator,
    shuffle=True,
    batch_size=16,
)

tf_eval_dataset = model.prepare_tf_dataset(
    validation_dataset,
    collate_fn=data_collator,
    shuffle=False,
    batch_size=16,
)
```
Course link: [Question answering - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter7/7?fw=tf#post-processing)
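Hedged note based on the pinned versions above: `prepare_tf_dataset` was added to `transformers` in a release newer than 4.18.0, so the course code needs an upgraded install; on the old version, `datasets`' own `to_tf_dataset` is the closest equivalent. A rough sketch (the column names are the ones the course's preprocessing produces and are assumptions here):
```python
# Option 1: upgrade so that model.prepare_tf_dataset(...) exists
#   pip install -U transformers
# Option 2: stay on transformers 4.18 and build the tf.data pipeline with datasets
tf_train_dataset = train_dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask", "token_type_ids", "start_positions", "end_positions"],
    collate_fn=data_collator,
    shuffle=True,
    batch_size=16,
)
```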
### Expected behavior
Should have returned the needed training and validation datasets.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25469/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25468/events
|
https://github.com/huggingface/transformers/pull/25468
| 1,847,383,808 |
PR_kwDOCUB6oc5Xwax2
| 25,468 |
Bump gitpython from 3.1.30 to 3.1.32 in /examples/research_projects/distillation
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.30 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.32 - with another security update</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li>
<li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li>
<li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li>
<li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li>
<li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li>
<li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Hawk777"><code>@Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li>
<li><a href="https://github.com/eUgEntOptIc44"><code>@eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li>
<li><a href="https://github.com/r-darwish"><code>@r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li>
<li><a href="https://github.com/Beuc"><code>@Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p>
<h2>3.1.31</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix Sphinx rendering errors by <a href="https://github.com/stephan-cr"><code>@stephan-cr</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1524">gitpython-developers/GitPython#1524</a></li>
<li>tests: Use <code>command -v</code> instead of third-party <code>which</code> program by <a href="https://github.com/mgorny"><code>@mgorny</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1525">gitpython-developers/GitPython#1525</a></li>
<li>fix/add allow_unsafe_* params in docstrings + fix typo by <a href="https://github.com/obfusk"><code>@obfusk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1530">gitpython-developers/GitPython#1530</a></li>
<li>use tempfile.TemporaryDirectory & fix clone_from_unsafe_protocol tests by <a href="https://github.com/obfusk"><code>@obfusk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1531">gitpython-developers/GitPython#1531</a></li>
<li>Fix some resource leaks by open file handles by <a href="https://github.com/marlamb"><code>@marlamb</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1532">gitpython-developers/GitPython#1532</a></li>
<li>fix files list on file rename by <a href="https://github.com/teknoraver"><code>@teknoraver</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1537">gitpython-developers/GitPython#1537</a></li>
<li>Declare support for Python 3.11 by <a href="https://github.com/hugovk"><code>@hugovk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1541">gitpython-developers/GitPython#1541</a></li>
<li>Fix ignored by <a href="https://github.com/Lightborne"><code>@Lightborne</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1545">gitpython-developers/GitPython#1545</a></li>
<li>Fix timezone parsing functions for non-hour timezones by <a href="https://github.com/jcowgill"><code>@jcowgill</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1547">gitpython-developers/GitPython#1547</a></li>
<li>Enable user to override default diff -M arg by <a href="https://github.com/mellowed100"><code>@mellowed100</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1551">gitpython-developers/GitPython#1551</a></li>
<li>Remove optional from two member variables by <a href="https://github.com/Sineaggi"><code>@Sineaggi</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1550">gitpython-developers/GitPython#1550</a></li>
<li>Fix RecursionError when iterating streams by <a href="https://github.com/eric-wieser"><code>@eric-wieser</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1554">gitpython-developers/GitPython#1554</a></li>
<li>Fix get_values() so it correctly loads section names by <a href="https://github.com/Codym48"><code>@Codym48</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1555">gitpython-developers/GitPython#1555</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/stephan-cr"><code>@stephan-cr</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1524">gitpython-developers/GitPython#1524</a></li>
<li><a href="https://github.com/obfusk"><code>@obfusk</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1530">gitpython-developers/GitPython#1530</a></li>
<li><a href="https://github.com/marlamb"><code>@marlamb</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1532">gitpython-developers/GitPython#1532</a></li>
<li><a href="https://github.com/teknoraver"><code>@teknoraver</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1537">gitpython-developers/GitPython#1537</a></li>
<li><a href="https://github.com/Lightborne"><code>@Lightborne</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1545">gitpython-developers/GitPython#1545</a></li>
<li><a href="https://github.com/jcowgill"><code>@jcowgill</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1547">gitpython-developers/GitPython#1547</a></li>
<li><a href="https://github.com/mellowed100"><code>@mellowed100</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1551">gitpython-developers/GitPython#1551</a></li>
<li><a href="https://github.com/Sineaggi"><code>@Sineaggi</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1550">gitpython-developers/GitPython#1550</a></li>
<li><a href="https://github.com/Codym48"><code>@Codym48</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1555">gitpython-developers/GitPython#1555</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.31">https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.31</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from Beuc/block-insecure-options-clone-non-multi</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.32">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25468/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25468",
"html_url": "https://github.com/huggingface/transformers/pull/25468",
"diff_url": "https://github.com/huggingface/transformers/pull/25468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25468.patch",
"merged_at": 1691948825000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25467/events
|
https://github.com/huggingface/transformers/pull/25467
| 1,847,383,793 |
PR_kwDOCUB6oc5Xwaxq
| 25,467 |
Bump gitpython from 3.1.30 to 3.1.32 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.30 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.32 - with another security update</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li>
<li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li>
<li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li>
<li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li>
<li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li>
<li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Hawk777"><code>@Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li>
<li><a href="https://github.com/eUgEntOptIc44"><code>@eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li>
<li><a href="https://github.com/r-darwish"><code>@r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li>
<li><a href="https://github.com/Beuc"><code>@Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p>
<h2>3.1.31</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix Sphinx rendering errors by <a href="https://github.com/stephan-cr"><code>@stephan-cr</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1524">gitpython-developers/GitPython#1524</a></li>
<li>tests: Use <code>command -v</code> instead of third-party <code>which</code> program by <a href="https://github.com/mgorny"><code>@mgorny</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1525">gitpython-developers/GitPython#1525</a></li>
<li>fix/add allow_unsafe_* params in docstrings + fix typo by <a href="https://github.com/obfusk"><code>@obfusk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1530">gitpython-developers/GitPython#1530</a></li>
<li>use tempfile.TemporaryDirectory & fix clone_from_unsafe_protocol tests by <a href="https://github.com/obfusk"><code>@obfusk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1531">gitpython-developers/GitPython#1531</a></li>
<li>Fix some resource leaks by open file handles by <a href="https://github.com/marlamb"><code>@marlamb</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1532">gitpython-developers/GitPython#1532</a></li>
<li>fix files list on file rename by <a href="https://github.com/teknoraver"><code>@teknoraver</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1537">gitpython-developers/GitPython#1537</a></li>
<li>Declare support for Python 3.11 by <a href="https://github.com/hugovk"><code>@hugovk</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1541">gitpython-developers/GitPython#1541</a></li>
<li>Fix ignored by <a href="https://github.com/Lightborne"><code>@Lightborne</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1545">gitpython-developers/GitPython#1545</a></li>
<li>Fix timezone parsing functions for non-hour timezones by <a href="https://github.com/jcowgill"><code>@jcowgill</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1547">gitpython-developers/GitPython#1547</a></li>
<li>Enable user to override default diff -M arg by <a href="https://github.com/mellowed100"><code>@mellowed100</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1551">gitpython-developers/GitPython#1551</a></li>
<li>Remove optional from two member variables by <a href="https://github.com/Sineaggi"><code>@Sineaggi</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1550">gitpython-developers/GitPython#1550</a></li>
<li>Fix RecursionError when iterating streams by <a href="https://github.com/eric-wieser"><code>@eric-wieser</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1554">gitpython-developers/GitPython#1554</a></li>
<li>Fix get_values() so it correctly loads section names by <a href="https://github.com/Codym48"><code>@Codym48</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1555">gitpython-developers/GitPython#1555</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/stephan-cr"><code>@stephan-cr</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1524">gitpython-developers/GitPython#1524</a></li>
<li><a href="https://github.com/obfusk"><code>@obfusk</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1530">gitpython-developers/GitPython#1530</a></li>
<li><a href="https://github.com/marlamb"><code>@marlamb</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1532">gitpython-developers/GitPython#1532</a></li>
<li><a href="https://github.com/teknoraver"><code>@teknoraver</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1537">gitpython-developers/GitPython#1537</a></li>
<li><a href="https://github.com/Lightborne"><code>@Lightborne</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1545">gitpython-developers/GitPython#1545</a></li>
<li><a href="https://github.com/jcowgill"><code>@jcowgill</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1547">gitpython-developers/GitPython#1547</a></li>
<li><a href="https://github.com/mellowed100"><code>@mellowed100</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1551">gitpython-developers/GitPython#1551</a></li>
<li><a href="https://github.com/Sineaggi"><code>@Sineaggi</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1550">gitpython-developers/GitPython#1550</a></li>
<li><a href="https://github.com/Codym48"><code>@Codym48</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1555">gitpython-developers/GitPython#1555</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.31">https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.31</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from Beuc/block-insecure-options-clone-non-multi</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.30...3.1.32">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25467/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25467",
"html_url": "https://github.com/huggingface/transformers/pull/25467",
"diff_url": "https://github.com/huggingface/transformers/pull/25467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25467.patch",
"merged_at": 1691948836000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25466/events
|
https://github.com/huggingface/transformers/pull/25466
| 1,847,083,559 |
PR_kwDOCUB6oc5XvZPS
| 25,466 |
Revert "Reuse the cache created for latest `main` on PRs/branches if `setup.py` is not modified"
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
Reverts huggingface/transformers#25445
The way to solve the whole huge cache issue is to enable cache sharing between PRs from different forked repositories, see [this doc](https://circleci.com/docs/caching/#caching-and-open-source).
If we decide to enable this (it's a dangerous take), then we can include the #25445 once the sharing is done.
More info: https://circleci.com/docs/oss/#pass-secrets-to-builds-from-forked-pull-requests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25466/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25466",
"html_url": "https://github.com/huggingface/transformers/pull/25466",
"diff_url": "https://github.com/huggingface/transformers/pull/25466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25466.patch",
"merged_at": 1691780828000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25465/events
|
https://github.com/huggingface/transformers/issues/25465
| 1,847,034,662 |
I_kwDOCUB6oc5uF4Mm
| 25,465 |
FeatureExtractionPipeline for Causal model gives unexpected results
|
{
"login": "tonifuc3m",
"id": 46200970,
"node_id": "MDQ6VXNlcjQ2MjAwOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/46200970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonifuc3m",
"html_url": "https://github.com/tonifuc3m",
"followers_url": "https://api.github.com/users/tonifuc3m/followers",
"following_url": "https://api.github.com/users/tonifuc3m/following{/other_user}",
"gists_url": "https://api.github.com/users/tonifuc3m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonifuc3m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonifuc3m/subscriptions",
"organizations_url": "https://api.github.com/users/tonifuc3m/orgs",
"repos_url": "https://api.github.com/users/tonifuc3m/repos",
"events_url": "https://api.github.com/users/tonifuc3m/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonifuc3m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is entirely possible, and totally not necessarily a bug.\r\nkernels can vary based on tensor shapes for efficiency leading to small but existing differences.\r\nYou can just adding padding for longer and longer sequences you will observe the same.\r\n\r\nJust changing torch version can cause this too.\r\n\r\nJust use higher tolerations to ignore that error."
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
Causal model embeddings should not depend on the tokens to their right.
For example, the word `man` should have the same GPT-2 embedding in these two sentences: `The man is successfull`, `The man has four sons and takes care of the house`, because the left context is the same: `The`. However, when using the `FeatureExtractionPipeline` there are small differences in the embedding values.
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
- @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, pipeline
# Load pipeline and tokenizer for GPT2
feature_extraction = pipeline('feature-extraction', model="gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Define test sentences
sentences_en = ["The man is successfull",
"The man is not successfull",
"The man has four sons and takes care of the house",
"The man has a brilliant professional career"]
# Get contextual embeddings of all subtokens of all sentences
embeddings = feature_extraction(sentences_en)
# Get embeddings of word "man" -> in our examples is always in position 1 for GPT2
man_embeddings = []
for emb in embeddings:
man_embeddings.append(emb[0][1])
# Check if embeddings are equal
print("1st and 2nd embedding are equal?", man_embeddings[0] == man_embeddings[1])
print("1st and 3rd embedding are equal?", man_embeddings[0] == man_embeddings[2])
print("1st embedding, first values:", man_embeddings[0][0:3])
print("3rd embedding, first values:", man_embeddings[2][0:3])
```
Results:
`1st and 2nd embedding are equal? True`
`1st and 3rd embedding are equal? False`
`1st embedding, first values: [0.06501030921936035, 0.2574709951877594, -0.8581657409667969]`
`3rd embedding, first values: [0.065010204911232, 0.25747090578079224, -0.8581656217575073]`
[Colab](https://colab.research.google.com/drive/16QrjVWy65mfZ5MprV26kz_9AcB-YLSCe?pli=1#scrollTo=N3Xyo9XeG_EY), to reproduce the results.
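As a follow-up check (a minimal sketch, assuming `numpy` is installed and reusing the `man_embeddings` list from the script above), the size of the difference can be quantified:
```python
import numpy as np

# Quantify how far apart the two "man" embeddings actually are.
first = np.array(man_embeddings[0])
third = np.array(man_embeddings[2])
print(np.abs(first - third).max())           # roughly 1e-7 for the values shown above
print(np.allclose(first, third, atol=1e-5))  # True with a float32-level tolerance
```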
### Expected behavior
I would expect that the four embeddings of the word `man` are exactly equal:
`1st and 2nd embedding are equal? True`
`1st and 3rd embedding are equal? True`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25465/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25464/events
|
https://github.com/huggingface/transformers/pull/25464
| 1,846,959,134 |
PR_kwDOCUB6oc5Xu9ie
| 25,464 |
Input data format
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Adds the `input_data_format` argument to all of the image processor methods.
This allows passing in images with an unusual number of channels, or ones where the data format is difficult to infer because of ambiguity, e.g. size (3, 3, 3).
This is an alternative to #24577
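For example, a minimal usage sketch (the checkpoint name is just an example; any image processor exposing `preprocess` works the same way once this is merged):
```python
import numpy as np
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

# A tiny 3x3 RGB image in channels-first layout is ambiguous to infer automatically.
image = np.random.randint(0, 256, (3, 3, 3), dtype=np.uint8)

# Explicitly state how the input is laid out instead of relying on inference.
inputs = image_processor(image, input_data_format="channels_first", return_tensors="pt")
print(inputs["pixel_values"].shape)
```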
Fixes issues like:
* https://github.com/huggingface/transformers/issues/21981
* https://github.com/huggingface/transformers/issues/21638
* https://github.com/huggingface/transformers/issues/22577
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25464/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25464/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25464",
"html_url": "https://github.com/huggingface/transformers/pull/25464",
"diff_url": "https://github.com/huggingface/transformers/pull/25464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25464.patch",
"merged_at": 1692204302000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25463/events
|
https://github.com/huggingface/transformers/pull/25463
| 1,846,876,831 |
PR_kwDOCUB6oc5XurmL
| 25,463 |
Mark flaky tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Handles two tests which semi-regularly fail on CI runs
`tests/utils/test_image_utils.py::LoadImageTester::test_load_img_url_timeout`
- Mark with an `is_flaky` decorator (see the sketch after this list)
`tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelTest::test_labels_out_of_vocab`
- Already has an is_flaky decorator
- Upped the vocab size used when generating the random ids tensor, to increase the probability of the OOV error being hit
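For reference, this is roughly what the marking looks like (a sketch, not the exact diff):
```python
import unittest

from transformers.testing_utils import is_flaky


class LoadImageTester(unittest.TestCase):
    # Retries the test a few times before reporting a failure.
    @is_flaky()
    def test_load_img_url_timeout(self):
        ...
```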
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25463/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25463",
"html_url": "https://github.com/huggingface/transformers/pull/25463",
"diff_url": "https://github.com/huggingface/transformers/pull/25463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25463.patch",
"merged_at": 1691764005000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25462/events
|
https://github.com/huggingface/transformers/pull/25462
| 1,846,821,924 |
PR_kwDOCUB6oc5Xufpn
| 25,462 |
Add input_data_format argument, image transforms
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Adds `input_data_format` as an optional argument to all image transforms. This enables users to explicitly set the format, rather than having it inferred, which can solve issues when it is ambiguous.
Once merged, image processors can pass this along to allow more robust processing, either of images with a different number of channels or of images with a hard-to-infer data format (see the sketch below).
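A minimal sketch of the transform-level usage (assuming the low-level `resize` helper from `transformers.image_transforms`):
```python
import numpy as np
from transformers.image_transforms import resize

# A small channels-first image whose layout would otherwise be hard to infer.
image = np.zeros((3, 4, 4), dtype=np.float32)

resized = resize(image, size=(8, 8), input_data_format="channels_first")
print(resized.shape)  # stays channels-first: (3, 8, 8)
```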
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25462/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25462",
"html_url": "https://github.com/huggingface/transformers/pull/25462",
"diff_url": "https://github.com/huggingface/transformers/pull/25462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25462.patch",
"merged_at": 1691762971000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25461/events
|
https://github.com/huggingface/transformers/pull/25461
| 1,846,781,597 |
PR_kwDOCUB6oc5XuW1Z
| 25,461 |
Update run_translation.py broken link example Pytoch
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25444
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25461/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25461",
"html_url": "https://github.com/huggingface/transformers/pull/25461",
"diff_url": "https://github.com/huggingface/transformers/pull/25461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25461.patch",
"merged_at": 1691761285000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25460/events
|
https://github.com/huggingface/transformers/issues/25460
| 1,846,645,062 |
I_kwDOCUB6oc5uEZFG
| 25,460 |
ValueError: Expected input batch_size (1052) to match target batch_size (508) when fine tuning GPT 2 model
|
{
"login": "Damika-Anupama",
"id": 63784444,
"node_id": "MDQ6VXNlcjYzNzg0NDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/63784444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damika-Anupama",
"html_url": "https://github.com/Damika-Anupama",
"followers_url": "https://api.github.com/users/Damika-Anupama/followers",
"following_url": "https://api.github.com/users/Damika-Anupama/following{/other_user}",
"gists_url": "https://api.github.com/users/Damika-Anupama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damika-Anupama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damika-Anupama/subscriptions",
"organizations_url": "https://api.github.com/users/Damika-Anupama/orgs",
"repos_url": "https://api.github.com/users/Damika-Anupama/repos",
"events_url": "https://api.github.com/users/Damika-Anupama/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damika-Anupama/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can set a breakpoint at `loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.v ` at see the shape of the inputs. Then you can trace back if necessary why the shape is being those values. It's likely an issue in the datasets.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
NONE
| null |
Hello there, I'm attempting to teach a GPT-2 model to summarize passages without compromising their emotional impact. Consider summarizing a chapter from a book, but we want the reader to experience the same emotions as the chapter itself. I discovered a Kaggle dataset that includes Amazon's fine food reviews (`/kaggle/input/amazon-fine-food-reviews/Reviews.csv`).
First, I extracted features from the review text, such as processed_text (the review without stop words), sentiment, and emotion, using BERT and T5 models. Then I tokenized these features and the corresponding summaries as follows:
```
!pip install transformers torch
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
# Preprocess sentiment values to extract sentiment labels
data['sentiment'] = data['sentiment'].apply(lambda sentiment: sentiment[0]['label']) # Extracting the sentiment label from the dictionary
# Tokenize the processed_text, sentiment, and emotion
data['processed_text_tokenized'] = data['processed_text'].apply(lambda text: tokenizer.encode(text, truncation=True, padding='max_length', max_length=256))
data['sentiment_tokenized'] = data['sentiment'].apply(lambda sentiment: tokenizer.encode(sentiment, truncation=True, padding='max_length', max_length=4))
data['emotion_tokenized'] = data['emotions'].apply(lambda emotion: tokenizer.encode(emotion, truncation=True, padding='max_length', max_length=4))
# Tokenize the summaries
data['summary_tokenized'] = data['Summary'].apply(lambda summary: tokenizer.encode(summary, truncation=True, padding='max_length', max_length=128))
```
Next I created the relevant dataset and the dataloader as follows:
```
import torch
from torch.utils.data import Dataset, DataLoader
class EmotionAwareSummaryDataset(Dataset):
def __init__(self, processed_text, sentiment, emotion, summary):
self.processed_text = processed_text
self.sentiment = sentiment
self.emotion = emotion
self.summary = summary
def __len__(self):
return len(self.processed_text)
def __getitem__(self, idx):
input_ids = self.processed_text[idx] + self.sentiment[idx] + self.emotion[idx]
attention_mask = torch.ones(len(input_ids))
decoder_input_ids = self.summary[idx]
decoder_attention_mask = torch.ones(len(decoder_input_ids))
# Calculate the loss
labels = torch.tensor(decoder_input_ids).clone()
labels[labels == tokenizer.pad_token_id] = -100 # Ignore padding tokens for loss calculation
return {
"input_ids": input_ids,
"attention_mask": attention_mask,
"decoder_input_ids": decoder_input_ids,
"decoder_attention_mask": decoder_attention_mask,
"labels": labels # Add this line to calculate the loss
}
# Create datasets and dataloaders
train_dataset = EmotionAwareSummaryDataset(
processed_text=data['processed_text_tokenized'].tolist(),
sentiment=data['sentiment_tokenized'].tolist(),
emotion=data['emotion_tokenized'].tolist(),
summary=data['summary_tokenized'].tolist()
)
train_dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True)
```
Finally I finetuned the GPT 2 model as follows:
```
from transformers import GPT2LMHeadModel, GPT2Config, Trainer, TrainingArguments
# Load GPT-2 model configuration
config = GPT2Config.from_pretrained("gpt2", output_hidden_states=True)
# Load GPT-2 model and add a linear layer for summarization
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
model.resize_token_embeddings(len(tokenizer))
model.train()
output = "./emotion_aware_summary"
# Define the training arguments
training_args = TrainingArguments(
output_dir= output,
overwrite_output_dir=True,
num_train_epochs=5,
per_device_train_batch_size=4,
save_steps=500,
save_total_limit=2,
)
# Define the trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset
)
# Start fine-tuning
trainer.train()
model.save_pretrained(output)
tokenizer.save_pretrained(output)
```
But when I run the notebook I'm getting the following error:
```
/opt/conda/lib/python3.10/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:29 │
│ │
│ 26 ) │
│ 27 │
│ 28 # Start fine-tuning │
│ ❱ 29 trainer.train() │
│ 30 model.save_pretrained(output) │
│ 31 tokenizer.save_pretrained(output) │
│ 32 │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1645 in train │
│ │
│ 1642 │ │ inner_training_loop = find_executable_batch_size( │
│ 1643 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1644 │ │ ) │
│ ❱ 1645 │ │ return inner_training_loop( │
│ 1646 │ │ │ args=args, │
│ 1647 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1648 │ │ │ trial=trial, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1938 in _inner_training_loop │
│ │
│ 1935 │ │ │ │ │ self.control = self.callback_handler.on_step_begin(args, self.state, │
│ 1936 │ │ │ │ │
│ 1937 │ │ │ │ with self.accelerator.accumulate(model): │
│ ❱ 1938 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1939 │ │ │ │ │
│ 1940 │ │ │ │ if ( │
│ 1941 │ │ │ │ │ args.logging_nan_inf_filter │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2759 in training_step │
│ │
│ 2756 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │
│ 2757 │ │ │
│ 2758 │ │ with self.compute_loss_context_manager(): │
│ ❱ 2759 │ │ │ loss = self.compute_loss(model, inputs) │
│ 2760 │ │ │
│ 2761 │ │ if self.args.n_gpu > 1: │
│ 2762 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2784 in compute_loss │
│ │
│ 2781 │ │ │ labels = inputs.pop("labels") │
│ 2782 │ │ else: │
│ 2783 │ │ │ labels = None │
│ ❱ 2784 │ │ outputs = model(**inputs) │
│ 2785 │ │ # Save past state if it exists │
│ 2786 │ │ # TODO: this needs to be fixed and made cleaner later. │
│ 2787 │ │ if self.args.past_index >= 0: │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py:1113 in │
│ forward │
│ │
│ 1110 │ │ │ shift_labels = labels[..., 1:].contiguous() │
│ 1111 │ │ │ # Flatten the tokens │
│ 1112 │ │ │ loss_fct = CrossEntropyLoss() │
│ ❱ 1113 │ │ │ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.v │
│ 1114 │ │ │
│ 1115 │ │ if not return_dict: │
│ 1116 │ │ │ output = (lm_logits,) + transformer_outputs[1:] │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py:1174 in forward │
│ │
│ 1171 │ │ self.label_smoothing = label_smoothing │
│ 1172 │ │
│ 1173 │ def forward(self, input: Tensor, target: Tensor) -> Tensor: │
│ ❱ 1174 │ │ return F.cross_entropy(input, target, weight=self.weight, │
│ 1175 │ │ │ │ │ │ │ ignore_index=self.ignore_index, reduction=self.reduction, │
│ 1176 │ │ │ │ │ │ │ label_smoothing=self.label_smoothing) │
│ 1177 │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:3029 in cross_entropy │
│ │
│ 3026 │ │ ) │
│ 3027 │ if size_average is not None or reduce is not None: │
│ 3028 │ │ reduction = _Reduction.legacy_get_string(size_average, reduce) │
│ ❱ 3029 │ return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(re │
│ 3030 │
│ 3031 │
│ 3032 def binary_cross_entropy( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Expected input batch_size (1052) to match target batch_size (508).
```
I feel the batch sizes of my inputs and targets do not match, but I can't find where the mismatch originates. If you can help me, that would be great! Furthermore, if you have suggestions beyond my method, please share them. Thanks in advance.
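My current guess (not verified yet) is that the numbers line up with my tokenization lengths: the inputs are 256 + 4 + 4 = 264 tokens, so after the internal shift the logits cover 263 positions per sample (4 × 263 = 1052), while the 128-token summaries give 127 shifted label positions per sample (4 × 127 = 508). Since `GPT2LMHeadModel` is decoder-only, `labels` would need the same length as `input_ids`. If that is right, a sketch of a corrected `__getitem__` (my assumption about the intended setup, not tested end to end) would be:
```python
def __getitem__(self, idx):
    prompt_ids = self.processed_text[idx] + self.sentiment[idx] + self.emotion[idx]
    summary_ids = self.summary[idx]

    # One sequence: prompt followed by target summary (264 + 128 = 392 tokens).
    input_ids = prompt_ids + summary_ids
    # Labels aligned with input_ids; prompt and padding are ignored by the loss.
    labels = [-100] * len(prompt_ids) + [
        (-100 if t == tokenizer.pad_token_id else t) for t in summary_ids
    ]

    return {
        "input_ids": torch.tensor(input_ids),
        "attention_mask": torch.ones(len(input_ids), dtype=torch.long),
        "labels": torch.tensor(labels),
    }
```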
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25460/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25459/events
|
https://github.com/huggingface/transformers/pull/25459
| 1,846,635,575 |
PR_kwDOCUB6oc5Xt2yl
| 25,459 |
Marian: post-hack-fix correction
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
### Context
#25294 removes a Marian-only hack, where the `logits` of the `pad_token_id` were being set to `-inf` before the `logsoftmax`. This was perceived as safe to do, as `bad_word_ids` is set to the `pad_token_id` in Marian, and `bad_word_ids` sets the scores (i.e. after `logsoftmax`) to `-inf`.
### Why was that a problem?
The changed position of the operation (setting to `-inf` before/after `logsoftmax`) meant that the log probabilities after the `logsoftmax` were NOT summing to 1, i.e. not normalized. Since beam search selects the beams with the highest scores, the higher `logits[:, pad_token_id]` was before being set to `-inf`, the larger the distribution shift observed due to the lack of normalization.
This issue materializes as minor output differences, as seen in our CI, and minor performance degradation, as discussed in #4290
### Do we have to revert the PR and reintroduce the hack?
No 🙌 Long after the hack in #4290 was introduced, we have added a flag to renormalize the scores after applying the logits processors, which happens after the `logsoftmax`. Applying this flag is equivalent to applying the hack.
⚠️ This means that Marian's generation configs should be updated with this flag as well, in addition to this PR (assuming it is accepted :) )
👉 all `slow` tests are back to green after this change.
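For illustration, a minimal sketch of what enabling the flag at generation time looks like (the checkpoint is just an example; updating the generation configs on the Hub has the same effect):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer(["How are you?"], return_tensors="pt")
# Renormalizes the scores after the logits processors run (i.e. after the pad token
# has been banned), so the beam scores are proper log probabilities again.
outputs = model.generate(**inputs, num_beams=4, renormalize_logits=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```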
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25459/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25459",
"html_url": "https://github.com/huggingface/transformers/pull/25459",
"diff_url": "https://github.com/huggingface/transformers/pull/25459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25459.patch",
"merged_at": 1692182970000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25458/events
|
https://github.com/huggingface/transformers/pull/25458
| 1,846,575,910 |
PR_kwDOCUB6oc5Xtpmd
| 25,458 |
Switch Transformers: remove overwritten beam sample test
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
#25375 added a fix to beam sample + `num_return_sequences >1`, and had to tweak the corresponding tests. Switch Transformers had those tests overwritten, meaning its beam sample tests were not updated -- causing our slow CI to fail.
The cause for overwriting the tests was flakiness of the original tests with this model. However, running with `--flake-finder --flake-runs=1000` on CPU and GPU, I couldn't find any flakiness. As such, this PR simply removes the overwritten tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25458/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25458/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25458",
"html_url": "https://github.com/huggingface/transformers/pull/25458",
"diff_url": "https://github.com/huggingface/transformers/pull/25458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25458.patch",
"merged_at": 1691756161000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25457/events
|
https://github.com/huggingface/transformers/pull/25457
| 1,846,565,643 |
PR_kwDOCUB6oc5XtnXM
| 25,457 |
More utils doc
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
This continues the work of cleaning up the scripts used for quality checks, and comes with a couple of fixes, mainly:
- the instructions for releases in `setup.py` weren't necessarily up to date, so those are fixed.
- the script that auto-generates the transformers metadata used by the Hub didn't give the right processing class for `ImageProcessor`s.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25457/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25457",
"html_url": "https://github.com/huggingface/transformers/pull/25457",
"diff_url": "https://github.com/huggingface/transformers/pull/25457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25457.patch",
"merged_at": 1692251915000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25456
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25456/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25456/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25456/events
|
https://github.com/huggingface/transformers/pull/25456
| 1,846,561,072 |
PR_kwDOCUB6oc5XtmWT
| 25,456 |
Rebase HF
|
{
"login": "vahanhov",
"id": 32771381,
"node_id": "MDQ6VXNlcjMyNzcxMzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32771381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vahanhov",
"html_url": "https://github.com/vahanhov",
"followers_url": "https://api.github.com/users/vahanhov/followers",
"following_url": "https://api.github.com/users/vahanhov/following{/other_user}",
"gists_url": "https://api.github.com/users/vahanhov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vahanhov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vahanhov/subscriptions",
"organizations_url": "https://api.github.com/users/vahanhov/orgs",
"repos_url": "https://api.github.com/users/vahanhov/repos",
"events_url": "https://api.github.com/users/vahanhov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vahanhov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,691 | 1,691 | 1,691 |
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25456/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25456",
"html_url": "https://github.com/huggingface/transformers/pull/25456",
"diff_url": "https://github.com/huggingface/transformers/pull/25456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25456.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25455/events
|
https://github.com/huggingface/transformers/issues/25455
| 1,846,510,194 |
I_kwDOCUB6oc5uD4Jy
| 25,455 |
The length used by length_penalty during beam_search is not correct when input batch size > 1
|
{
"login": "xiafan-su",
"id": 1536185,
"node_id": "MDQ6VXNlcjE1MzYxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1536185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiafan-su",
"html_url": "https://github.com/xiafan-su",
"followers_url": "https://api.github.com/users/xiafan-su/followers",
"following_url": "https://api.github.com/users/xiafan-su/following{/other_user}",
"gists_url": "https://api.github.com/users/xiafan-su/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiafan-su/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiafan-su/subscriptions",
"organizations_url": "https://api.github.com/users/xiafan-su/orgs",
"repos_url": "https://api.github.com/users/xiafan-su/repos",
"events_url": "https://api.github.com/users/xiafan-su/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiafan-su/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"Hi @xiafan-su 👋 \r\n\r\nThis property is present in all length-related forms of processing. I understand (and agree) that considering the unpadded length would be better, but correcting this property would add significant complexity across the codebase. It is also simple to work around (avoid creating batches with large length mismatches, when relying on length-related features). \r\n\r\nSince we are currently focusing on other structural projects for `generate`, I'm keeping this issue open, but placing it low on our priority list. \r\n\r\nI won't be accepting PRs to fix it now, but may consider it in the future.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"(should not have been closed)"
] | 1,691 | 1,697 | null |
NONE
| null |
### System Info
Python 3.9.2
transformers 4.30.2
### Who can help?
@gante
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import BloomTokenizerFast, BloomForCausalLM
import torch
tokenizer = BloomTokenizerFast.from_pretrained('bigscience/bloomz-7b1')
tokenizer.padding_side = "left"
model = BloomForCausalLM.from_pretrained('bigscience/bloomz-7b1')
model = model.cuda().half().eval()
def predict(model, text):
token_out = tokenizer(text, padding=True, return_tensors="pt")
input_ids = token_out.input_ids.cuda()
attention_mask = token_out.attention_mask.cuda()
generate_kwargs = dict(max_new_tokens=128, do_sample=False, num_beams=2, length_penalty=3.0)
ori_out = model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
print('ori input shape: ', input_ids.shape)
print('ori output shape: ', ori_out.shape)
return tokenizer.batch_decode(ori_out, skip_special_tokens=True)
text = ["Explain backpropagation in neural networks."]
baseline = predict(model, text)[0]
text = ["Explain backpropagation in neural networks.", "What is 1 + 1?"]
good_case = predict(model, text)[0]
text = ["Explain backpropagation in neural networks.", "The Basilica of the Sacred heart at Notre Dame is beside to which structure? The Basilica of the Sacred heart at Notre Dame is beside to which structure? The Basilica of the Sacred heart at Notre Dame is beside to which structure?"]
bad_case = predict(model, text)[0]
print(f'baseline: \n{baseline}\ngood_case: \n{good_case}\nbad_case: \n{bad_case}\n')
```
The script results:
```
baseline:
Explain backpropagation in neural networks. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero
good_case:
Explain backpropagation in neural networks. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer, and then back to the output layer again, until the error is reduced to zero
bad_case:
Explain backpropagation in neural networks. backpropagation is the process by which the error is propagated backwards through the network from the output layer to the input layer.
```
### Expected behavior
In the `add` method of [BeamHypotheses](https://github.com/huggingface/transformers/blob/347001237a8ff845fc23f678107fc505361f9f13/src/transformers/generation/beam_search.py#L938), `hyp.shape[-1]` is used as the sequence length for the length penalty, but with batched inputs that length may include padding tokens.
In such cases, the same input can yield different results depending on what it is batched with. As demonstrated in the reproduction script, pairing the input with an exceedingly long peer changes the padding length, which leads to a different length penalty and ultimately produces significantly different results.
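For illustration, here is a minimal sketch of the kind of fix I have in mind — computing the effective length from the pad token instead of `hyp.shape[-1]`. This is only my own sketch with made-up function names, not the actual `transformers` implementation:
```python
import torch

def effective_length(hyp: torch.Tensor, pad_token_id: int) -> int:
    # hyp is a 1-d tensor of token ids; left padding inflates hyp.shape[-1]
    return int((hyp != pad_token_id).sum())

def length_penalized_score(sum_logprobs: float, hyp: torch.Tensor, pad_token_id: int, length_penalty: float) -> float:
    # Same scoring rule as BeamHypotheses.add, but with padding tokens excluded from the length
    return sum_logprobs / (effective_length(hyp, pad_token_id) ** length_penalty)

hyp = torch.tensor([3, 3, 3, 101, 42, 57])  # 3 = pad token id, left-padded prompt + generation
print(length_penalized_score(-4.2, hyp, pad_token_id=3, length_penalty=3.0))
```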
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25455/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25454/events
|
https://github.com/huggingface/transformers/pull/25454
| 1,846,470,026 |
PR_kwDOCUB6oc5XtSRx
| 25,454 |
Fix for #25437
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
We need to make some extra adjustments to make the code below in `.circleci/create_circleci_config.py` work:
```python
if example_tests == "all":
job.tests_to_run = [f"examples/{framework}"]
```
This also aligns with the code at the end of `utils/tests_fetcher.py`:
```python
if commit_flags["test_all"]:
...
with open(example_file, "w", encoding="utf-8") as f:
f.write("all")
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25454/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25454",
"html_url": "https://github.com/huggingface/transformers/pull/25454",
"diff_url": "https://github.com/huggingface/transformers/pull/25454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25454.patch",
"merged_at": 1691746797000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25453/events
|
https://github.com/huggingface/transformers/issues/25453
| 1,846,162,423 |
I_kwDOCUB6oc5uCjP3
| 25,453 |
Why exist_ok=True is not supported in AutoConfig.register(model_type, config)
|
{
"login": "CheungZeeCn",
"id": 2025362,
"node_id": "MDQ6VXNlcjIwMjUzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2025362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CheungZeeCn",
"html_url": "https://github.com/CheungZeeCn",
"followers_url": "https://api.github.com/users/CheungZeeCn/followers",
"following_url": "https://api.github.com/users/CheungZeeCn/following{/other_user}",
"gists_url": "https://api.github.com/users/CheungZeeCn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CheungZeeCn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CheungZeeCn/subscriptions",
"organizations_url": "https://api.github.com/users/CheungZeeCn/orgs",
"repos_url": "https://api.github.com/users/CheungZeeCn/repos",
"events_url": "https://api.github.com/users/CheungZeeCn/events{/privacy}",
"received_events_url": "https://api.github.com/users/CheungZeeCn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It's an oversight, feel free to open a PR to fix this!"
] | 1,691 | 1,693 | 1,693 |
NONE
| null |
https://github.com/huggingface/transformers/blob/55db70c63de2c07b6ffe36f24c0e7df8f967e935/src/transformers/models/auto/configuration_auto.py#L1035C38-L1035C38
Here is my code:
```
···
from libs.models.custom_layoutlmv3 import LayoutLMv3Model
···
encoder_model_path = "/home/ana/data4/models/layoutlmv3-base"
decoder_model_path = "/home/ana/data4/models/bert-base-uncased"
processor = LayoutLMv3Processor.from_pretrained(encoder_model_path, apply_ocr=False)
decoder_tokenizer = BertTokenizerFast.from_pretrained(decoder_model_path)
model = MyEncoderDecoderModelv3.from_encoder_decoder_pretrained(
encoder_model_path, decoder_model_path
)
model.config.decoder_start_token_id = decoder_tokenizer.cls_token_id
model.config.eos_token_id = decoder_tokenizer.sep_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
model.config.max_length = 32
model.config.min_length = 5
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4
print(model)
```
When I try to replace the original LayoutLMv3 with my custom LayoutLMv3 like this:
```
# in libs.models.custom_layoutlmv3.__init__.py :
AutoConfig.register("layoutlmv3", LayoutLMv3Config)
AutoModel.register(LayoutLMv3Config, LayoutLMv3Model, exist_ok=True)
AutoModelForTokenClassification.register(LayoutLMv3Config, LayoutLMv3ForTokenClassification, exist_ok=True)
AutoModelForQuestionAnswering.register(LayoutLMv3Config, LayoutLMv3ForQuestionAnswering, exist_ok=True)
AutoModelForSequenceClassification.register(LayoutLMv3Config, LayoutLMv3ForSequenceClassification, exist_ok=True)
```
it fails with:
```
Traceback (most recent call last):
File "/home/ana/data4/projects/MyMLLM/examples/try_encoder_decoder_models_basic_709.py", line 35, in <module>
from libs.models.custom_layoutlmv3 import LayoutLMv3Model
File "/home/ana/data4/projects/MyMLLM/libs/models/custom_layoutlmv3/__init__.py", line 16, in <module>
AutoConfig.register("layoutlmv3", LayoutLMv3Config)
File "/home/ana/data4/installed_repos/transformers/src/transformers/models/auto/configuration_auto.py", line 1049, in register
CONFIG_MAPPING.register(model_type, config)
File "/home/ana/data4/installed_repos/transformers/src/transformers/models/auto/configuration_auto.py", line 753, in register
raise ValueError(f"'{key}' is already used by a Transformers config, pick another name.")
ValueError: 'layoutlmv3' is already used by a Transformers config, pick another name.
```
Why doesn't `AutoConfig.register("layoutlmv3", LayoutLMv3Config)` support `exist_ok` like `AutoModel.register` does?
What should I do?
transformers source:
```
@staticmethod
def register(model_type, config):
"""
Register a new configuration for this class.
Args:
model_type (`str`): The model type like "bert" or "gpt".
config ([`PretrainedConfig`]): The config to register.
"""
if issubclass(config, PretrainedConfig) and config.model_type != model_type:
raise ValueError(
"The config you are passing has a `model_type` attribute that is not consistent with the model type "
f"you passed (config has {config.model_type} and you passed {model_type}. Fix one of those so they "
"match!"
)
CONFIG_MAPPING.register(model_type, config)
```
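For reference, here is a rough sketch of what I was hoping for — the same `exist_ok` escape hatch that `AutoModel.register` already has. This is only my sketch of how it could look (it also assumes `CONFIG_MAPPING.register` is extended to accept `exist_ok`), not the current transformers source:
```python
# Hypothetical variant of AutoConfig.register with an exist_ok flag (not the real implementation).
@staticmethod
def register(model_type, config, exist_ok=False):
    if issubclass(config, PretrainedConfig) and config.model_type != model_type:
        raise ValueError(
            "The config you are passing has a `model_type` attribute that is not consistent with the model type "
            f"you passed (config has {config.model_type} and you passed {model_type}. Fix one of those so they "
            "match!"
        )
    # exist_ok=True would allow re-registering an existing key such as "layoutlmv3"
    CONFIG_MAPPING.register(model_type, config, exist_ok=exist_ok)
```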
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25453/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25452/events
|
https://github.com/huggingface/transformers/issues/25452
| 1,846,127,866 |
I_kwDOCUB6oc5uCaz6
| 25,452 |
Packing without cross-contamination
|
{
"login": "ToddMorrill",
"id": 12600692,
"node_id": "MDQ6VXNlcjEyNjAwNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToddMorrill",
"html_url": "https://github.com/ToddMorrill",
"followers_url": "https://api.github.com/users/ToddMorrill/followers",
"following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}",
"gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions",
"organizations_url": "https://api.github.com/users/ToddMorrill/orgs",
"repos_url": "https://api.github.com/users/ToddMorrill/repos",
"events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToddMorrill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Also in #6661",
"Hi @ToddMorrill\r\n\r\nFor existing models, I am afraid that it's unlikely to make changes for this feature. If there are new models that support this natively in the original modeling code, that could be the case when that model is ported into `transformers`.",
"It’s a real bummer because it seems like an important feature to have."
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### Feature request
Is there something within Hugging Face that prevents later subsequences from attending to earlier subsequences when you use packing? Is there a way to implement attention masking so that subsequences only attend to tokens within their own subsequence in a packed example?
As it currently stands:
1. the attention mask fed into transformers is a 1d sequence and we need to be able to pass a 2d sequence to specify the appropriate attention mask with multiple sequences
2. this will interact with positional embeddings, because the position should be relative to the start of the example, not the sequence it's packed into
3. this will impact the loss calculation at the boundaries of examples. In particular, EOS tokens shouldn't have loss calculated for predicting the start of the next example.
4. and there may be other impacts I'm not thinking of.
There appear to be a few challenges to overcome but nevertheless it seems like an important feature to have.
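To make point 1 above concrete, here is a minimal sketch (my own illustration, not an existing transformers API) of how per-example lengths inside a packed sequence could be turned into a block-diagonal attention mask so tokens only attend within their own example:
```python
import torch

def block_diagonal_mask(example_lengths, total_length):
    # example_lengths: lengths of the examples packed into one sequence, e.g. [5, 3]
    # Returns a (total_length, total_length) boolean mask; True means attention is allowed.
    mask = torch.zeros(total_length, total_length, dtype=torch.bool)
    start = 0
    for length in example_lengths:
        mask[start:start + length, start:start + length] = True
        start += length
    return mask

# Two examples of length 5 and 3 packed into a single sequence of length 8;
# a causal mask would still be applied on top of this for decoder-only models.
print(block_diagonal_mask([5, 3], 8))
```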
### Motivation
I find it unsettling that when packing, we're simply letting later subsequences' tokens attend to earlier subsequences' tokens. Packed sequences could have nothing to do with one another and I don't want to contaminate examples. At the same time, I don't want to give up the throughput gains of packing sequences.
I suppose I could sort my dataset by length to minimize the wasted computation (i.e. pack approximately equal length examples into batches together) as a decent solution. I'm not sure if this will impact model performance in any way though.
This feature request has been raised several times: https://github.com/huggingface/trl/issues/302
https://github.com/huggingface/transformers/issues/17726
I think tensorflow implements this and GraphCore talks about it [here](https://www.graphcore.ai/posts/introducing-packed-bert-for-2x-faster-training-in-natural-language-processing) and in their [paper](https://arxiv.org/abs/2107.02027).
### Your contribution
This doesn't strike me as a "first contribution" but if someone wants to coach me, I can give it a shot.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25452/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25451/events
|
https://github.com/huggingface/transformers/pull/25451
| 1,845,946,982 |
PR_kwDOCUB6oc5XrhsV
| 25,451 |
[WIP] Add Grounding DINO
|
{
"login": "EduardoPach",
"id": 69953243,
"node_id": "MDQ6VXNlcjY5OTUzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardoPach",
"html_url": "https://github.com/EduardoPach",
"followers_url": "https://api.github.com/users/EduardoPach/followers",
"following_url": "https://api.github.com/users/EduardoPach/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions",
"organizations_url": "https://api.github.com/users/EduardoPach/orgs",
"repos_url": "https://api.github.com/users/EduardoPach/repos",
"events_url": "https://api.github.com/users/EduardoPach/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardoPach/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts for information. Please let us know when your PR is ready for review!",
"@amyeroberts Hey, I've actually started a new branch to port the model because in the branch from this PR, I've done `add-new-model-like` and then `swin`, but talking to @NielsRogge he mentioned that starting from `deformable-detr` would be easier (which was a SUPER helpful tip btw). Should I close this PR and open a new one with the correct branch? (still WIP, but almost there)",
"@EduardoPach Do whatever is easiest for you in terms of code management. If you open a new PR, just make sure to link to it in a comment on this one so it can be easily tracked. ",
"@amyeroberts opened a new PR in https://github.com/huggingface/transformers/pull/26087 as mentioned. Should I close this one?",
"@EduardoPach I can close it for you :)"
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
This PR adds Grounding DINO
Fixes #25423
To-Do's:
- [x] Port vision backbone
- [x] Port Text backbone
- [ ] Port Feature Enhancer
- [ ] Port Cross-Modality Decoder
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25451/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25451/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25451",
"html_url": "https://github.com/huggingface/transformers/pull/25451",
"diff_url": "https://github.com/huggingface/transformers/pull/25451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25451.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25450/events
|
https://github.com/huggingface/transformers/pull/25450
| 1,845,737,111 |
PR_kwDOCUB6oc5Xqz7o
| 25,450 |
Refactor image processor testers
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
The tests for the image processors contained a lot of repeated logic, in particular the logic to prepare inputs and to test the resulting outputs. Removing the class-specific logic and consolidating it in the mixin should make adding new image processor tests easier.
This PR:
* Moves `test_call_numpy`, `test_call_pil`, `test_call_pytorch` to the mixin and removes repeated logic
* Renames `ImageProcessorSavingTestMixin` -> `ImageProcessingTestMixin` as the mixin handles more than saving
To support this, two methods were added to the model tester class:
* `prepare_image_inputs`: Each model is responsible for preparing the inputs for testing. Removes the repeated `prepare_image_inputs` logic in some model testers
* `expected_output_image_shape` - method which returns the expected shape of the processed image outputs (see the sketch below)
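As a rough illustration of what a model-specific tester looks like after this refactor (a sketch based on the description above, not the actual test code — class and argument names are simplified):
```python
# Hypothetical minimal tester: the shared ImageProcessingTestMixin calls these two methods.
import numpy as np

class MyImageProcessingTester:
    def __init__(self, batch_size=4, num_channels=3, image_size=18):
        self.batch_size = batch_size
        self.num_channels = num_channels
        self.image_size = image_size

    def prepare_image_inputs(self):
        # Each tester is responsible for producing its own inputs for the mixin's call tests.
        return [
            np.random.randint(0, 256, (self.num_channels, self.image_size, self.image_size), dtype=np.uint8)
            for _ in range(self.batch_size)
        ]

    def expected_output_image_shape(self, images):
        # Shape the mixin asserts against after processing a batch.
        return (self.num_channels, self.image_size, self.image_size)
```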
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25450/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25450",
"html_url": "https://github.com/huggingface/transformers/pull/25450",
"diff_url": "https://github.com/huggingface/transformers/pull/25450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25450.patch",
"merged_at": 1691749818000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25449/events
|
https://github.com/huggingface/transformers/pull/25449
| 1,845,692,336 |
PR_kwDOCUB6oc5XqqKV
| 25,449 |
add init_include_buffers
|
{
"login": "shingjan",
"id": 11846349,
"node_id": "MDQ6VXNlcjExODQ2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11846349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shingjan",
"html_url": "https://github.com/shingjan",
"followers_url": "https://api.github.com/users/shingjan/followers",
"following_url": "https://api.github.com/users/shingjan/following{/other_user}",
"gists_url": "https://api.github.com/users/shingjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shingjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shingjan/subscriptions",
"organizations_url": "https://api.github.com/users/shingjan/orgs",
"repos_url": "https://api.github.com/users/shingjan/repos",
"events_url": "https://api.github.com/users/shingjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shingjan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,691 | 1,691 | 1,691 |
NONE
| null |
# What does this PR do?
Fixes #25448
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25449/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25449",
"html_url": "https://github.com/huggingface/transformers/pull/25449",
"diff_url": "https://github.com/huggingface/transformers/pull/25449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25449.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25448/events
|
https://github.com/huggingface/transformers/issues/25448
| 1,845,689,737 |
I_kwDOCUB6oc5uAv2J
| 25,448 |
Add `init_include_buffers` kwargs to `modeling_utils.from_pretrained`
|
{
"login": "shingjan",
"id": 11846349,
"node_id": "MDQ6VXNlcjExODQ2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11846349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shingjan",
"html_url": "https://github.com/shingjan",
"followers_url": "https://api.github.com/users/shingjan/followers",
"following_url": "https://api.github.com/users/shingjan/following{/other_user}",
"gists_url": "https://api.github.com/users/shingjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shingjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shingjan/subscriptions",
"organizations_url": "https://api.github.com/users/shingjan/orgs",
"repos_url": "https://api.github.com/users/shingjan/repos",
"events_url": "https://api.github.com/users/shingjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shingjan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sadly the arguments we had to the main method of Transformers are limited to big use cases as we don;t want this method to get to 100 kwargs. For this it would be better to define an env variable in accelerate that would switch the default of `include_buffers` which would allow you to do this change without adding a new kwarg, all-the-while preserving the existing default.",
"@sgugger I see. That makes sense. Thanks!"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### Feature request
There is a parameter of the `init_empty_weights` context manager from `accelerate`, `include_buffers`, which defines whether the buffers of an `nn.Module` should be initialized as meta tensors during loading. I propose that we expose this parameter as a kwarg in `modeling_utils`.
### Motivation
The motivation is that, now with https://github.com/huggingface/accelerate/pull/1826, users can use `torch.device` as a context in `accelerate` with `init_empty_weights`; it would be great if this argument could be exposed to `model.from_pretrained` to give users more control over `init_empty_weights`.
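A rough sketch of the usage I am imagining — note that `init_include_buffers` is only the proposed kwarg name and does not exist in `from_pretrained` today:
```python
# Hypothetical call if the kwarg were added (init_include_buffers is the proposal, not an existing argument).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    low_cpu_mem_usage=True,        # uses accelerate's init_empty_weights under the hood
    init_include_buffers=False,    # proposed: forwarded to init_empty_weights(include_buffers=...)
)
```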
### Your contribution
~Will send out a PR for this~ PR #25449 is drafted for this feature proposal.
cc: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25448/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25447/events
|
https://github.com/huggingface/transformers/pull/25447
| 1,845,639,747 |
PR_kwDOCUB6oc5Xqens
| 25,447 |
Add Number Normalisation for SpeechT5
|
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25447). All of your documentation changes will be reflected on that endpoint.",
"cc @sanchit-gandhi ",
"@sanchit-gandhi I added a single test which tests all the covered cases. I also fixed an existing test in the `tests/models/speecht5/test_tokenization_speecht5.py` file which previously was just passing the number '92000' around without normalization but now it would normalize it (see below)\r\n\r\nhttps://github.com/tanaymeh/transformers/blob/afa90898b93c58e712d6d7e3726a66d964282930/tests/models/speecht5/test_tokenization_speecht5.py#L164-L183",
"> Thanks for adding the tests! LGTM - would just request that we change the default behaviour to not normalise, and maybe add a line in the SpeechT5 tokenizer docs to highlight to users that they can use this great feature?\r\n\r\nI have a rather dumb doubt, @sanchit-gandhi: I had added the information about the `normalize` argument in the docstring of the `SpeechT5Tokenizer` class ([here](https://github.com/tanaymeh/transformers/blob/afa90898b93c58e712d6d7e3726a66d964282930/src/transformers/models/speecht5/tokenization_speecht5.py#L68-L69)). Should I add it somewhere else to be shown in the docs? Doesn't the doc builder builds the docstrings and renders them in the docs automatically?\r\n\r\nThanks!",
"@ArthurZucker I added your suggested changes ([here](https://github.com/tanaymeh/transformers/blob/682bee8d517398b8b9840808ff3a3bb106f6b9fd/src/transformers/models/speecht5/tokenization_speecht5.py#L126-L132) and [here](https://github.com/tanaymeh/transformers/blob/682bee8d517398b8b9840808ff3a3bb106f6b9fd/src/transformers/models/speecht5/tokenization_speecht5.py#L138-L146)).\r\n\r\nI also added support for comma-separated numbers [here](https://github.com/tanaymeh/transformers/blob/682bee8d517398b8b9840808ff3a3bb106f6b9fd/src/transformers/models/speecht5/number_normalizer.py#L185-L186).\r\n\r\nPlease let me know if I missing something or need to change something.",
"Thanks for your review @sanchit-gandhi and @ArthurZucker. Waiting for review from @ylacombe!",
"I am rather new to python and transformers.\r\nCan someone please provide me documentation for this? I am doing the following, but I am not sure where I can pass the normalize parameter.\r\nCurrently the audio being generated skips all the numbers given in the input text.\r\n\r\n```\r\nimport sounddevice as sd\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import SpeechT5Processor, SpeechT5HifiGan, SpeechT5ForTextToSpeech\r\n\r\nprocessor = SpeechT5Processor.from_pretrained(\"microsoft/speecht5_tts\")\r\nembeddings_dataset = load_dataset(\"Matthijs/cmu-arctic-xvectors\", split=\"validation\")\r\nspeaker_embeddings = torch.tensor(embeddings_dataset[7306][\"xvector\"]).unsqueeze(0)\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\n\r\n\r\ndef synthesise(text):\r\n inputs = processor(text=text, return_tensors=\"pt\")\r\n vocoder = SpeechT5HifiGan.from_pretrained(\"microsoft/speecht5_hifigan\").to(device)\r\n model = SpeechT5ForTextToSpeech.from_pretrained(\"microsoft/speecht5_tts\").to(device)\r\n speech = model.generate_speech(\r\n inputs[\"input_ids\"].to(device), speaker_embeddings.to(device), vocoder=vocoder\r\n )\r\n return speech.cpu()\r\n\r\n\r\ndef playback_speech(text):\r\n audio = synthesise(text)\r\n sd.play(audio.numpy(), samplerate=16000)\r\n sd.wait()\r\n\r\n\r\nplayback_speech(\"Numbers from one through ten : 1 2 3 4 5 6 7 8 9 10\")\r\n```",
"@ramkrishna757575 You need to turn `normalize=True` in the `processor`. By default, the number normalization is turned off to ensure backwards compatibility since it's a breaking change.\r\n\r\nInside the `synthesise()` function, you can do:\r\n```python\r\n# ... Rest of the code\r\ndef synthesise(text):\r\n inputs = processor(text=text, return_tensors=\"pt\", normalize=True)\r\n# ... Rest of the code\r\n```",
"> \r\n\r\nThanks for the quick reply @tanaymeh.\r\nI tried it, but I got this warning:\r\n\r\n`Keyword arguments {'normalize': True} not recognized.`\r\n\r\nI checked the version of `transformers` that I have installed and it is currently `4.32.0`\r\n\r\nI tried doing this\r\n\r\n```\r\nprocessor = SpeechT5Processor.from_pretrained(\"microsoft/speecht5_tts\", normalize=True)\r\n```\r\n\r\nThis stops throwing the warning, but also does not speak out the numerals in the input text",
"@ramkrishna757575 You need to install the latest version of the transformers (that is not yet released), using:\r\n`pip install git+https://github.com/huggingface/transformers`, in this, you will be able to use this setting. \r\n",
"Thanks a lot @tanaymeh.\r\nThis seemed to work.\r\nThe issue was the incorrect version of transformers installed.\r\nAfter I installed the latest unreleased version as you stated, it seems to work.\r\n\r\nThanks again :smiley: ",
"Thanks for helping out here @tanaymeh! Here's the parameter in the docs (on `main`) @ramkrishna757575: https://huggingface.co/docs/transformers/main/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.normalize",
"> Thanks for helping out here @tanaymeh! Here's the parameter in the docs (on `main`) @ramkrishna757575: https://huggingface.co/docs/transformers/main/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.normalize\r\n\r\ngot it...thanks @sanchit-gandhi "
] | 1,691 | 1,693 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR will add number normalisation for SpeechT5. Currently, SpeechT5 cannot read numbers, as described in #23480.
Fixes #23480
# Cases covered so far
- [x] Integer number normalisation
- [x] Floating point number normalisation
- [x] Currency normalisation
- [x] US Dollar `$`
- [x] cents `¢`
- [x] Euro `€`
- [x] Sterling Pound `£`
- [x] Yen `¥`
- [x] Other major currency symbols
- [x] Percents `%`
- [x] Negative numbers (starting with `-`)
- [x] Numbers from 0 to Decillion
- [x] Any combination of those stated and ticked above
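For illustration, a minimal usage sketch of the flag added here (normalisation stays off by default for backwards compatibility):
```python
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
# normalize=True expands numerals, currencies and percents into words before tokenization
inputs = processor(text="The ticket costs $101.", return_tensors="pt", normalize=True)
```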
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25447/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25447",
"html_url": "https://github.com/huggingface/transformers/pull/25447",
"diff_url": "https://github.com/huggingface/transformers/pull/25447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25447.patch",
"merged_at": 1692684778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25446/events
|
https://github.com/huggingface/transformers/issues/25446
| 1,845,617,183 |
I_kwDOCUB6oc5uAeIf
| 25,446 |
llama 2 weights from fb (in bfloat16) are perhaps accidentally cast to float16 in conversion script?
|
{
"login": "jmhessel",
"id": 178075,
"node_id": "MDQ6VXNlcjE3ODA3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/178075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmhessel",
"html_url": "https://github.com/jmhessel",
"followers_url": "https://api.github.com/users/jmhessel/followers",
"following_url": "https://api.github.com/users/jmhessel/following{/other_user}",
"gists_url": "https://api.github.com/users/jmhessel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmhessel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmhessel/subscriptions",
"organizations_url": "https://api.github.com/users/jmhessel/orgs",
"repos_url": "https://api.github.com/users/jmhessel/repos",
"events_url": "https://api.github.com/users/jmhessel/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmhessel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! \r\nThe weights were pushed as `float16` as this is what is used for `inference`. We are going to add a line mentioning that the training was done un `bfloat16` but there should not be issues with performances when training no. There is an issue with training in `float16` as it was reported here #25065, which is expected. \r\nNot by that default, if you use `LlamaXXXX` the dtype will be torch's default `float32`. ",
"Hey @ArthurZucker ! Thanks for the reply :-) When you say \"The weights were pushed as float16 as this is what is used for inference\" --- is this what meta used in the llama2 paper for their results? I guess I am wondering why not also do bfloat16 for inference, particularly because of the potential fine-tuning issue when fine-tuning in float16 ? I could probably just do the conversion myself and host them on my huggingface, but just wondering the rationale, if possible",
"If you look at [this](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L94) or just try running a model you'll see that it is in `fp16`. Main reason is that it's faster and should not really induce performance loss. But training was done in `bf16`. ",
"Aha! Gotcha :-) seems like if the official implementation does it this way, good to have it this way in huggingface. Maybe I'll do a bfloat16 conversion on my own and do some experiments, but probably not a big deal either way. Thanks!",
"FYI, for future readers, something related to this bfloat conversion was made:\r\nhttps://github.com/huggingface/transformers/commit/015f8e110d270a0ad42de4ae5b98198d69eb1964#diff-110a445233a8b15a0875998eeaf75cb8607b38a5daa736291dd058766879bbddL259-R273\r\n\r\nIt isn't clear to me what this change actually does, but it looks like the codellama weights are in bfloat16 https://huggingface.co/codellama/CodeLlama-7b-hf\r\n\r\nnot sure if this was intended but it might be worth trying the conversion for the original models in bfloat16 :-) (which I might try)",
"Once and for all, the `dtype` of the checkpoints on the hub is only used if you set `torch_dtype = \"auto\"` when you initialise the checkpoints. Otherwise, the `torch_dtype` will be used to cast the checkpoints from the initialization type (so torch's `float32`) to this `torch_dtype` (only when you are using the `auto` API. \r\nThe reason why we used the `torch_dtype = torch.floa16` is because that the inference dtype, and thus for most common usages were you just want something to work out of the box, the type that should be used. \r\n",
"Following up on this, so what is the recommended dtype for llama2 **inference**? I assumed it's `torch.float16`, given this thread and also I've always been working with the assumption that the dtype in `config.json` is the recommendation. However, (1) I saw NaN issues inferencing with `torch.float16`, which went away after switching to `torch.bfloat16`; (2) the config for codellama specifies `torch.bfloat16`, as Jack pointed out above."
] | 1,691 | 1,696 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi there!
credit to @dirkgr and @jacob-morrison for finding this LoC !
The [facebook -> huggingface conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L259) for llama/llama2 appears to cast weights to float16. While llama1 was distributed in fp16:
```python
loaded_llama1 = torch.load("llama1/7B/consolidated.00.pth", map_location="cuda:0")
loaded_llama1['layers.4.feed_forward.w2.weight'].dtype
torch.float16
```
llama2 seems to be bfloat16
```python
loaded_llama2 = torch.load("Llama-2-7b/consolidated.00.pth", map_location="cuda:0")
loaded_llama2['layers.4.feed_forward.w2.weight'].dtype
torch.bfloat16
```
The casting differences are small in both absolute and percent terms (here's a random weight)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
facebook_model = torch.load("Llama-2-7b/consolidated.00.pth", map_location="cuda:0")
huggingface_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
from_huggingface = huggingface_model.get_parameter('model.layers.19.self_attn.v_proj.weight').cuda()
from_facebook = facebook_model['layers.19.attention.wv.weight']
...
print(torch.mean((torch.abs(from_facebook - from_huggingface) / torch.abs(from_facebook))*100))
# > tensor(0.0006, device='cuda:0', dtype=torch.float32, grad_fn=<MeanBackward0>)
print((from_facebook - from_huggingface).abs().max())
# > tensor(2.9802e-08, device='cuda:0', dtype=torch.float32, grad_fn=<MaxBackward1>)
```
but, in theory, this could lead to a small performance degradation (e.g., https://github.com/facebookresearch/llama/issues/634).
### Expected behavior
I think llama2 should probably be saved in bfloat16 rather than cast to float16 and then saved in huggingface format.
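For completeness, a minimal sketch of a user-side workaround — re-saving the HF checkpoint in bfloat16 (this does not recover precision already lost in the fp16 round-trip; the proper fix is to keep bfloat16 inside the conversion script itself). It assumes access to the gated meta-llama repo:
```python
import torch
from transformers import AutoModelForCausalLM

# Cast the fp16 checkpoint to bf16 and re-save it locally.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)
model.save_pretrained("llama-2-7b-bf16")
```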
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25446/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 9
}
|
https://api.github.com/repos/huggingface/transformers/issues/25446/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25445
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25445/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25445/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25445/events
|
https://github.com/huggingface/transformers/pull/25445
| 1,845,598,720 |
PR_kwDOCUB6oc5XqVqS
| 25,445 |
Reuse the cache created for latest `main` on PRs/branches if `setup.py` is not modified
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger FYI: I need to change 2 more places to make it work correctly for all cases. See my last 2 review comments.",
"Hi @sgugger \r\n\r\nI am afraid I have to revert this PR until we do something to enabling sharing cache (see below at the end). [From this section](\r\nhttps://circleci.com/docs/caching/#caching-and-open-source)\r\n\r\n> PRs from the same fork repo share a cache (this includes, as previously stated, that PRs in the main repo share a cache with main).\r\n> Two PRs in different fork repos have different caches.\r\n\r\nThe cache is never shared between PRs/branches from different forks. This might explains the question I posted about why the same cache key could be found sometimes but not other times. \r\n\r\nThe PR description of #24886 is partially valid as **lhoestq** created a branch in `transformers` rather than a PR from his own forked repo.\r\n\r\nThe way to share cache is\r\n\r\n> Enabling the sharing of[ environment variables](https://circleci.com/docs/env-vars/) allows cache sharing between the original repo and all forked builds.\r\n\r\nBut we need to be careful if we have sensitive env. variables or not."
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
For a PR or a branch, if `setup.py` is not modified (compared to the common ancestor with `main`), let's use the cache created for the `setup.py` from the **latest** commit on the `main` branch.
**latest** means the latest commit on `main` at the moment a run is triggered.
Motivation:
- avoid having cache for `main` and `pull` (for most cases), which is introduced in #24886 to avoid unexpected/undesired edge cases.
- **avoid creating and storing caches for PRs/branches that still have a (very) old `setup.py` because they haven't been rebased on a more recent `main`**
With this PR, we expect the storage usage for `cache` could be reduced dramatically 🤞 .
[The artifact in this job run](https://app.circleci.com/pipelines/github/huggingface/transformers/70312/workflows/b93db7c8-40ac-4925-ac2a-ea9feac9b926/jobs/881950/artifacts) shows that `generated_config.txt` uses the explicit checksum `v0.7-pipelines_torch-main-pip-9RXs1YQ8L2beP4cdAfRDkWX0VRTtWaQodDVKzvyJwPI=` rather than `{{ checksum "setup.py" }}`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25445/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25445",
"html_url": "https://github.com/huggingface/transformers/pull/25445",
"diff_url": "https://github.com/huggingface/transformers/pull/25445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25445.patch",
"merged_at": 1691757652000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25444/events
|
https://github.com/huggingface/transformers/issues/25444
| 1,845,533,289 |
I_kwDOCUB6oc5uAJpp
| 25,444 |
Access to Transformers example PyTorch link broken. Impact on navigation as well
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"For the redirects, you will have to see with the datasets team on their repo :-)\r\nAs for Transformers, happy to look at a PR fixing the link!"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
Hello there! 👋
## Context
Exploring Pytorch translation example
## Issue
There seems to be a broken link in the PyTorch translation example, similar to what happened in #24497
https://github.com/huggingface/transformers/blob/347001237a8ff845fc23f678107fc505361f9f13/examples/pytorch/translation/run_translation.py#L381
When I follow the link, the following message appears:
<img width="1154" alt="Captura de pantalla 2023-08-10 a las 18 19 10" src="https://github.com/huggingface/transformers/assets/24204714/9cc8450f-85bc-4be2-934b-b687dcb8c4f9">
When I click the "here" link, it redirects me to https://huggingface.co/docs/transformers/main/en/examples with a 404 error.
## Potential fix
1. Is it OK if I submit a quick fix as in #24594?
2. Would it be hard to submit a contribution along these lines? ...
> specify doc redirects using a `redirects.yml` like in datasets and other libraries: https://github.com/huggingface/datasets/blob/main/docs/source/_redirects.yml
Do you know if this entails more than creating the source folder and the .yaml file? Would I have to study [this](https://github.com/huggingface/datasets/tree/main/docs#generating-the-documentation)?
@sgugger
Keep up the good work!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25444/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25443/events
|
https://github.com/huggingface/transformers/issues/25443
| 1,845,526,234 |
I_kwDOCUB6oc5uAH7a
| 25,443 |
Load T5 model in 8 bit fails
|
{
"login": "PansaLegrand",
"id": 119485913,
"node_id": "U_kgDOBx812Q",
"avatar_url": "https://avatars.githubusercontent.com/u/119485913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PansaLegrand",
"html_url": "https://github.com/PansaLegrand",
"followers_url": "https://api.github.com/users/PansaLegrand/followers",
"following_url": "https://api.github.com/users/PansaLegrand/following{/other_user}",
"gists_url": "https://api.github.com/users/PansaLegrand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PansaLegrand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PansaLegrand/subscriptions",
"organizations_url": "https://api.github.com/users/PansaLegrand/orgs",
"repos_url": "https://api.github.com/users/PansaLegrand/repos",
"events_url": "https://api.github.com/users/PansaLegrand/events{/privacy}",
"received_events_url": "https://api.github.com/users/PansaLegrand/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @SunMarc and @younesbelkada ",
"I am facing the same issue as well, anybody found a solution to this?",
"Thanks for reporting this issue @leejielong and @PansaLegrand . To solve this, please install the main branch of transformers `pip install git+https://github.com/huggingface/transformers.git`. I was able to reproduce the issue with the latest release v4.31.0 but the issue is solved in the main branch. ",
"Hi @SunMarc, installing transformers from source worked for me too. Thanks!"
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### System Info
I use free google Colab with T4 GPU
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am running the colab created for [DeepFloyd](https://huggingface.co/blog/if),
and I cannot load the T5 encoder in 8 bits.
```
from transformers import T5EncoderModel
text_encoder = T5EncoderModel.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0",
subfolder="text_encoder",
device_map="auto",
load_in_8bit=True,
variant="8bit"
)
```
The error message is :
```
You are loading your model in 8bit or 4bit but no linear modules were found in your model. Please double check your model architecture, or submit an issue on github if you think this is a bug.
RuntimeError Traceback (most recent call last)
[<ipython-input-66-186cdabda356>](https://localhost:8080/#) in <cell line: 3>()
1 from transformers import T5EncoderModel
2
----> 3 text_encoder = T5EncoderModel.from_pretrained(
4 "DeepFloyd/IF-I-XL-v1.0",
5 subfolder="text_encoder",
4 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/parameter.py](https://localhost:8080/#) in __new__(cls, data, requires_grad)
34 # For ease of BC maintenance, keep this path for standard Tensor.
35 # Eventually (tm), we should change the behavior for standard Tensor to match.
---> 36 return torch.Tensor._make_subclass(cls, data, requires_grad)
37
38 # Path for custom tensors: set a flag on the instance to indicate parameter-ness.
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
I cannot see where the problem comes from.
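For reference, here is a minimal sketch of the workaround suggested in the comments above (installing `transformers` from the main branch and retrying the load), plus a hypothetical sanity check that the linear layers were really converted to 8-bit. The check is only an illustration and is not part of the original report:
```python
# Workaround sketch (assumes transformers installed from source):
#   pip install git+https://github.com/huggingface/transformers.git
import bitsandbytes as bnb
from transformers import T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    subfolder="text_encoder",
    device_map="auto",
    load_in_8bit=True,
    variant="8bit",
)

# Hypothetical sanity check: count the 8-bit linear modules after loading.
n_8bit = sum(isinstance(m, bnb.nn.Linear8bitLt) for m in text_encoder.modules())
print(f"{n_8bit} Linear8bitLt modules found")
```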
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25443/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25442/events
|
https://github.com/huggingface/transformers/pull/25442
| 1,845,428,295 |
PR_kwDOCUB6oc5Xpwl5
| 25,442 |
[`Idefics`] add image_embeddings option in generate-related methods
|
{
"login": "leot13",
"id": 17809020,
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leot13",
"html_url": "https://github.com/leot13",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"repos_url": "https://api.github.com/users/leot13/repos",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Computing the encoder hidden states (i.e. the vision/perceiver hidden states in our case) inside the prepare... function seems curious to me.\r\n\r\nYes I'm not sure it is very standard. \r\nMaybe a more standard way to do it would be to do an encoder_decoder setting with 'encoder_outputs'. \r\nThat would require adding another input 'encoder_outputs' on top of the image_encoder_embeddings/perceiver_embeddings + a get_encoder() function that would get the encoder and compute the embeddings. \r\nI think it would add a lot of unnecessary logic, so using _prepare_inputs() to prepare the inputs made more sense to me",
"And if it’s not too much work, a test calling generate to make sure this all works well 🤗 ",
"I think there's a test here` tests/models/idefics/test_modeling_idefics.py`. Do you have something else in mind?\r\n",
"Okay `class IdeficsModelIntegrationTest(TestCasePlus)` already tests it thanks for checking! Feel free to merge "
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Update Idefics generate-related functions to allow for precomputed image embeddings
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25442/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25442",
"html_url": "https://github.com/huggingface/transformers/pull/25442",
"diff_url": "https://github.com/huggingface/transformers/pull/25442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25442.patch",
"merged_at": 1692370011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25441/events
|
https://github.com/huggingface/transformers/pull/25441
| 1,845,361,265 |
PR_kwDOCUB6oc5XphwY
| 25,441 |
docs: add LLaMA-Efficient-Tuning to awesome-transformers
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25441). All of your documentation changes will be reflected on that endpoint."
] | 1,691 | 1,694 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Kindly asking to add [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) to the list of awesome projects that have transformers support
## Who can review?
cc @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25441/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25441",
"html_url": "https://github.com/huggingface/transformers/pull/25441",
"diff_url": "https://github.com/huggingface/transformers/pull/25441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25441.patch",
"merged_at": 1691680420000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25440/events
|
https://github.com/huggingface/transformers/issues/25440
| 1,845,342,718 |
I_kwDOCUB6oc5t_bH-
| 25,440 |
Lower Training performance after updating from 4.29.2 to 4.30.0+
|
{
"login": "waterhorse1",
"id": 27195540,
"node_id": "MDQ6VXNlcjI3MTk1NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/27195540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waterhorse1",
"html_url": "https://github.com/waterhorse1",
"followers_url": "https://api.github.com/users/waterhorse1/followers",
"following_url": "https://api.github.com/users/waterhorse1/following{/other_user}",
"gists_url": "https://api.github.com/users/waterhorse1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waterhorse1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waterhorse1/subscriptions",
"organizations_url": "https://api.github.com/users/waterhorse1/orgs",
"repos_url": "https://api.github.com/users/waterhorse1/repos",
"events_url": "https://api.github.com/users/waterhorse1/events{/privacy}",
"received_events_url": "https://api.github.com/users/waterhorse1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @ArthurZucker @pacman100 ",
"@pacman100 and @muellerzr would be nice to check this out",
"@waterhorse1 this may have been some stuff we had to fix throughout the Accelerate integration with transformers potentially. Regardless, after running through `gsm8k-ScRel` on 4xA100, I was able to get an ending score of `35.78468536770281`, which I believe should solve your issue.\r\n\r\nThis was ran on `transformers` and `accelerate` main. If this seems acceptable to you, let me know!",
"Sure, thx for that."
] | 1,691 | 1,699 | 1,699 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.2
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The codebase is https://github.com/OFA-Sys/gsm8k-ScRel.
1. Run train.py to train the llama model; the only modification is changing this line https://github.com/OFA-Sys/gsm8k-ScRel/blob/f4d01761ec03d88a39486399c4617d29ee1dca7f/train.py#L264 to use model_args.model_name_or_path. Here is the training script I am using:
```
export MODEL_PATH="huggyllama/llama-7b"
export SAVE_PATH="/data/ziyu/rft_model_xidong/llama1-7b-sft-test-430"
export MASTER_ADDR="localhost"
export MASTER_PORT="1231"
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=8 --use_env train.py \
--model_name_or_path $MODEL_PATH \
--data_path data/train_use.jsonl \
--bf16 True \
--output_dir $SAVE_PATH \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 200 \
--save_total_limit 40 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--cache_dir "/data/ziyu/hf_cache/huggingface/hub"
```
2. After training the model, test it with https://github.com/OFA-Sys/gsm8k-ScRel/blob/main/test_7b_13b.sh.
```
sh test_7b_13b.sh /data/ziyu/rft_model_xidong/llama1-7b-sft-test-430
```
3. Run the evaluation code here: https://github.com/OFA-Sys/gsm8k-ScRel/blob/main/eval.py; you need to modify this line https://github.com/OFA-Sys/gsm8k-ScRel/blob/f4d01761ec03d88a39486399c4617d29ee1dca7f/eval.py#L146 to point to your model path, which is /data/ziyu/rft_model_xidong/llama1-7b-sft-test-430 in our case.
### Expected behavior
We observe that with accelerate=0.21.0, torch=2.0.1, transformers=4.29.2, we can get 0.35+ performance on llama1 finetuning on gsm8k, but with transformers=4.30.0 and beyond, we can only get 0.32 performance using exactly the same code. For llama2 sft it is similar, I can get 0.41 with transformers=4.29.2 but just 0.36 with transformers=4.30.0. Any reasons for that?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25440/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/25440/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25439/events
|
https://github.com/huggingface/transformers/issues/25439
| 1,845,313,465 |
I_kwDOCUB6oc5t_T-5
| 25,439 |
Allow Loading Custom Model Names by Searching for Extension
|
{
"login": "VitorHugoOli",
"id": 37223412,
"node_id": "MDQ6VXNlcjM3MjIzNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/37223412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VitorHugoOli",
"html_url": "https://github.com/VitorHugoOli",
"followers_url": "https://api.github.com/users/VitorHugoOli/followers",
"following_url": "https://api.github.com/users/VitorHugoOli/following{/other_user}",
"gists_url": "https://api.github.com/users/VitorHugoOli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VitorHugoOli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VitorHugoOli/subscriptions",
"organizations_url": "https://api.github.com/users/VitorHugoOli/orgs",
"repos_url": "https://api.github.com/users/VitorHugoOli/repos",
"events_url": "https://api.github.com/users/VitorHugoOli/events{/privacy}",
"received_events_url": "https://api.github.com/users/VitorHugoOli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"In general `from_pretrained` is only intended to work with models saved with `save_pretrained`, not with other models, so this is not a feature we plan to add.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
NONE
| null |
### Feature request
I propose that in `from_pretrained`, instead of relying on a set of specific filenames, the loading function should search for the model by its file extension. By doing so, the model names can be more flexible, allowing users to name their custom models as they see fit without the need to conform to a predefined naming scheme.
**Example:**
```txt
Directory: ./input/model/clip/vit-large-patch14
Files: config.json, preprocessor_config.json, vit-large-patch14.safetensors
```
In this case, the model could be loaded by searching for the .safetensors extension rather than expecting a specific filename like model.safetensors.
### Motivation
Currently, in `from_pretrained`, if a folder contains a custom model with a name that doesn't match one of the predefined names, it cannot simply be passed to the loading function. Instead, the file name of the model must be manually changed to match one of the allowed names. This behavior can be restrictive, especially when dealing with multiple custom models with varying names.
**Benefit:**
This change would enhance the usability of the library, making it more adaptable to various use-cases and workflows involving custom models. It would also streamline the process of loading custom models, eliminating the need to rename files to match a predetermined pattern.
### Your contribution
I will make changes to the files `utils/modeling_utils.py` and `utils/__init__.py` to enhance the way models are loaded.
1. In utils/__init__.py: I will define a set of predefined extensions that the loading function can recognize. This list of extensions will be used to identify the model files within a given directory.
2. In utils/modeling_utils.py: I will modify the function _add_variant to accept a variant parameter, which can either be the name of the file or a directive to search for the first file with one of the predefined extensions.
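As an illustration of the proposed behavior (the names below are hypothetical and not part of the actual `transformers` internals), the lookup could boil down to globbing for a known weights extension:
```python
# Illustrative sketch only; find_weights_file and WEIGHTS_EXTENSIONS are
# hypothetical names, not part of the transformers API.
from pathlib import Path

WEIGHTS_EXTENSIONS = (".safetensors", ".bin")

def find_weights_file(model_dir: str) -> Path:
    """Return the first file in `model_dir` whose extension looks like model weights."""
    for ext in WEIGHTS_EXTENSIONS:
        matches = sorted(Path(model_dir).glob(f"*{ext}"))
        if matches:
            return matches[0]
    raise FileNotFoundError(f"No file with extension {WEIGHTS_EXTENSIONS} found in {model_dir}")

# e.g. find_weights_file("./input/model/clip/vit-large-patch14")
# would pick up vit-large-patch14.safetensors from the example above.
```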
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25439/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25438/events
|
https://github.com/huggingface/transformers/pull/25438
| 1,845,210,534 |
PR_kwDOCUB6oc5XpArV
| 25,438 |
[ASR Pipeline] Fix init with timestamps
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually instead of moving parameter checking into logic of the pipeline, can we keep things inside `_sanitize_parameters` which is the proper location.\r\n\r\nMaybe there are better way to make `self.type` available before hitting `_sanitizer_parameters` no ? ",
"The goal of those checks is to raise errors early rather than later. and also splitting checking from actual inference logic.",
"> Maybe there are better way to make self.type available before hitting _sanitizer_parameters no ?\r\n\r\nYes - to achieve this we have to override the `__init__` method of the pipeline. See the latest commit for how this design would look. If you're happy with this, I can tidy it up and complete the PR this way. Otherwise, we can keep the checks in the `_forward` method, since they'll still be done before any model computations.\r\n\r\nIMO the key for me is that these checks are performed **before** the forward pass, rather than after the forward pass. Before https://github.com/huggingface/transformers/pull/25344, they were done **after** the forward pass in the `postprocess` method, which is wasteful if the settings are incorrect but we still run the model computations:\r\nhttps://github.com/huggingface/transformers/blob/5b7ffd5492e13b10fcfe282b1ab99097a6e0230a/src/transformers/pipelines/automatic_speech_recognition.py#L500\r\nSo this design of moving them to the `_forward` is still an improvement over the old one.\r\n\r\n> also splitting checking from actual inference logic.\r\n\r\nCurrently, there are numerous additional checks that are performed either in the forward pass or in the post-processing. See example above, and check in the forward pass below:\r\nhttps://github.com/huggingface/transformers/blob/5b7ffd5492e13b10fcfe282b1ab99097a6e0230a/src/transformers/pipelines/automatic_speech_recognition.py#L447-L448\r\n\r\nHappy to do a big refactor and move all of these to `_sanitize_paramters`, which we can do **if** the attribute `self.type` is set (as per the latest commit).",
"Thanks for fixing it !"
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
The PR #25344 added checks for the setting of `return_timestamps` in the method `_sanitize_parameters` - this works when the argument `return_timestamps` is passed in the forward pass:
```python
from transformers import pipeline
import numpy as np
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
dummy_speech = np.ones(100)
pipe(dummy_speech, return_timestamps=True)
```
But fails when the argument is passed in the init (thanks to @ydshieh for flagging):
```python
from transformers import pipeline
import numpy as np
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-tiny", return_timestamps=True)
```
<details>
<summary> Traceback: </summary>
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ <ipython-input-2-d59778d5549a>:1 in <module> │
│ │
│ /Users/sanchitgandhi/transformers/src/transformers/pipelines/__init__.py:993 in pipeline │
│ │
│ 990 │ if device is not None: │
│ 991 │ │ kwargs["device"] = device │
│ 992 │ │
│ ❱ 993 │ return pipeline_class(model=model, framework=framework, task=task, **kwargs) │
│ 994 │
│ │
│ /Users/sanchitgandhi/transformers/src/transformers/pipelines/automatic_speech_recognition.py:202 │
│ in __init__ │
│ │
│ 199 │ │ decoder: Optional[Union["BeamSearchDecoderCTC", str]] = None, │
│ 200 │ │ **kwargs, │
│ 201 │ ): │
│ ❱ 202 │ │ super().__init__(**kwargs) │
│ 203 │ │ self.feature_extractor = feature_extractor │
│ 204 │ │ │
│ 205 │ │ if self.model.config.model_type == "whisper": │
│ │
│ /Users/sanchitgandhi/transformers/src/transformers/pipelines/base.py:822 in __init__ │
│ │
│ 819 │ │ self.call_count = 0 │
│ 820 │ │ self._batch_size = kwargs.pop("batch_size", None) │
│ 821 │ │ self._num_workers = kwargs.pop("num_workers", None) │
│ ❱ 822 │ │ self._preprocess_params, self._forward_params, self._postprocess_params = self._ │
│ 823 │ │ │
│ 824 │ │ if self.image_processor is None and self.feature_extractor is not None: │
│ 825 │ │ │ if isinstance(self.feature_extractor, BaseImageProcessor): │
│ │
│ /Users/sanchitgandhi/transformers/src/transformers/pipelines/automatic_speech_recognition.py:326 │
│ in _sanitize_parameters │
│ │
│ 323 │ │ │ postprocess_params["decoder_kwargs"] = decoder_kwargs │
│ 324 │ │ if return_timestamps is not None: │
│ 325 │ │ │ # Check whether we have a valid setting for return_timestamps and throw an e │
│ ❱ 326 │ │ │ if self.type == "seq2seq" and return_timestamps: │
│ 327 │ │ │ │ raise ValueError("We cannot return_timestamps yet on non-CTC models apar │
│ 328 │ │ │ if self.type == "ctc_with_lm" and return_timestamps != "word": │
│ 329 │ │ │ │ raise ValueError("CTC with LM can only predict word level timestamps, se │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'AutomaticSpeechRecognitionPipeline' object has no attribute 'type'
```
</details>
=> this is because we call `_sanitize_parameters` in the `__init__`, but **before** the attribute `self.type` is set, so these checks fail
This PR moves these `return_timestamps` checks to the `_forward` method. They are safe to perform here (we have set `self.type`), and are still performed **before** we perform any forward pass, so the error is thrown earlier rather than later.
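For clarity, here is a simplified sketch of the pattern described above (validating the timestamp settings against `self.type` before any model computation); it is illustrative only and not the exact pipeline code:
```python
# Simplified illustration of the check described above, not the real pipeline code.
def _forward(self, model_inputs, return_timestamps=False, **generate_kwargs):
    if return_timestamps:
        if self.type == "seq2seq":
            raise ValueError("We cannot return_timestamps yet on non-CTC models apart from Whisper!")
        if self.type == "ctc_with_lm" and return_timestamps != "word":
            raise ValueError("CTC with LM can only predict word level timestamps, set `return_timestamps='word'`")
    # Only now do we run the (potentially expensive) forward pass.
    return self.model(**model_inputs, **generate_kwargs)
```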
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25438/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25438",
"html_url": "https://github.com/huggingface/transformers/pull/25438",
"diff_url": "https://github.com/huggingface/transformers/pull/25438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25438.patch",
"merged_at": 1692205460000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25437/events
|
https://github.com/huggingface/transformers/pull/25437
| 1,845,195,639 |
PR_kwDOCUB6oc5Xo9aY
| 25,437 |
Add `examples` to tests to run when `setup.py` is modified
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
#25095 modified `setup.py`, but the `examples` job was not triggered, see [here](https://app.circleci.com/pipelines/github/huggingface/transformers/70003/workflows/9d952427-a622-429d-8ad1-155b4523048b)
We should include it, right?
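To make the intent concrete, here is a toy version of the rule (the real logic lives in the repo's test-fetching utilities and the CircleCI config; the function below is made up for illustration):
```python
# Toy illustration of the rule this PR adds; not the actual tests_fetcher code.
def jobs_to_run(modified_files):
    jobs = {"tests"}
    if "setup.py" in modified_files:
        # A setup.py change can affect the example dependencies, so run them too.
        jobs.add("examples")
    return sorted(jobs)

print(jobs_to_run(["setup.py"]))                     # ['examples', 'tests']
print(jobs_to_run(["src/transformers/trainer.py"]))  # ['tests']
```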
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25437/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25437",
"html_url": "https://github.com/huggingface/transformers/pull/25437",
"diff_url": "https://github.com/huggingface/transformers/pull/25437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25437.patch",
"merged_at": 1691678525000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25436/events
|
https://github.com/huggingface/transformers/pull/25436
| 1,845,147,153 |
PR_kwDOCUB6oc5Xoy0u
| 25,436 |
Fix issue with ratio evaluation steps and auto find batch size
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Modifies how ratio-based steps are resolved when `auto_find_batch_size` is used; otherwise the old absolute step would be kept (so if we started from 10% of 100 steps, at 1000 steps it would still try to evaluate at step 10 instead of step 100)
Fixes # (issue)
Solves #24248
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @amyeroberts
Conversation from the original PR (https://github.com/huggingface/transformers/pull/25390), which had to be re-opened due to rebase shenanigans.
Sylvain:
> Thanks for the fix! There is normally a test that checks the training arguments have not been changed. I'm guessing it didn't kick in with a float value for those ;-)
> Might be worth using a logic that does not change the training arguments and use that test to avoid future regression. In general the training arguments are not supposed to be modified outside of the post init, to allow users to be able to re-use them. So here we should store (in the Trainer state if needed but I think this is all contained to one method?) the logging_steps/eval_steps etc. once converted.
Note: the silent fail on the test will now show up via https://github.com/huggingface/transformers/pull/25435
Amy:
> Thanks for fixing this!
> Just to make sure I've understood correctly before approving - is this right:
> Previously when using auto_find_batch_size, the evaluation step number would be wrong if one of the step ratios < 1
> This was because in the first _inner_training_loop call, the args were overridden so that they were absolute rather than relative
> This meant that in the next call to _inner_training_loop the logic checking for relative values was skipped.
> The solution is to store the absolute values in the TrainerState rather than modify the trainer arguments
> The part I think I'm missing is why this is triggered in the auto_find_batch_size case
Sylvain:
> I think more code should be migrated to look at the state. Are those the only places we look at logging_steps and co? The state should always be filled, not just when the variables are floats.
> This is triggered by auto_find_batch_size since this decorator calls the training loop several times with different batch sizes. Except that at the second run, the values of logging_steps and others are wrong since they were modified in the first run, and the number of steps per epoch has changed with the batch size change.
Myself:
> correct, those are the only places. (There are references in the tensorflow class, however I'm unsure if they need the migration or not).
> What other aspects of the trainer should we look for when determining if it should go into the state?
Sylvain:
> Thanks for iterating! We're almost there.
> In terms of design for the Trainer: training arguments should be frozen after post init (exactly for this kind of bug, and there were others in hyperparameter search as well) whereas state contains the thing that can change depending on the training run. Does that make sense?
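In spirit, the agreed design boils down to resolving a ratio-valued step argument against `max_steps` once per training run and keeping the resolved value on the state, leaving the `TrainingArguments` untouched. A hedged, simplified sketch (not the actual Trainer code):
```python
# Simplified sketch of the idea discussed above, not the actual Trainer code.
import math

def resolve_steps(value, max_steps):
    """Treat a value < 1 as a ratio of max_steps, otherwise as an absolute step count."""
    if value is not None and value < 1:
        return math.ceil(value * max_steps)
    return value

# First run of the inner loop (batch size A -> 100 total steps):
print(resolve_steps(0.1, max_steps=100))    # 10
# auto_find_batch_size retries with a smaller batch size (-> 1000 total steps);
# because the original 0.1 was never overwritten, it resolves correctly:
print(resolve_steps(0.1, max_steps=1000))   # 100
```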
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25436/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25436/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25436",
"html_url": "https://github.com/huggingface/transformers/pull/25436",
"diff_url": "https://github.com/huggingface/transformers/pull/25436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25436.patch",
"merged_at": 1691680053000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25435/events
|
https://github.com/huggingface/transformers/pull/25435
| 1,845,115,964 |
PR_kwDOCUB6oc5Xor9G
| 25,435 |
Make training args fully immutable
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger can you give it one final look please 😄 "
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR ensures that the `TrainingArguments` are a fully immutable dataclass after the `__post_init__` has been run. We'll find that the tests suddenly fail now 😉 Should be merged after https://github.com/huggingface/transformers/pull/25390
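Roughly, "fully immutable after `__post_init__`" means something like the toy dataclass below (an illustration only, not the actual implementation):
```python
# Toy illustration of freezing a dataclass after __post_init__;
# not the actual TrainingArguments implementation.
from dataclasses import dataclass

@dataclass
class Args:
    eval_steps: float = 0.1

    def __post_init__(self):
        self._frozen = True

    def __setattr__(self, name, value):
        if getattr(self, "_frozen", False):
            raise AttributeError(f"Cannot modify {name!r} after __post_init__")
        super().__setattr__(name, value)

args = Args()
try:
    args.eval_steps = 100
except AttributeError as e:
    print(e)  # Cannot modify 'eval_steps' after __post_init__
```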
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25435/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25435",
"html_url": "https://github.com/huggingface/transformers/pull/25435",
"diff_url": "https://github.com/huggingface/transformers/pull/25435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25435.patch",
"merged_at": 1692114468000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25434/events
|
https://github.com/huggingface/transformers/issues/25434
| 1,845,000,936 |
I_kwDOCUB6oc5t-Hro
| 25,434 |
Bug when re-initializing CUDA in forked subprocess
|
{
"login": "cassianlewis",
"id": 131266258,
"node_id": "U_kgDOB9L20g",
"avatar_url": "https://avatars.githubusercontent.com/u/131266258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cassianlewis",
"html_url": "https://github.com/cassianlewis",
"followers_url": "https://api.github.com/users/cassianlewis/followers",
"following_url": "https://api.github.com/users/cassianlewis/following{/other_user}",
"gists_url": "https://api.github.com/users/cassianlewis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cassianlewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cassianlewis/subscriptions",
"organizations_url": "https://api.github.com/users/cassianlewis/orgs",
"repos_url": "https://api.github.com/users/cassianlewis/repos",
"events_url": "https://api.github.com/users/cassianlewis/events{/privacy}",
"received_events_url": "https://api.github.com/users/cassianlewis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"CC @SunMarc @younesbelkada, `transformers` may need a solution similar to what I had to do here: https://github.com/huggingface/accelerate/pull/1813",
"Sorry for the goose-chase, this is indeed accelerate: https://github.com/huggingface/accelerate/pull/1833"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### System Info
```
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
@younesbelkada @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Following 3 cells in Jupyter Noteboook:
```
from transformers import AutoModelForSeq2SeqLM
from accelerate import notebook_launcher
import torch
torch.cuda.is_initialized()
```
output: `False`
```
def run():
model = AutoModelForSeq2SeqLM.from_pretrained(
't5-small',
load_in_8bit = 'True',
cache_dir='model_cache')
print(model)
```
```
notebook_launcher(run, num_processes = 2)
```
Note: runs fine if I remove the entire `load_in_8bit = 'True',` line.
### Expected behavior
Following error:
```
---------------------------------------------------------------------------
ProcessRaisedException Traceback (most recent call last)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/accelerate/launchers.py:137, in notebook_launcher(function, args, num_processes, mixed_precision, use_port)
136 try:
--> 137 start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
138 except ProcessRaisedException as e:
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/multiprocessing/spawn.py:197, in start_processes(fn, args, nprocs, join, daemon, start_method)
196 # Loop on join until it returns True or raises an exception.
--> 197 while not context.join():
198 pass
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/multiprocessing/spawn.py:160, in ProcessContext.join(self, timeout)
159 msg += original_trace
--> 160 raise ProcessRaisedException(msg, error_index, failed_process.pid)
ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/accelerate/utils/launch.py", line 543, in __call__
self.launcher(*args)
File "/tmp/ipykernel_30342/662585938.py", line 4, in run
model = AutoModelForSeq2SeqLM.from_pretrained(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
trust_remote_code = resolve_trust_remote_code(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2330, in from_pretrained
device_map = {"": torch.cuda.current_device()}
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/cuda/__init__.py", line 679, in current_device
_lazy_init()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/cuda/__init__.py", line 235, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 notebook_launcher(run, num_processes = 2)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/accelerate/launchers.py:140, in notebook_launcher(function, args, num_processes, mixed_precision, use_port)
138 except ProcessRaisedException as e:
139 if "Cannot re-initialize CUDA in forked subprocess" in e.args[0]:
--> 140 raise RuntimeError(
141 "CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. "
142 "This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
143 "Please review your imports and test them when running the `notebook_launcher()` to identify "
144 "which one is problematic."
145 ) from e
147 else:
148 # No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
149 if is_mps_available():
RuntimeError: CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. This likely stems from an outside import causing issues once the `notebook_launcher()` is called. Please review your imports and test them when running the `notebook_launcher()` to identify which one is problematic.
```
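For reference, a hedged workaround sketch consistent with the note above that the plain (non-8-bit) load works: keep anything that may touch CUDA inside the launched function, and fail fast if CUDA was already initialized in the parent process. The actual fix for the 8-bit path landed in `accelerate` (see the comments above):
```python
# Workaround sketch only; it sidesteps load_in_8bit, which this report says is the trigger.
import torch
from accelerate import notebook_launcher

def run():
    # Import and instantiate inside the worker so CUDA is only touched here.
    from transformers import AutoModelForSeq2SeqLM

    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", cache_dir="model_cache")
    print(type(model).__name__)

assert not torch.cuda.is_initialized(), (
    "CUDA is already initialized in the parent process; restart the kernel before launching."
)
notebook_launcher(run, num_processes=2)
```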
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25434/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25433/events
|
https://github.com/huggingface/transformers/pull/25433
| 1,844,952,733 |
PR_kwDOCUB6oc5XoH5t
| 25,433 |
🌐 [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean
|
{
"login": "0525hhgus",
"id": 47289574,
"node_id": "MDQ6VXNlcjQ3Mjg5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0525hhgus",
"html_url": "https://github.com/0525hhgus",
"followers_url": "https://api.github.com/users/0525hhgus/followers",
"following_url": "https://api.github.com/users/0525hhgus/following{/other_user}",
"gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions",
"organizations_url": "https://api.github.com/users/0525hhgus/orgs",
"repos_url": "https://api.github.com/users/0525hhgus/repos",
"events_url": "https://api.github.com/users/0525hhgus/events{/privacy}",
"received_events_url": "https://api.github.com/users/0525hhgus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25433). All of your documentation changes will be reflected on that endpoint.",
"LGTM!",
"@0525hhgus May you please update the status to `Open` and tag our wonderful reviewers at Hugging Face? Thank you.",
"Documentation privew throws a 500 error. \r\nMay you please review this PR? It looks fine in my github codespace. \r\nThank you!\r\n@sgugger, @ArthurZucker, @eunseojo\r\n\r\nhttps://moon-ci-docs.huggingface.co/docs/transformers/pr_25433/ko/perf_train_tpu_tf\r\n<img width=\"1280\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/47289574/c1843192-d45d-4033-be43-c281c860886f\">"
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `perf_train_tpu_tf.md` file of the documentation to Korean.
Thank you in advance for your review!
(I closed https://github.com/huggingface/transformers/pull/24896 and opened a new one.)
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! Please remove this when practicing with the PseudoLab repo! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, requesting a review from the PseudoLab team members, once all the checks above are complete! -->
Team PseudoLab, may you please review this PR?
@kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below, requesting a review from the Hugging Face staff, after the review with the PseudoLab team members is finished! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25433/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/25433/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25433",
"html_url": "https://github.com/huggingface/transformers/pull/25433",
"diff_url": "https://github.com/huggingface/transformers/pull/25433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25433.patch",
"merged_at": 1692392914000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25432/events
|
https://github.com/huggingface/transformers/pull/25432
| 1,844,947,805 |
PR_kwDOCUB6oc5XoGz6
| 25,432 |
Fix rendering for `torch.compile()` docs
|
{
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
These are the docs I've written previously; apparently I forgot to put a newline between the table and the header, so they're not being rendered: https://huggingface.co/docs/transformers/main/en/perf_torch_compile#reduce-overhead
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25432/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25432",
"html_url": "https://github.com/huggingface/transformers/pull/25432",
"diff_url": "https://github.com/huggingface/transformers/pull/25432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25432.patch",
"merged_at": 1691666701000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25431/events
|
https://github.com/huggingface/transformers/pull/25431
| 1,844,737,052 |
PR_kwDOCUB6oc5XnYKf
| 25,431 |
[Time series Informer] fix dtype of cumsum
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue when training Informer with FP16, where `cumsum` returns float32.
See report here: https://discuss.huggingface.co/t/how-to-train-on-multiple-gpus-the-informer-model-for-time-series-forecasting/48984/3
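For context, a minimal sketch of the dtype mismatch and the usual fix — this is an illustration under autocast, not the exact patch in this PR:
```python
import torch

def cumsum_same_dtype(values: torch.Tensor, dim: int) -> torch.Tensor:
    # cumsum is one of the ops autocast may promote to float32; casting the
    # result back keeps every intermediate in the model's (fp16) dtype.
    return values.cumsum(dim=dim).to(values.dtype)

if torch.cuda.is_available():
    with torch.autocast("cuda", dtype=torch.float16):
        v = torch.randn(2, 8, 16, device="cuda", dtype=torch.float16)
        print(v.cumsum(dim=-2).dtype)              # typically torch.float32 under autocast
        print(cumsum_same_dtype(v, dim=-2).dtype)  # torch.float16
```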
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25431/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25431",
"html_url": "https://github.com/huggingface/transformers/pull/25431",
"diff_url": "https://github.com/huggingface/transformers/pull/25431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25431.patch",
"merged_at": 1692361636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25430/events
|
https://github.com/huggingface/transformers/issues/25430
| 1,844,706,967 |
I_kwDOCUB6oc5t8_6X
| 25,430 |
Potential security issue
|
{
"login": "psmoros",
"id": 17127410,
"node_id": "MDQ6VXNlcjE3MTI3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/17127410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psmoros",
"html_url": "https://github.com/psmoros",
"followers_url": "https://api.github.com/users/psmoros/followers",
"following_url": "https://api.github.com/users/psmoros/following{/other_user}",
"gists_url": "https://api.github.com/users/psmoros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psmoros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmoros/subscriptions",
"organizations_url": "https://api.github.com/users/psmoros/orgs",
"repos_url": "https://api.github.com/users/psmoros/repos",
"events_url": "https://api.github.com/users/psmoros/events{/privacy}",
"received_events_url": "https://api.github.com/users/psmoros/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can report the security issue on huntr.dev, we monitor this website (cc @coyotte508 ).",
"Hey @sgugger I'm actually from huntr.dev; sorry for reaching we already have your email :) See you!",
"Ah ah, no problem!",
"Send it to security@huggingface :)"
] | 1,691 | 1,692 | 1,691 |
NONE
| null |
Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@b3ef) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25430/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25429/events
|
https://github.com/huggingface/transformers/pull/25429
| 1,844,556,303 |
PR_kwDOCUB6oc5XmxHk
| 25,429 |
[`NllbMoe`] Update code to properly support loss computation
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25429). All of your documentation changes will be reflected on that endpoint."
] | 1,691 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
The code works in `Switch` because we decided to only return non-`None` outputs, but it makes more sense to return `None` when the layer is not sparse. Otherwise, finding the index of the layer that produced a given output might be impossible (a short sketch of this convention follows the TODO list below).
fixes #24898
- TODO:
- [x] Update Switch Transformers code as well
- [x] Add some tests
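As an illustration of the convention this PR moves to — the names below are made up, not the actual NllbMoe code — a `None` placeholder for every dense layer keeps list position aligned with layer index, and the auxiliary loss simply skips the placeholders:
```python
from typing import List, Optional, Tuple
import torch

def aggregate_router_loss(
    router_outputs: List[Optional[Tuple[torch.Tensor, torch.Tensor]]],
) -> torch.Tensor:
    total = torch.tensor(0.0)
    for layer_idx, out in enumerate(router_outputs):
        if out is None:  # dense layer: nothing to regularize
            continue
        router_probs, _expert_index = out
        # stand-in for the real load-balancing loss of sparse layer `layer_idx`
        total = total + router_probs.mean()
    return total

outputs = [None, (torch.rand(4, 8), torch.randint(0, 8, (4,))), None]
print(aggregate_router_loss(outputs))
```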
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25429/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25429",
"html_url": "https://github.com/huggingface/transformers/pull/25429",
"diff_url": "https://github.com/huggingface/transformers/pull/25429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25429.patch",
"merged_at": 1692285716000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25428/events
|
https://github.com/huggingface/transformers/issues/25428
| 1,844,514,402 |
I_kwDOCUB6oc5t8Q5i
| 25,428 |
Images in "Efficient Inference on a Single GPU" don't load
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @stevhliu and @MKhalusova ",
"I see that the images were not stored in a HF dataset and are likely taken from a third-party blog(?) https://s3.amazonaws.com/moonup/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png\r\nhttps://s3.amazonaws.com/moonup/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif\r\nAdded initially by @younesbelkada, do you remember what the images were supposed to be? ",
"Hi,\r\nThe fix is simply to replace `https://s3.amazonaws.com/moonup` by `https://cdn-uploads.huggingface.co` (originally pointed out by @coyotte508 internally), I made #25561 that should fix it for images of transformers repo. "
] | 1,691 | 1,692 | 1,692 |
MEMBER
| null |
### System Info
NA
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In https://huggingface.co/docs/transformers/perf_infer_gpu_one there are some unrendered images/gifs
### Expected behavior
-
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25428/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25427/events
|
https://github.com/huggingface/transformers/pull/25427
| 1,844,501,225 |
PR_kwDOCUB6oc5XmlJd
| 25,427 |
[`SwitchTransformers`] Remove unused module
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #25347
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25427/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25427",
"html_url": "https://github.com/huggingface/transformers/pull/25427",
"diff_url": "https://github.com/huggingface/transformers/pull/25427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25427.patch",
"merged_at": 1692284621000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25426/events
|
https://github.com/huggingface/transformers/issues/25426
| 1,844,368,991 |
I_kwDOCUB6oc5t7tZf
| 25,426 |
InstructBlip generate function
|
{
"login": "Elvisambition",
"id": 75023175,
"node_id": "MDQ6VXNlcjc1MDIzMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/75023175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elvisambition",
"html_url": "https://github.com/Elvisambition",
"followers_url": "https://api.github.com/users/Elvisambition/followers",
"following_url": "https://api.github.com/users/Elvisambition/following{/other_user}",
"gists_url": "https://api.github.com/users/Elvisambition/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Elvisambition/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Elvisambition/subscriptions",
"organizations_url": "https://api.github.com/users/Elvisambition/orgs",
"repos_url": "https://api.github.com/users/Elvisambition/repos",
"events_url": "https://api.github.com/users/Elvisambition/events{/privacy}",
"received_events_url": "https://api.github.com/users/Elvisambition/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
open
| false | null |
[] |
[
"Hey, the error you encountered is usually from trying to access an embedding with a value that is outside of range. You can move everything to CPU to make sure that the error is from there. I suspect that the `padding = True` is padding with the `[PAD]` token that has index `32000`, but the model.config has `pad_token_id=-1` in the text config and `0` in the `qformer_config`. \r\nMake sure they match! ",
"thank you very much ~",
"@Elvisambition I'm getting the same error right now. Which number did you end up setting all three pad_token_ids to in order to get this to work?",
"@ArthurZucker I dug into this and your hunch was correct. The default config for InstructBLIP (\"Salesforce/instructblip-vicuna-7b\") is incorrect. The model.config.text_config.pad_token_id has to be 32000, not 0. Would it be possible to get this fixed?\r\n\r\nThis error occurs when calling instruct-blip like so over a batch of images (this error doesn't happen when calling instruct-blip over one image):\r\n```python\r\ninstruct_blip_processor = InstructBlipProcessor.from_pretrained(\"Salesforce/instructblip-vicuna-7b\")\r\n instruct_blip_model = InstructBlipForConditionalGeneration.from_pretrained(\r\n \"Salesforce/instructblip-vicuna-7b\", torch_dtype=torch.float16).to(device=\"cuda\")\r\n \r\nwith torch.no_grad():\r\n for batch in tqdm(dataloader):\r\n prompts = ['What is going on in this image?' for _ in range (batch['img'].shape[0])]\r\n inputs = instruct_blip_processor(images=batch['img'],\r\n text=prompts,\r\n return_tensors=\"pt\").to(device=\"cuda\",\r\n dtype=torch.float16)\r\n\r\n outputs = instruct_blip_model.generate(\r\n **inputs,\r\n num_beams=5,\r\n max_new_tokens=256,\r\n min_length=1,\r\n top_p=0.9,\r\n repetition_penalty=1.5,\r\n length_penalty=1.0,\r\n temperature=1,\r\n )\r\n```\r\n\r\nSpecifically, the error occurs during beam search that's triggered by this call in transformers.generation.utils.py starting on line 1627. This call is triggered from by the self.language_model.generate call in modeling_instructblip.py :\r\n```python\r\nreturn self.beam_search(\r\n input_ids,\r\n beam_scorer,\r\n logits_processor=logits_processor,\r\n stopping_criteria=stopping_criteria,\r\n pad_token_id=generation_config.pad_token_id,\r\n eos_token_id=generation_config.eos_token_id,\r\n output_scores=generation_config.output_scores,\r\n return_dict_in_generate=generation_config.return_dict_in_generate,\r\n synced_gpus=synced_gpus,\r\n **model_kwargs,\r\n )\r\n```\r\nHere, generation_config.pad_token_id has to be 32000, not -1 which it is by default. It seems to inherit the pad_token_id from the model.config.text_config. When this is run over a batch of inputs, some outputs will have 256 tokens, but some responses will have fewer tokens. The [PAD] token is used to pad each output in the batch to 256 tokens, so if the pad token is incorrect, it will error out.\r\n\r\n",
"Feel free to open a PR here: https://huggingface.co/Salesforce/instructblip-vicuna-7b/blob/main/config.json. I am not admin but you should ping Niels there! ",
"Hey,\r\n\r\nbatched generation hasn't been tested yet with InstructBLIP. Would be great to add a corresponding integration test [here](https://github.com/huggingface/transformers/blob/4e1dee0e8e06c1146d023c43812b88bfe2763329/tests/models/instructblip/test_modeling_instructblip.py#L523) for that. @ArthurZucker could you reopen this issue and perhaps label with \"good second issue\"?\r\n\r\nFor me it works fine if you set the padding token IDs properly of the language model (Vicuna-7b in this case):\r\n```\r\nmodel.config.text_config.pad_token_id = 0\r\n```\r\nhowever we need to check whether the integration tests are still passing with this update.",
"Noting here that I was getting:\r\n```OverflowError: out of range integral type conversion attempted```\r\nwhen using the `generate` and then `batch_decode` of InstructBlip.\r\n\r\nOn inspection, this was because the model was outputting `-1` tokens (which was what `model.config.text_config.pad_token_id` was set to).\r\n\r\nI fixed it with:\r\n`model.config.text_config.pad_token_id = preproc.tokenizer.pad_token_id\r\n`",
"@ArthurZucker @NielsRogge @danielamassiceti Can I work on this if it's not fixed yet??",
"Yes please. Batched generation of InstructBLIP still needs to be addressed",
"This is a problem only with vicuna model, and specifically specific indexing errors with batched inference in modelling_llama.py.\r\n\r\nI have tried a bit to diagnose this. Even with making the `model.config.text_config.pad_token_id = 0` or `model.config.text_config.pad_token_id = processor.tokenizer.pad_token_id`, it doesn't work.\r\n\r\nFor me, the errors are appearing at this line:\r\n```\r\nif inputs_embeds is None:\r\n inputs_embeds = self.embed_tokens(input_ids)\r\n```\r\nThe issue is when a \"-1\" exists in input_ids. If we revert it to 0, I don't think generation results are correct. So\r\nSample of input_ids:\r\n```\r\ntorch.Size([32, 1])\r\ntensor([[26205],\r\n [ 565],\r\n [ 366],\r\n [ 1554],\r\n [ 6635],\r\n **[ -1]**,\r\n [ 2318],\r\n [ 902],\r\n [13736],\r\n [ 683],\r\n [ 393],\r\n [ 411],\r\n [ 472],\r\n [ 322],\r\n [ 1432],\r\n [ 683],\r\n [ 2172],\r\n [ 278],\r\n [ 292],\r\n [ 322],\r\n [ 2],\r\n [ 2],\r\n [ 683],\r\n [ 322],\r\n **[ -1]**,\r\n [11203],\r\n [ 278],\r\n [ 278],\r\n [ 411],\r\n [ 263],\r\n [ 4799],\r\n [ 1361]], device='cuda:0')\r\n```\r\n\r\nThis -1 in input_ids causes indexing errors here:\r\n`return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)`\r\n\r\nAny help is appreciated in looking into this further to get to the root cause.",
"you should set `model.config.pad_token_id = -1` if you want -1 to be the pad token used by the embedding, but not by the tokenizer. Overall, negative indexing is never a good solution. Would recommend adding tokens and resizing the vocab. If padding_idx is set to -1 there should not be errors",
"There is no attribute padding_token in the model config.\r\n```\r\nprint(model.config.padding_token)\r\nAttributeError: 'InstructBlipConfig' object has no attribute 'padding_token'\r\n```\r\n\r\nI can explicitly set it as you said, but it doesn't fix the problem.\r\n\r\nJust to show the difference in token ids of model and the tokenizer, it looks like below:\r\n```\r\nprint(model.config.text_config.pad_token_id) --> -1\r\nprint(model.config.text_config.eos_token_id) --> 1\r\nprint(processor.tokenizer.pad_token_id) --> 32000\r\nprint(processor.tokenizer.eos_token_id) --> 2\r\n```\r\n\r\n**Would recommend adding tokens and resizing the vocab.** \r\n-> I can try this, but I don't think, this will solve the problem I just pointed out.",
"Sorry it's the pad token id. \r\nIf it's vicuna specific I would recommend this. However I don't really understand if the issue is with the results you obtain or with the inputs? ",
"> Noting here that I was getting: `OverflowError: out of range integral type conversion attempted` when using the `generate` and then `batch_decode` of InstructBlip.\r\n> \r\n> On inspection, this was because the model was outputting `-1` tokens (which was what `model.config.text_config.pad_token_id` was set to).\r\n> \r\n> I fixed it with: `model.config.text_config.pad_token_id = preproc.tokenizer.pad_token_id `\r\n\r\nThis is to me the solution",
"This line, where the error occurs, is within modeling_llama.py\r\n```\r\nif inputs_embeds is None:\r\n inputs_embeds = self.embed_tokens(input_ids)\r\n```\r\n\r\nSo, error occurs during generation. The inputs to the model.generate(**inputs) are fine.\r\n\r\nI have tried both solutions:\r\nmaking the model pad_token_id as 0 or same as tokenizer's pad_token_id. The problem still persists internally during generation. The -1 values that I showed in the first comment are from modeling_llama.py during generation.",
"I am not getting an overflow error, but similar error to the original issue of this thread, that is an indexing error which looks like below:\r\n```\r\n/opt/conda/conda-bld/pytorch_1699449181202/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [188,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1699449181202/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [188,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1699449181202/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [188,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n...\r\n```",
"I am not sure I understand how a model can generate a negative token? The output of a model is determined by an argmax on the logits, which is always positive. The -1 cannot possibly be generated no?",
"The pad_token_id is set to -1 in config.yaml. I found the command `model.config.text_config.pad_token_id=0 or model.config.text_config.pad_token_id = preproc.tokenizer.pad_token_id ` don't modify output's padding value(-1). I fixed it with `outputs = torch.where(outputs != -1, outputs, processor.tokenizer.pad_token_id)`",
"Hi, I've just thrown in vicuna7b:\r\n`model.config.text_config.pad_token_id = 0`\r\nbefore batch generating the output. It works fine....Don't know whether it affects the correctness of the output or not. I just see the warning of python saying changing model params might lead to significant perf drop "
] | 1,691 | 1,703 | null |
NONE
| null |
### System Info
I encountered the following error when using the generate function of the instructblip model to generate multiple samples in parallel:
```
/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 88, in forward
hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
../aten/src/ATen/native/cuda/Indexing.cu:1093: indexSelectSmallIndex: block: [24,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1093: indexSelectSmallIndex: block: [24,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1093: indexSelectSmallIndex: block: [24,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
code is:
```py
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
device = "cuda" if torch.cuda.is_available() else "cpu"
image = Image.open('./Confusing-Pictures.jpg').convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt,return_tensors="pt").to(device)
print(inputs)
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")
model.to(device)
inputs = processor(images=[image,image], text=[prompt,'another prompt'],padding=True,return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
print(outputs)
generated_text = processor.batch_decode(outputs, skip_special_tokens=False)
print(generated_text)
```
### Who can help?
@ArthurZucker @amyeroberts @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
device = "cuda" if torch.cuda.is_available() else "cpu"
image = Image.open('./Confusing-Pictures.jpg').convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt,return_tensors="pt").to(device)
print(inputs)
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")
model.to(device)
inputs = processor(images=[image,image], text=[prompt,'another prompt'],padding=True,return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
print(outputs)
generated_text = processor.batch_decode(outputs, skip_special_tokens=False)
print(generated_text)
```
### Expected behavior
no error.
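A minimal sketch of the workaround discussed in the comments above — not an officially verified recipe; it assumes a CUDA GPU, and the prompts and dummy images are placeholders:
```python
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
).to("cuda")

# The checkpoint ships with text_config.pad_token_id=-1 while the tokenizer pads
# with id 32000; aligning them avoids the out-of-range embedding lookup.
model.config.text_config.pad_token_id = processor.tokenizer.pad_token_id

images = [Image.new("RGB", (224, 224)) for _ in range(2)]  # dummy placeholder images
prompts = ["What is unusual about this image?", "Describe this image."]
inputs = processor(images=images, text=prompts, padding=True,
                   return_tensors="pt").to(device="cuda", dtype=torch.float16)

outputs = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```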
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25426/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25425/events
|
https://github.com/huggingface/transformers/issues/25425
| 1,844,324,712 |
I_kwDOCUB6oc5t7ilo
| 25,425 |
MBartForConditionalGeneration doesn't seem to be able to complete the task of filling mask.
|
{
"login": "5i-wanna-be-the-666",
"id": 58644245,
"node_id": "MDQ6VXNlcjU4NjQ0MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/58644245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/5i-wanna-be-the-666",
"html_url": "https://github.com/5i-wanna-be-the-666",
"followers_url": "https://api.github.com/users/5i-wanna-be-the-666/followers",
"following_url": "https://api.github.com/users/5i-wanna-be-the-666/following{/other_user}",
"gists_url": "https://api.github.com/users/5i-wanna-be-the-666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/5i-wanna-be-the-666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/5i-wanna-be-the-666/subscriptions",
"organizations_url": "https://api.github.com/users/5i-wanna-be-the-666/orgs",
"repos_url": "https://api.github.com/users/5i-wanna-be-the-666/repos",
"events_url": "https://api.github.com/users/5i-wanna-be-the-666/events{/privacy}",
"received_events_url": "https://api.github.com/users/5i-wanna-be-the-666/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! It seems that the model you are trying to use was not trained on `zh_ZH` but `zh_CN`. Could you try to use this instead? (It might just be the token that need to be updated). \r\n\r\nFor the second script, I don't think you changed the `src_lang` of the `tokenizer` which is not Chinese by default.\r\nI got `[',我是早上去']` as an output with:\r\n```python \r\nfrom transformers import AutoTokenizer, MBartForConditionalGeneration\r\n\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-cc25\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/mbart-large-cc25\")\r\n\r\n# de_DE is the language symbol id <LID> for German\r\nTXT = \"</s> 今天<mask>真好,我准备去公园打羽毛球. </s> zh_CN\"\r\n\r\ninput_ids = tokenizer([TXT], add_special_tokens=False, return_tensors=\"pt\")[\"input_ids\"]\r\nlogits = model(input_ids).logits\r\n\r\nmasked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()\r\nprobs = logits[0, masked_index].softmax(dim=0)\r\nvalues, predictions = probs.topk(5)\r\n\r\ntokenizer.decode(predictions).split()\r\n```\r\nwhich is already a lot better 😉 ",
"Thank you for your reply!I did get good results after changing zh_ZH to zh_CN. The reason why I think it is zh_ZH is that I accidentally read the document wrong.\r\n\r\n\r\n**But how can I solve the problem as in the last script? Even if there are multiple mask marks, I also want to know the MLM loss of this model.**",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for the late reply, recommend you to have a look at #10222 and search on our [forum](https://discuss.huggingface.co/) were this has been answered! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I also have problem with this.\r\nI want to use 【facebook/mbart-large-50-many-to-many-mmt】 to do mask filling task. But the output is always strange.\r\nI modify the input format as the Model Card from https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt suggested.\r\nMy code is as follows:\r\n```\r\nfrom transformers import (\r\nAutoTokenizer,\r\nBertForMaskedLM,\r\nMBart50TokenizerFast,\r\nMBartForConditionalGeneration,\r\nDataCollatorForLanguageModeling\r\n)\r\nmodel_name_or_path = 'my_path/mbart-large-50-many-to-many-mmt'\r\nmodel = MBartForConditionalGeneration.from_pretrained(model_name_or_path)\r\ntokenizer = MBart50TokenizerFast.from_pretrained(model_name_or_path)\r\n\r\ntokenizer.src_lang = 'en_XX'\r\n\r\nsrc = \"So that such a thing won’t happen <mask>.\"\r\nencoded_src = tokenizer([src], return_tensors=\"pt\")\r\ninput_ids = encoded_src[\"input_ids\"]\r\nsrc_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])\r\n\r\nmodel_outputs = model(**encoded_src)\r\nlogits = model_outputs.logits\r\n\r\nmasked_index = torch.nonzero((input_ids[0] == tokenizer.mask_token_id)).item()\r\nprobs = logits[0, masked_index].softmax(dim=0)\r\nvalues, predictions = probs.topk(5)\r\n\r\nprint(tokenizer.convert_ids_to_tokens(predictions))\r\n```\r\n\r\nThe output is:\r\n['.', '☎', '↔', '∏', '∴']\r\n\r\nWhen I change my input, it always output strange symbols. I think this is wrong.\r\n\r\nI am confused whether this model is not suitable for this task. How should I modify to get proper outputs? Thank you so much!"
] | 1,691 | 1,707 | 1,697 |
NONE
| null |
### System Info
transformers version: 4.29.2
Platform: Linux ubt-4090 5.15.0-75-generic
Python version: 3.9.5
PyTorch version (GPU?): 1.12.1+cu113 (True)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I followed the official Hugging Face documentation example for mask filling, I got the expected output.
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
# de_DE is the language symbol id <LID> for German
TXT = "</s> Meine Freunde sind <mask> nett aber sie essen zu viel Kuchen. </s> de_DE"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="pt")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
['nett', 'sehr', 'ganz', 'nicht', 'so']
```
But when I changed the text to be filled to Chinese, I got an unexpected result.
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
# de_DE is the language symbol id <LID> for German
TXT = "</s> 今天<mask>真好,我准备去公园打羽毛球. </s> zh_ZH"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="pt")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
[',·:.']
```

After that, I tried to get mBART to restore a sentence with multiple masks for me, and the effect was even worse.
```python
from transformers import MBartTokenizer,DataCollatorForLanguageModeling,MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
TXT_input = "<s>The weather is so nice today, I am going to play badminton in the park</s>en_xx"
inputs = tokenizer([TXT_input], add_special_tokens=False, return_tensors="pt",max_length=32, padding='max_length')
masked_inputs_and_labels = data_collator([inputs])
input_ids = masked_inputs_and_labels['input_ids'][0]
attention_mask = masked_inputs_and_labels['attention_mask'][0]
labels = masked_inputs_and_labels['labels'][0]
masked_inputs={key:value[0] for key,value in masked_inputs_and_labels.items()}
outputs = model(input_ids = input_ids,attention_mask = attention_mask,labels = labels)
logits = outputs.logits
print(f'after mask: {tokenizer.decode(masked_inputs["input_ids"][0])}')
predictions = outputs.logits.argmax(dim=-1)
print(f'Predicted sentence: {tokenizer.decode(predictions[0])}')
after mask: <s> The weather is so nice today, I am going tosähkö badminton in the park</s> en_xx<pad><pad><pad><pad><pad><pad><pad><mask><pad><pad><pad>
Predicted sentence: <s>นยยยยยนนนนนน badmintonนนนap<s><s><s><s><s><s><s><s><s><s><s><s><s><s>
```
**Is there something wrong with my usage? If so, how can I correctly use mBART to fill the mask?**
### Expected behavior
I expect mBART to return at least one Chinese token among the five highest-probability predictions, or to restore the masked sentence for me.
For example: ['天气', '心情', ...]
or: Predicted sentence: "The weather is so nice today, I am going to play badminton in the park en_xx"
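One possible way to get a denoising-style loss out of MBart — a hedged sketch only, assuming the seq2seq cross-entropy over the original sentence is an acceptable stand-in for an MLM loss; this is not an official recipe:
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="zh_CN", tgt_lang="zh_CN"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

masked = "今天<mask>真好,我准备去公园打羽毛球."
target = "今天天气真好,我准备去公园打羽毛球."

inputs = tokenizer(masked, text_target=target, return_tensors="pt")
outputs = model(**inputs)  # labels come from the tokenized target
print(outputs.loss)        # seq2seq cross-entropy, usable as an MLM-style loss
```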
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25425/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25424/events
|
https://github.com/huggingface/transformers/pull/25424
| 1,844,158,811 |
PR_kwDOCUB6oc5XlcBy
| 25,424 |
[WIP] Adding Grounding DINO
|
{
"login": "EduardoPach",
"id": 69953243,
"node_id": "MDQ6VXNlcjY5OTUzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardoPach",
"html_url": "https://github.com/EduardoPach",
"followers_url": "https://api.github.com/users/EduardoPach/followers",
"following_url": "https://api.github.com/users/EduardoPach/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions",
"organizations_url": "https://api.github.com/users/EduardoPach/orgs",
"repos_url": "https://api.github.com/users/EduardoPach/repos",
"events_url": "https://api.github.com/users/EduardoPach/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardoPach/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I clearly did something wrong so I'll make a new branch and add a new PR"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
# What does this PR do?
This PR adds Grounding DINO
Related to: #25423
To-do:
- [ ] Port vision backbone
- [ ] Port Text backbone
- [ ] Port Feature Enhancer
- [ ] Port Cross-Modality Decoder
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25424/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25424",
"html_url": "https://github.com/huggingface/transformers/pull/25424",
"diff_url": "https://github.com/huggingface/transformers/pull/25424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25424.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25423/events
|
https://github.com/huggingface/transformers/issues/25423
| 1,844,148,048 |
I_kwDOCUB6oc5t63dQ
| 25,423 |
Add Grounding DINO
|
{
"login": "EduardoPach",
"id": 69953243,
"node_id": "MDQ6VXNlcjY5OTUzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardoPach",
"html_url": "https://github.com/EduardoPach",
"followers_url": "https://api.github.com/users/EduardoPach/followers",
"following_url": "https://api.github.com/users/EduardoPach/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions",
"organizations_url": "https://api.github.com/users/EduardoPach/orgs",
"repos_url": "https://api.github.com/users/EduardoPach/repos",
"events_url": "https://api.github.com/users/EduardoPach/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardoPach/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @EduardoPach! will you be working on this or is this an issue I can claim and work on?",
"@chiral-carbon Hello, I'm already working on this issue. I'm going to create a pull request and attach here as a WIP",
"Add Grounding DINO #25424 ",
"Closed, PR due to an issue and I'm going to open another one.",
"> Closed, PR due to an issue and I'm going to open another one.\r\n\r\nWIP in #25451 \r\n"
] | 1,691 | 1,691 | null |
NONE
| null |
### Model description
Grounding DINO is a zero-shot object detection model developed by IDEA-Research, presented in the paper
[Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499)
The model uses a Swin Transformer and BERT as image and text backbones, respectively, together with a feature enhancer and a cross-modality decoder.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model implementation and weights can be found in their GitHub repo: https://github.com/IDEA-Research/GroundingDINO/tree/main#luggage-checkpoints
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25423/timeline
| null | null | null |